paper_id: stringlengths 43..43
summaries: sequence
abstractText: stringlengths 98..40k
authors: list
references: list
sections: list
year: int64 1.98k..2.02k
title: stringlengths 4..183
SP:ecc41670e8132da6dd5fdc3e75405c3060733512
[ "This paper studied the combination of federated learning tasks in a meta-learning setting. In particular, with the assistance of the pre-trained meta-model, the new FL model's training can be completed within limited communication rounds. It was inspired by the meta-learning method used in few-shot learning scenario. This paper proposed a few-round learning (FRL) algorithm and designed global prototype-assisted learning (GPAL) scheme to assist training. It is an interesting topic to combine meta-learning with federated learning. ", "This paper presents a method for using meta-learning to train an initial model, which is intended to achieve optimal performance on an arbitrary downstream tasks when trained for R rounds of federated learning (for some small R). The paper presents aa meta-learning-flavored algorithm for conducting the pretraining step, and provides theoretical bounds on the gradient of the resulting loss (Theorem 1). The paper includes some empirical results on CIFAR100, miniImageNet, and FEMNIST, with comparison to some other baselines (although no baselines exist from other works for this particular task, according to the authors).", "The paper introduces a problem that is based on both meta-learning and federated ideas. They call this few-round learning, where the goal is to prepare an initial model that can quickly adapt to any group of clients within only a few rounds of FL. The problem is in my opinion significant and has a lot of practical application. The introduce a meta-learning algorithm using few rounds of FL followed by inference, to be performed by a group of clients on a possibly unseen task. The also show experiments that the scheme outperforms existing pre-training approaches including fine-tuning via FedAvg and personalized FL in both IID and non-IID scenarios. I think the paper has core value for the FL field. " ]
In federated learning (FL), a number of distributed clients targeting the same task collaborate to train a single global model without sharing their data. The learning process typically starts from a randomly initialized model or some pretrained model. In this paper, we aim at designing an initial model based on which an arbitrary group of clients can obtain a global model for its own purpose, within only a few rounds of FL. The key challenge here is that the downstream tasks for which the pretrained model will be used are generally unknown when the initial model is prepared. Our idea is to take a meta-learning approach to construct the initial model so that any group with a possibly unseen task can obtain a high-accuracy global model within only R rounds of FL. Our meta-learning itself could be done via federated learning among willing participants and is based on an episodic arrangement to mimic the R rounds of FL followed by inference in each episode. Extensive experimental results show that our method generalizes well for arbitrary groups of clients and provides large performance improvements given the same overall communication/computation resources, compared to other baselines relying on known pretraining methods.
[ { "affiliations": [], "name": "Younghyun Park" }, { "affiliations": [], "name": "Dong-Jun Han" }, { "affiliations": [], "name": "Do-Yeon Kim" }, { "affiliations": [], "name": "Jun Seo" } ]
[ { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konecny", "Stefano Mazzocchi", "H Brendan McMahan" ], "title": "Towards federated learning at scale: System design", "venue": "arXiv preprint arXiv:1902.01046,", "year": 2019 }, { "authors": [ "Sebastian Caldas", "Sai Meher Karthik Duddu", "Peter Wu", "Tian Li", "Jakub Konečnỳ", "H Brendan McMahan", "Virginia Smith", "Ameet Talwalkar" ], "title": "Leaf: A benchmark for federated settings", "venue": "arXiv preprint arXiv:1812.01097,", "year": 2018 }, { "authors": [ "Fei Chen", "Mi Luo", "Zhenhua Dong", "Zhenguo Li", "Xiuqiang He" ], "title": "Federated meta-learning with fast convergence and efficient communication", "venue": "arXiv preprint arXiv:1802.07876,", "year": 2018 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Neel Guha", "Ameet Talwalkar", "Virginia Smith" ], "title": "One-shot federated learning", "venue": "arXiv preprint arXiv:1902.11175,", "year": 2019 }, { "authors": [ "Yihan Jiang", "Jakub Konečnỳ", "Keith Rush", "Sreeram Kannan" ], "title": "Improving federated learning personalization via model agnostic meta learning", "venue": null, "year": 1909 }, { "authors": [ "Anirudh Kasturi", "Anish Reddy Ellore", "Chittaranjan Hota" ], "title": "Fusion learning: A one shot federated learning", "venue": "In International Conference on Computational Science,", "year": 2020 }, { "authors": [ "Jakub Konecny", "H. Brendan McMahan", "Felix X. 
Yu", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: strategies for improving communication efficiency", "venue": "In NIPS Workshop on Private Multi-Party Machine Learning,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "arXiv preprint arXiv:1908.07873,", "year": 2019 }, { "authors": [ "Sen Lin", "Guang Yang", "Junshan Zhang" ], "title": "A collaborative learning framework via federated meta-learning", "venue": "arXiv preprint arXiv:2001.03229,", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2017 }, { "authors": [ "Amirhossein Reisizadeh", "Aryan Mokhtari", "Hamed Hassani", "Ali Jadbabaie", "Ramtin Pedarsani" ], "title": "Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization", "venue": null, "year": 1909 }, { "authors": [ "Felix Sattler", "Simon Wiedemann", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "Robust and communication-efficient federated learning from non-iid data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "MyungJae Shin", "Chihoon Hwang", "Joongheon Kim", "Jihong Park", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Xor mixup: Privacy-preserving data augmentation for one-shot federated learning", "venue": "arXiv preprint arXiv:2006.05148,", "year": 2020 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 } ]
[ { "heading": "1 Introduction", "text": "Today, valuable data are being collected increasingly at distributed edge nodes such as mobile phones, wearable client devices and smart vehicles/drones. Directly sending these local data to the central server for model training raises significant privacy concerns. To address this issue, an emerging trend known as federated learning (FL) [13, 9, 1, 11, 20, 16, 15], where server uploading of local data is not necessary, has been actively researched. In FL, a large group of distributed clients interested in solving the same task (e.g., classification on given categories of images) collaborate in training a single global model without sharing their data. While standard supervised learning uses some dataset D to find the model φ that would minimize a loss function f(φ,D), FL in comparison seeks the model φ that minimizes the averaged version of the local losses f(φ,Dk), computed at each node k using local data Dk. The learning process typically starts from a randomly initialized or some pretrained model and is carried out through iterative aggregation of the local model updates." }, { "heading": "1.1 Backgrounds and Main Contributions", "text": "Motivation. Unfortunately, FL generally requires a large number of communication rounds between the server and the clients for model exchange, to achieve a desired level of performance. This makes ∗Equal contribution.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nthe implementation of FL a significant challenge in bandwidth-limited or time-sensitive applications. Especially in real-time applications (e.g., connected vehicles or drones) where the model should quickly adapt to dynamically evolving environments, the requirement on many communication rounds becomes a major bottleneck.\nGoal and challenge. To tackle this problem from the service provider’s perspective, we aim to prepare an initial model that can quickly adapt to any group (focusing on its own task) of clients within only a few rounds of FL. The key challenge here is that the task of the group conducting FL (i.e., the downstream task for which the prepared model will be used) is generally not known when the service provider prepares the initial model. In the context of classification, different tasks mean classification involving different sets of classes. For example, classifying diseases A,B,C (task 1) is a different task compared to classifying diseases D,E,F (task 2). Since the group conducting FL for the downstream task can include classes that are unseen during preparation, existing FL approaches cannot tackle this problem.\nKey idea. Our key idea is to adopt meta-learning (which enables reliable prediction even when the task at inference is unseen when the model was meta-trained) to prepare the initial model that enables few-round FL. In other words, we aim to meta-train an initial model for few-round downstream FL. Once meta-training is over, the service provider would offer the trained model to some clients who want to solve a common task after collaborating through a quick few rounds of FL. These clients may or may not be the participants of the earlier meta-training phase, and their classification task is generally considered unseen during meta-training. A high-level description of our idea is depicted in Fig. 1(b). Given a small target value R, we take an episodic training approach to enable R-round FL for any group of clients. 
In essence, we find the initial model φ that would minimize the average of local losses f(θ^R(φ), D_k), where θ^R(φ) is the model to be updated from φ through R rounds of FL among future clients in the deployment stage. Despite the high practical significance of this problem formulation, to the best of our knowledge, this is the first work to propose a meta-learning strategy geared to few-round FL. It is also worth mentioning that model preparation is not a real-time requirement and can often be done when bandwidth demands are sparse.\nComparison with personalized FL. We stress that our idea has a different purpose and approach relative to the recent line of works on federated meta-learning [12, 4], which prepare an initial model for personalized optimization at local clients (see Fig. 1(a)). The goal of these approaches is to obtain a personalized local model at each client within a few steps of gradient descent, in the deployment stage. To achieve this goal, in the preparation stage, a few steps of local updates and a meta-update are first performed at each participant independently (with its own local data), and FL (or aggregation) is adopted just to take advantage of the data of various participants: these approaches seek φ that minimizes the average of local losses f(θ_k(φ), D_k), where θ_k(φ) is the local model updated from φ through a number of gradient steps using local data D_k. In contrast to personalized FL, which focuses on local client models in the deployment stage, our few-round learning inherits the ability of FL at deployment to obtain a global model. Hence, for our scheme, it is natural to adopt FL in the preparation stage to mimic the R-round FL scenario at deployment; in the preparation stage, the meta-update is performed at each participant after the collaborative R FL rounds. To sum up, our approach aims to prepare an initial model that leads to a global model within a “few rounds of FL”, while personalized FL aims for an initial model leading to personalized models within a “few steps of local updates” based only on the local data. These are obviously two completely different problems with distinct solutions.\nMain contributions. Technically, we utilize a model-agnostic meta-learning (MAML) approach to prepare the initial model via an episodic training strategy. While directly applying MAML independently to each local model leads to existing solutions on personalized FL [12, 4], in our approach, R rounds of local updates and aggregations are first performed in each episode before the meta-update process. This episode construction, unique compared to personalized FL methods, mimics the deployment stage where actual inference is preceded by an R-round FL procedure. Another key ingredient in our solution is to adopt prototype aggregation in each FL round to construct global prototypes that serve as better class representatives than the locally computed prototypes when learning the embedding space. This strategy is especially effective when a non-IID (IID: independent, identically distributed) data distribution across clients tends to induce a significantly biased model after performing local updates. The global prototypes serve as prior knowledge, a form of regularization, and prevent local models from overfitting to the local data. Moreover, the global prototypes (reflecting all classes across clients) can assist the local models in learning a more general embedding space. We call this approach a global prototype-assisted learning (GPAL) strategy. 
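To summarize the objective contrast drawn above in symbols (a restatement using the notation formalized in Section 2.2, added here only for quick reference):

```latex
% Conventional FL: one model phi fit directly to all local losses.
\min_{\phi} \ \frac{1}{N} \sum_{k=1}^{N} f(\phi, D_k)

% Personalized FL (e.g., Per-FedAvg): \theta_k(\phi) is \phi after a few
% local gradient steps on client k's own data.
\min_{\phi} \ \frac{1}{N} \sum_{k=1}^{N} f\big(\theta_k(\phi), D_k\big)

% Few-round learning (this work): \theta^R(\phi) is \phi after R rounds of
% FL run jointly by a sampled group A_t of K clients.
\min_{\phi} \ \mathbb{E}_{A_t \sim p(A)} \Big[ \frac{1}{K} \sum_{k \in A_t} f\big(\theta^R(\phi), D_k\big) \Big]
```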
Our main contributions are summarized as follows:\n• We formulate a new problem of high practical significance, namely, few-round learning, where the goal is to prepare an initial model that can quickly adapt to any group of clients within only a few rounds of FL.\n• We propose a meta-training algorithm specifically geared to R rounds of FL followed by inference, to be performed by a group of clients on a possibly unseen task.\n• We guarantee convergence of our meta-training algorithm via theoretical analysis.\n• We show via experiments that our scheme outperforms existing pretraining approaches including fine-tuning via FedAvg and personalized FL in both IID and non-IID scenarios." }, { "heading": "1.2 Related Works", "text": "Few-shot learning. Few-shot learning is an instantiation of meta-learning. In the context of image classification, few-shot learning typically involves episodic training, where the training data in each episode is arranged into a few training (support) sample images and validation (query) samples to mimic inference that uses only a few examples [19]. Through repetitive exposure to a series of varying episodes with different sets of image classes, the model learns to handle new tasks (classification against unseen classes) each time. Two widely known few-shot learning methods with different philosophical twists, which are also conceptually relevant to the present work, are MAML [5] and Prototypical Networks [18]. MAML attempts to generate an initial model from which different models targeting different tasks can be obtained quickly via just a few gradient updates. The idea is that the initial model is learned via meta-training to develop an internal representation that is close in some sense to a variety of unseen tasks. Prototypical Networks, on the other hand, learn an embedding space such that model outputs cluster around class prototypes, the class-specific centroids of the embedder outputs. With episodic training, simple Prototypical Networks are surprisingly effective in learning an inductive bias for successful generalization to new tasks.\nWe stress that our few-round learning scheme (which targets a few global rounds of FL) has a different purpose and technical approach compared to existing works on few-shot learning (which target a few shots of data samples). Nevertheless, we take advantage of both concepts, MAML and Prototypical Networks, to achieve our own goal: we adopt MAML in updating the initial model specifically geared to R-round FL, and adopt both prototype aggregation and prototype-assisted learning strategies to learn a general embedding space and successfully handle the non-IID issue in FL.\nFederated meta-learning. Recent research activity has focused on improving model personalization via federated meta-learning [12, 3, 4, 7]. The common goal of these works is to generate an initial model based on which each new client can find its own optimized model via a few local gradient steps using only its own data. In these works, the meta-learning employed during federated learning intends to enable each client to handle previously unseen tasks, in the spirit of the MAML approach of [5]. User-specific next-word prediction on individual smartphones, for example, is a possible application. Compared to this line of work, we focus on creating an initial model that leads to a high-accuracy global model, rather than personalized models. 
In this way, we seek to take advantage of a higher variety of data as well as the larger data volume that would be made available through the collaborative learning of a group of distributed nodes. A clear example is the diagnosis of a broader class of diseases, which would be possible through collaborative training across more examples contributed by a larger group of individuals. Personalized FL methods (e.g., [12, 4]) are especially at a disadvantage in non-IID settings, where each client necessarily lacks a sufficient variety of data. Comparison results are reported in Section 4.\nOne-shot FL. Another line of work has recently focused on one-shot FL, where the goal is to train a global model with just one communication round between the server and the clients. The authors of [6] proposed an ensemble method to choose reliable client-specific models from given clients. In the work of [17], local clients send XOR-encoded MNIST image data to the server, and the server decodes it to train the global model. While the server would need certain data in advance to decode the received results, the XOR operation can serve as data augmentation while preserving privacy. In the fusion learning of [8], each local client uploads both the model parameters and the distribution parameters to the server. The server generates artificial data samples from the distribution parameters to train a global model. When the data gets complex, however, it is not clear whether conversion into a simple distribution would be reliable. Compared to the existing works on one-shot FL that employ some randomly initialized model, the key difference of our method is the use of meta-learning to prepare an initial model which can adapt to the unseen tasks of individual groups of clients within R rounds of FL. The advantage of our scheme compared to these methods is shown in Section 4." }, { "heading": "2 Proposed Few-Round Learning Algorithm", "text": "" }, { "heading": "2.1 Problem Setup", "text": "Federated learning. Let N be the number of clients in the system. FL allows each distributed node k with a dataset D_k to participate in the iterative learning of a global model θ without having to reveal its data to anyone else, including the central server. As a given round r starts, each of the K participating nodes (generally chosen anew every round) downloads a global model θ^r from the server and updates it using its own local data D_k. The updated local models θ^{r+1}_k all get uploaded to the server to be aggregated into a new model $\theta^{r+1} = \sum_{k=1}^{K} \mu_k \theta^{r+1}_k$, according to the relative dataset sizes $\mu_k = |D_k| / \sum_{j=1}^{K} |D_j|$. The same process gets repeated. FL generally requires a significant number of such global rounds to achieve the desired accuracy, with each round taking up substantial communication resources.\nProblem formulation. In preparing an initial model φ for any group of clients to pursue a few FL rounds, we use meta-learning based on episodic training, where each episode is constructed to mimic R FL rounds followed by inference. Once meta-training is over, in the deployment phase, the service provider offers the trained initial model φ to any group of clients wishing to pursue inference on some common task (possibly unseen during meta-training) after collaborating for R rounds of FL.
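To make the aggregation rule recapped above concrete, the following is a minimal PyTorch sketch of the server-side FedAvg step (illustrative helper names, not code from the paper):

```python
# Weighted FedAvg aggregation: theta^{r+1} = sum_k mu_k * theta_k^{r+1},
# with mu_k = |D_k| / sum_j |D_j| (relative dataset sizes).
import copy
import torch

def fedavg_aggregate(local_models, dataset_sizes):
    total = float(sum(dataset_sizes))
    weights = [n / total for n in dataset_sizes]  # mu_k
    global_model = copy.deepcopy(local_models[0])
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            param.copy_(sum(w * dict(m.named_parameters())[name]
                            for w, m in zip(weights, local_models)))
    return global_model
```

At deployment, this same round structure is what a new group of clients runs for R rounds, starting from the meta-trained initial model φ."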
}, { "heading": "2.2 Meta-Training (Preparation Stage)", "text": "More precisely stated, our meta-training phase is to find φ that minimizes the objective function\nF (φ) = EAt∼p(A) ⎡ ⎢ ⎢ ⎢ ⎣ 1 K ∑ k∈At f(θR(φ),Dk) ⎤ ⎥ ⎥ ⎥ ⎦\n(1)\nwhere At is a specific group with K participants drawn from p(A), the distribution over all possible groups, each with K participants; θR(φ) is the model after R rounds of FL in group At, starting from φ; and Dk is the local dataset of participant k in group At. In comparison, the objective function for personalized FL methods (e.g., Per-FedAvg of [4]) is F (φ) = 1\nN ∑ N k=1 f(θk(φ),Dk) where N is the\nnumber of clients in the system and θk(φ) is the model after a few gradient steps at client k starting from φ. We also reiterate that conventional FL aims at minimizing F (φ) = 1\nN ∑ N k=1 f(φ,Dk).\nAlgorithm 1 Proposed Meta-Training Algorithm for Few-Round Learning Input: Initialized model φ0 Output: Model φT after T training episodes\n1: for each training episode t = 0,1, ..., T − 1 do 2: The server constructs a group At ∼ p(A) of K participants chosen out of N users. 3: Each participant k ∈ At splits Dk into support set Sk and query set Qk. 4: θ0 ← φt 5: for each communication round r = 0,1, ...,R − 1 do 6: for each participant k in parallel do 7: Download θr and Γr−1 from the server (download only θr when r = 0) 8: for each class c ∈ Ck do 9: Γrk(c) = 1 ∣Sk(c)∣ ∑x∈Sk(c) gθ\nr(x) // Local prototype calculation with support set Sk 10: end for 11: θr+1k ← θ r − α∇θrf(θ r, Sk) // Local update of θ with support set Sk and GPAL 12: end for 13: θr+1 = ∑Kk=1 λkθ r+1 k // Model aggregation; λk is relative support set size 14: Γr = {∑Kk=1 λkΓ r k(c)∣c = 1,2, ...,Nc} // Prototype aggregation 15: end for 16: for each participant k in parallel do 17: Download θR, ΓR−1 from the server. 18: Compute local prototypes based on Qk. 19: θ0k ← θ 0 − β∇θRf(θ R,Qk) // Local meta-update of θ0 with query set Qk and GPAL 20: end for 21: φt+1 = ∑Kk=1 µkθ 0 k // Aggregation of meta-updated models; µk is relative data size 22: end for\nBefore training begins, each client k divides its local dataset into support set Sk and query set Qk. To create a training environment matching the actual R-rounds of FL followed by inference at deployment, in each episode of our meta-training phase, we update the model over R federated rounds using the support set and then makes a final adjustment (meta-update) using the query set. In other words, the support sets are utilized for learning how to solve the task, by performing R rounds of FL. The query sets are used for evaluating the performance on this task and performing the meta-update process. This overall process is repeated as the model is exposed to a series of episodes.\nThe detailed procedure of our meta-training is given in Algorithm 1. For a quick summary, as each episode t begins, the server selects a new set of K participants. The model φt, carried over from the last episodic stage, becomes the initial model θ0 for the current episode. After R rounds of FL with each round consisting of local updates via local support sets and a global aggregation, θ0 evolves to θR. Before moving to the next episode, local meta-updates are done based on θR using the local query sets to adjust the initial model θ0, in the spirit of MAML. As these meta-updated models get aggregated to φt+1 at the server, the new episode can begin." 
}, { "heading": "2.2.1 R Rounds of Local Updates and Aggregations", "text": "In defining the loss function, we utilize the class prototypes and associated Euclidean distance metric of [18], a proven method of simple yet effective learning of embedding space. For each communication round r, we not only aggregate the global model θr+1 but also the global prototypes Γr = {Γr(c)∣c = 1,2, ...,Nc} for all classes, where Nc is the number of classes over all clients. The class prototype is the class-specific averaged feature for data samples and calculated as Line 9 in Algorithm 1.\nModel and global prototype download (Line 7). In the beginning of round r ≥ 1, the server has the global model θr and the global prototypes Γr−1 = {Γr−1(c)∣c = 1,2, ...,Nc} from the previous round r − 1. Each participant k first downloads θr and Γr−1 from the server. Since there is no global prototype in the first round, the participants only download the model θ0 when r = 0.\nLocal prototype calculation (Line 9). The local prototype of Γrk(c) for participant k is computed as in Line 9 using the downloaded model θr, the associated embedder outputs gθ corresponding to the local support samples Sk(c) labeled c. This local prototype serves as a representative of class c calculated based on the local data (support set) of client k.\nLoss calculation from local prototypes. Let Γrk be the set of all classes of prototypes at participant k: Γrk = {Γ r k(c)∣c ∈ Ck}, where Ck is a set of all classes at participant k. Now using Sk, θ\nr and Γrk, each participant k computes the local loss according to\nLSklocal (θ,Γ r k (c)) =\n1\n∑c∈Ck ∣Sk(c)∣ ∑ c∈Ck ∑ x∈Sk(c)\n{d(gθ(x),Γ r k(c)) + log ∑\nc′≠c exp( − d(gθ (x) ,Γ\nr k (c ′ ) ))},\n(2)\nbased on Euclidean distance d(⋅) between Γrk(c) and gθ(x) for x ∈ Sk(c).\nAuxiliary loss from global prototypes. Relying only on the loss function of (2) based on the local prototype tends to bias the model, especially when data distributions across different clients are non-IID. This generally leads to a performance degradation of the global model. To get around, we propose a global prototype-assisted learning (GPAL) strategy, where the global prototypes serve as prior knowledge in a form of regularization to prevent local models from overfitting to their local data. Moreover, the global prototypes, reflecting classes not limited to the local dataset, can assist the local model to learn a more general embedding space. Given the global prototypes Γr−1 = {Γr−1(c)∣c = 1,2, ...,Nc} and {gθ(x)∣x ∈ Sk}, the auxiliary loss LSkaux(θ\nr,Γr−1) can be computed by replacing local prototypes Γrk with global prototypes Γ r−1 in (2).\nLocal update based on GPAL (Line 11). Based on the local loss LSklocal(θ,Γ r k) computed using local prototypes and the auxiliary loss LSkaux(θ,Γ r−1) based on global prototypes, the objective function becomes f(θr, Sk) = γL Sk local(θ r,Γrk) + (1 − γ)L Sk aux(θ r,Γr−1) (3) where γ is a balancing coefficient. For r = 0, we have f(θr, Sk) = LSklocal(θ r,Γrk) since the global prototype is not defined in the first global round. Line 11 of Algorithm 1 performs local update accordingly, where α is the learning rate. We call this strategy GPAL.\nIn FL, the clients can perform multiple local updates, say E times. Hence, the process of local prototype computation in Line 9 of Algorithm 1, loss computation in (3) and local update of Line 11 can be repeated E times to obtain θr+1k .\nModel and prototype aggregations (Lines 13∼14). 
Model and prototype aggregations (Lines 13∼14). After performing local updates, each participant k sends its updated local model θ^{r+1}_k and the computed local prototypes Γ^r_k to the server. The server then aggregates them according to Lines 13 and 14 of Algorithm 1, where the weighting factor $\lambda_k = |S_k| / \sum_{j=1}^{K} |S_j|$ reflects the relative support set sizes.\nThe above local update and global aggregation processes are repeated for R global rounds (r = 0, 1, ..., R − 1), and the server obtains θ^R and Γ^{R−1} in a given episode." }, { "heading": "2.2.2 One-Round Local Meta-Update and Aggregation (Lines 16∼21)", "text": "Towards the end of each episode, the participants download θ^R and Γ^{R−1} from the server. Each participant k uses its query set Q_k to compute the local prototypes Γ^R_k as in Line 9. The query loss f(θ^R, Q_k) is calculated similarly to (3) based on Q_k, θ^R, Γ^{R−1} and Γ^R_k. The meta-update would call for taking the derivative of this loss with respect to θ^0:

$$\nabla_{\theta^0} f(\theta^R, Q_k) = \nabla_{\theta^R} f(\theta^R, Q_k) \times \frac{\partial \theta^R}{\partial \theta^0} = \nabla_{\theta^R} f(\theta^R, Q_k) \times \prod_{r=0}^{R-1} \sum_{j=1}^{K} \lambda_j \frac{\partial}{\partial \theta^r}\Big(\theta^r - \alpha \nabla_{\theta^r} f(\theta^r, S_j)\Big).$$

But one would need the double derivatives from other user locations as well, which is highly inconvenient. Ignoring the double-derivative terms, we simply replace ∇_{θ^0} f(θ^R, Q_k) with ∇_{θ^R} f(θ^R, Q_k), as in Line 19. This is the same as making a first-order approximation to the MAML-like meta-update, as is often done in implementations of MAML variants, including the original work of [5]. All our reported experimental results as well as the convergence analysis in the present paper reflect this choice. The server finally aggregates the meta-updated models from all participants. The next episode begins as the server selects a new set of K participants." }, { "heading": "2.3 Testing (Deployment Stage)", "text": "In the actual deployment or test phase, given a group of clients, the server sets θ^0 = φ^T and then leads R rounds of FL to obtain θ^R and Γ^{R−1}. Now, given a test sample, we make a prediction based on θ^R and Γ^{R−1}: the model output is first computed using θ^R, and then a comparison is made with the distances from all global prototypes in Γ^{R−1} to reach a decision." }, { "heading": "3 Convergence Analysis", "text": "We provide theoretical analysis to guarantee a certain convergence behavior of our meta-training algorithm for nonconvex loss functions f_k(φ) := f(φ, D_k). We need the following assumptions, commonly made in convergence analyses of FL involving meta-learning, e.g., [12, 4].\nAssumption 1. For all i, f_i is L-smooth, i.e., $\|\nabla f_i(\phi_1) - \nabla f_i(\phi_2)\| \le L \|\phi_1 - \phi_2\|$ for any φ_1, φ_2.\nAssumption 2. Let l_i(φ; x) be the loss function for a single data point x ∈ D_i of participant i. For all i = 1, 2, ..., N, the variance of the loss gradients across data samples at a given participant is bounded, i.e., $\mathbb{E}_{x \in D_i}\big[\|\nabla l_i(\phi; x) - \nabla f_i(\phi)\|^2\big] \le V_d$ for any φ.\nAssumption 3. Let $f(\phi) = \frac{1}{N}\sum_{i=1}^{N} f_i(\phi)$ be the average local loss over all participants in the system. The variance of the gradient of the loss f_i across participants is bounded, i.e., $\frac{1}{N}\sum_{i=1}^{N} \|\nabla f_i(\phi) - \nabla f(\phi)\|^2 \le V_p$ for any φ.\nTwo key lemmas and a theorem below establish the convergence of our method. All proofs are in the Supplementary Material.\nLemma 1. Assume that the learning rate α is in the range (0, 1/L]. Then, the global loss function F(φ) in (1) is $L_F$-smooth, where $L_F = L2^R$.\nLemma 2. Define the local loss of our scheme as F_k(φ) := f_k(θ^R(φ)) at participant k. For a group A with K clients, define the loss averaged within that group as $F_A(\phi) := \frac{1}{K}\sum_{k \in A} F_k(\phi)$. Assume α ∈ (0, 1/L]. 
Then, the variance of the gradient of F_A(φ) across groups is bounded as

$$\frac{1}{|\mathbb{A}|} \sum_{A \in \mathbb{A}} \big\|\nabla F_A(\phi) - \nabla F(\phi)\big\|^2 \le \frac{V_p}{K} \qquad (4)$$

where 𝔸 is the set of all possible groupings of K participants drawn from a pool of N individuals.\nTheorem 1. Suppose Assumptions 1, 2, 3 hold and α ∈ (0, 1/L]. Let |D| be the mini-batch size in the meta-update processes of all participants. Then, Algorithm 1 guarantees the following upper bound on the loss gradient associated with our learned model φ^T:

$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\big[\|\nabla F(\phi^t)\|^2\big] \le \frac{4\big(F(\phi^0) - F(\phi^*)\big)}{\beta T} + \epsilon(\beta, R, |D|, K) \qquad (5)$$

where φ^* is the optimal solution of (1) and $\epsilon(\beta, R, |D|, K) = \beta L 2^{R+2}\big(V_d |D|^{-1} + V_p K^{-1}\big)$.

As the number of episodes T increases, the upper bound of (5) settles to ε. For a given smoothness L, assumed loss gradient variance bounds (V_d, V_p) and a targeted number of FL rounds R, the error term ε is controlled by the meta-update learning rate β, the mini-batch size |D| and the per-episode number of participants K. For any reasonable value of R, practical choices of β, |D| and K can make ε sufficiently small, as discussed in the Supplementary Material using representative parameter values." }, { "heading": "4 Experiments", "text": "We validate our algorithm on CIFAR-100 [10], miniImageNet [19] and FEMNIST [2]. Following the data splits in [14], for CIFAR-100 and miniImageNet, 100 classes are divided into 64 train, 16 validation and 20 test classes. For FEMNIST, we divide the 62 classes into 52 alphabet (uppercase, lowercase) and 10 digit classes. For each class of FEMNIST, we sort the images by name and choose the first 600 samples. As a result, we have 600 samples for each class in every dataset. The 52 alphabet classes are set as train classes, while the 10 digit classes are set as test classes. The train classes are utilized in the preparation stage, and the test classes are utilized at deployment to model the unseen tasks.\nComparison schemes. First, as the simplest baseline, we consider FedAvg [13], where a randomly initialized model is trained for R FL rounds at deployment. The preparation stage is not considered for this scheme. Thus, direct performance comparison would obviously be unfair to FedAvg, but we just want to show what kind of performance improvement is possible with meta-learned initialization versus random initialization. Second, we consider FedAvg-based fine-tuning, where the model is first pretrained by conducting FedAvg in each episode during preparation, and then fine-tuned with new clients for R FL rounds via FedAvg at deployment. For example, on miniImageNet, a 64-way classifier model is first pretrained in the preparation stage. Next, the last linear layer is replaced by a Xavier-initialized layer, and then the overall model is fine-tuned to the group at deployment. We also consider fine-tuning based on one-shot FL [6], where the local models are sampled and aggregated via ensemble cross-validation. We allow a larger number of available clients (in the deployment stage) for this scheme to accommodate user sampling. The model is first pretrained via FedAvg during preparation, and then fine-tuned based on the scheme of [6] for R rounds at deployment. Finally, although comparison with personalized FL [4] is tricky as the goal is different, a global model can still be trained by repeating local updates and aggregations for R FL rounds starting from the initialized model geared to client personalization. Comparison results with this “forced” global model are given in the Supplementary Material. 
For our few-round learning (FRL), we try both a linear classifier and a distance-based classifier [18] for comparison. For the linear classifier, we connect an additional linear layer behind the CNN layers, as in the other baselines. The distance-based classifier utilizes prototypes instead of a linear layer; for this classifier, we observe the effect of our GPAL strategy. Although we utilize FedAvg for model aggregation at the server, adopting other aggregation methods that outperform FedAvg can further improve the performance of our method and the other baselines.\nPreparation stage. We assume N = 64 participants in the system in the preparation stage for CIFAR-100 and miniImageNet. We assume N = 52 for FEMNIST. For every dataset, following [13], the training data samples are arranged into 2N shards of 300 samples each, such that each shard corresponds to one image class. Each participant is given two shards, and these two shards may belong to either a common class or two distinct classes. This models non-IID data distributions across participants. To construct each episode, the server then randomly selects K = 10 out of the N participants. Each participant uses one half of its local data from each class as support samples, and the remaining half as query samples. We typically set the target number of global rounds to R = 3. Each episode of our scheme requires 4 global rounds in the meta-training phase: 3 rounds of local updates and aggregation, and 1 round of local meta-update and aggregation. For a fair comparison, we let all baselines consume the same amount of communication resources in the preparation stage: up to 40,000 communication rounds between the server and participants (other than FedAvg, which employs no preparation rounds). Hence, our scheme is meta-trained over up to 10,000 episodes, taking 4 rounds in each episode. We also reiterate that model preparation at the service provider is not a real-time requirement and can be done when bandwidth demands are sparse; this offers even more favorable performance/complexity tradeoff options for the proposed scheme.\nDeployment stage. At deployment, we distribute the initial model obtained in the preparation stage to a new group of clients. To measure the performance, we obtain the average test accuracy with a 95% confidence interval over 1000 different groups (with K = 10 clients in each group) after R rounds of FL. For testing, in one case we randomly sample τ classes from the test classes that have not been seen during preparation and distribute them across the K = 10 clients. In the other case, we randomly sample τ_u classes from the unseen test classes and τ − τ_u classes from the train classes seen during meta-learning. We consider both IID and non-IID distributions. In the IID setup, the data samples from each class are equally distributed to the K = 10 clients. In the non-IID setup, we distribute data as in the preparation stage. The support set is utilized for the R FL rounds, and the server calculates test accuracy with the global model and the gathered query sets of all clients. For the one-shot FL scheme, we allow 20 clients and the server samples K = 10 of them to aggregate. We focus on the 5-way setup (i.e., τ = 5) in the main paper, with the τ = 10 case reported in the Supplementary Material.\nImplementation details. The structure of the model follows the settings of [5] and [18], containing 4 consecutive 3 × 3 convolutional layers with 64 filters. 
Successively, each CNN output goes through batch normalization, ReLU, and 2 × 2 max pooling. In the case of CIFAR-100, where the image size is 32 × 32, the last two max pooling layers are omitted to up-scale the feature map. We adopt the SGD optimizer with a learning rate of β = 0.001 for the meta-learner and a learning rate of α = 0.0001 for the learner. We set the mini-batch size to 60 and the number of local epochs at each client to E = 1. All methods are implemented using PyTorch and trained with a single GeForce RTX 2080 Ti (a minimal sketch of this backbone is given at the end of this section).\nResults with unseen classes at deployment. Tables 1 and 2 show test accuracies averaged over 1000 different groups after R = 3 global rounds at deployment, where the goal of each group is to classify τ = 5 classes that were unseen during preparation. First, it can be seen that FedAvg yields significantly lower accuracy than the others, as expected, since it uses a randomly initialized model for training. By pretraining the model, FedAvg-based fine-tuning gives significant performance gains compared to the naive application of FedAvg, underscoring the importance of initialization efforts. The fine-tuning scheme based on one-shot FL shows further performance improvements in the IID setup. However, since K = 10 clients are sampled from 20 clients for this method, there may exist some unseen classes when building the global model in the non-IID setup, which lowers the performance compared to fine-tuned FedAvg. Our FRL algorithm performs the best, with the distance-based classifier showing better accuracy than the linear classifier. The relative gains of our method are particularly strong in the non-IID case. It can also be seen that the performance of the global model can be further improved by our GPAL strategy. Fig. 2 shows how the final test accuracy (after 3 fixed FL rounds at deployment) improves with the number of communication rounds in the preparation stage. The overall results in Tables 1, 2 and Fig. 2 confirm the advantage of exploiting meta-learning and global prototype-assisted learning to facilitate few-round FL.\nResults with both unseen/seen classes at deployment. In Table 3, we report test accuracies with both unseen and seen classes at deployment; the goal of each group is to classify τ = 5 classes, 2 from the unseen classes and 3 from the seen classes. Since the tasks also involve classes already seen during preparation, the accuracies are generally higher than in Tables 1 and 2. The trend is consistent with the results in Tables 1 and 2, confirming the advantage of the proposed algorithm.\nEffect of global prototype-assisted learning. To further understand the effect of our GPAL method, we visualize the t-SNE of the embedding space at a client in Fig. 3. CIFAR-100 is considered, with each client having two classes in its local data in a non-IID setup. When only local prototypes are used for training, as in Fig. 3(b), it can be seen that the two classes of the client form clusters without considering the data samples of other clients (but still well-separated). By considering the global prototypes (reflecting the classes of all participants), in Fig. 3(c), the data points on the local client form clusters while staying away from all other global prototypes, a clearly desirable trait. This prevents the local model from being biased toward its local data and enables the local model to learn a more general embedding space compared to the case in Fig. 3(b), which considers only the local prototypes.
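As referenced under Implementation details above, the following is a minimal PyTorch sketch of the conv-4 embedding backbone described there (an illustrative reconstruction from the text, not the authors' released code):

```python
# Four 3x3 conv blocks with 64 filters, each followed by batch normalization,
# ReLU and 2x2 max pooling; the last two poolings are omitted for 32x32
# CIFAR-100 inputs to keep the feature map larger.
import torch.nn as nn

def conv_block(in_ch, out_ch, pool):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU()]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

def conv4_embedder(in_channels=3, cifar=True):
    pools = [True, True, not cifar, not cifar]
    blocks = [conv_block(in_channels if i == 0 else 64, 64, p)
              for i, p in enumerate(pools)]
    return nn.Sequential(*blocks, nn.Flatten())
```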
Other experimental results. Additional results on other settings, including higher-way classification, larger group sizes and mismatched R, are reported in the Supplementary Material. Comparison with the “forced” global model based on the personalization scheme is also shown in the Supplementary Material." }, { "heading": "5 Conclusion", "text": "We proposed a meta-learning strategy to prepare an initial model geared to few-round federated learning. Given a group of clients with a new task, our meta-trained model generalizes well within only a few FL rounds. Convergence of our meta-training is guaranteed through theoretical analysis. Extensive experimental results confirm significant advantages of our idea over different baselines, such as FedAvg-based fine-tuning and personalized FL, in various setups. Our solution offers a promising direction for FL in practice, where minimizing the training time and communication resources required in real time is among the key challenges." }, { "heading": "Acknowledgments", "text": "This work was supported by an IITP fund from MSIT of Korea (No. 2020-0-00626) and by the National Research Foundation of Korea (No. 2019R1I1A2A02061135)." } ]
2022
Few-Round Learning for Federated Learning
SP:a7dd38170e565b5450928720a51a50952ce48d86
[ "Graph neural networks and federated learning are both promising directions of works individually. This papers is apparently one of the first few attempts to combine them for spatio-temporal data modeling. The time series data in the local nodes is modelled by an Encoder-Decoder architecture and spatial locality property of various nodes is captured by the server. The Encoder at each node projects the time series data into an embedding space. This embedding is used by the GNN at the server as node features. The server side GNN outputs node embeddings. The Encoder embeddings and the GNN embeddings are then concatenated and fed to the decoder that predicts the outputs for the subsequent time steps. To ensure that all the nodes encode their temporal data in a common space, the encoders are shared by the clients. Overall, the results look promising." ]
The vast amount of data generated from networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given the need for edge computation and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities still remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling the temporal dynamics modeling on devices and the spatial dynamics on the server, utilizing alternating optimization to reduce the communication cost and facilitating computations on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
[]
[ { "authors": [ "Alekh Agarwal", "Animashree Anandkumar", "Prateek Jain", "Praneeth Netrapalli", "Rashish Tandon" ], "title": "Learning sparsely used overcomplete dictionaries", "venue": "In Conference on Learning Theory,", "year": 2014 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Ankur Moitra" ], "title": "New algorithms for learning incoherent and overcomplete dictionaries", "venue": "In Conference on Learning Theory, pp", "year": 2014 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Tengyu Ma", "Ankur Moitra" ], "title": "Simple, efficient, and neural algorithms for sparse coding", "venue": null, "year": 2015 }, { "authors": [ "Omri Azencot", "N Benjamin Erichson", "Vanessa Lin", "Michael W Mahoney" ], "title": "Forecasting sequential data using consistent koopman autoencoders", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chaoyang He", "Salman Avestimehr", "Murali Annavaram" ], "title": "Group knowledge transfer: Collaborative training of large cnns on the edge", "venue": "arXiv preprint arXiv:2007.14513,", "year": 2020 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "Scaffold: Stochastic controlled averaging for federated learning", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard S Zemel" ], "title": "Neural relational inference for interacting systems", "venue": null, "year": 2018 }, { "authors": [ "Max Guangyu Li", "Bo Jiang", "Hao Zhu", "Zhengping Che", "Yan Liu" ], "title": "Generative attention networks for multi-agent behavioral modeling", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "In Proceedings of the 3rd MLSys Conference,", "year": 2020 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "In International Conference on Learning Representations (ICLR", "year": 2018 }, { "authors": [ "Paul Pu Liang", "Terrance Liu", "Liu Ziyin", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ], "title": "Think locally, act globally: Federated learning with local and global representations", "venue": "arXiv preprint arXiv:2001.01523,", "year": 2020 }, { "authors": [ "Ziyu Liu", "Hongwen Zhang", "Zhenghao Chen", "Zhiyong Wang", "Wanli Ouyang" ], "title": "Disentangling and unifying graph convolutions for skeleton-based action recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Guangxu Mei", "Ziyu Guo", "Shijun Liu", "Li Pan" ], "title": "Sgnn: A graph neural network based federated learning approach by hiding structure", "venue": "IEEE International Conference on Big Data (Big Data),", "year": 2019 }, { "authors": [ "Sina Sajadmanesh", "Daniel Gatica-Perez" ], "title": "When differential privacy meets graph neural networks", "venue": "arXiv preprint arXiv:2006.05535,", "year": 2020 }, { "authors": [ "Sungyong Seo", "Chuizheng Meng", "Yan Liu" ], "title": "Physics-aware difference graph networks for sparsely-observed dynamics", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Abhishek Singh", "Praneeth Vepakomma", "Otkrist Gupta", "Ramesh Raskar" ], "title": "Detailed comparison of communication efficiency of split learning and federated learning", "venue": null, "year": 1909 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Toyotaro Suzumura", "Yi Zhou", "Natahalie Barcardo", "Guangnan Ye", "Keith Houck", "Ryo Kawahara", "Ali Anwar", "Lucia Larise Stavarache", "Daniel Klyashtorny", "Heiko Ludwig" ], "title": "Towards federated graph learning for collaborative financial crimes detection", "venue": "arXiv preprint arXiv:1909.12946,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Jingling Li", "Mozhi Zhang", "Simon S Du", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "What can neural networks reason 
about", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Sijie Yan", "Yuanjun Xiong", "Dahua Lin" ], "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Jure Leskovec" ], "title": "Position-aware graph neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Jun Zhou", "Chaochao Chen", "Longfei Zheng", "Xiaolin Zheng", "Bingzhe Wu", "Ziqi Liu", "Li Wang" ], "title": "Privacy-preserving graph neural network for node classification", "venue": "arXiv preprint arXiv:2005.11903,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modeling the dynamics of spatio-temporal data generated from networks of edge devices or nodes (e.g. sensors, wearable devices and the Internet of Things (IoT) devices) is critical for various applications including traffic flow prediction (Li et al., 2018; Yu et al., 2018), forecasting (Seo et al., 2019; Azencot et al., 2020), and user activity detection (Yan et al., 2018; Liu et al., 2020). While existing works on spatio-temporal dynamics modeling (Battaglia et al., 2016; Kipf et al., 2018; Battaglia et al., 2018) assume that the model is trained with centralized data gathered from all devices, the volume of data generated at these edge devices precludes the use of such centralized data processing, and calls for decentralized processing where computations on the edge can lead to significant gains in improving the latency. In addition, in case of spatio-temporal forecasting, the edge devices need to leverage the complex inter-dependencies to improve the prediction performance. Moreover, with increasing concerns about data privacy and its access restrictions due to existing licensing agreements, it is critical for spatio-temporal modeling to utilize decentralized data, yet leveraging the underlying relationships for improved performance.\nAlthough recent works in federated learning (FL) (Kairouz et al., 2019) provides a solution for training a model with decentralized data on multiple devices, these works either do not consider the inherent spatio-temporal dependencies (McMahan et al., 2017; Li et al., 2020b; Karimireddy et al., 2020) or only model it implicitly by imposing the graph structure in the regularization on model weights (Smith et al., 2017), the latter of which suffers from the limitation of regularization based methods due to the assumption that graphs only encode similarity of nodes (Kipf & Welling, 2017), and cannot operate in settings where only a fraction of devices are observed during training (inductive learning setting). As a result, there is a need for an architecture for spatio-temporal data modeling which enables reliable computation on the edge, while maintaining the data decentralized.\nTo this end, leveraging recent works on federated learning (Kairouz et al., 2019), we introduce the cross-node federated learning requirement to ensure that data generated locally at a node remains decentralized. Specifically, our architecture – Cross-Node Federated Graph Neural Network (CNFGNN), aims to effectively model the complex spatio-temporal dependencies under the cross-node federated learning constraint. For this, CNFGNN decomposes the modeling of temporal and spatial dependencies using an encoder-decoder model on each device to extract the temporal features with local data, and a Graph Neural Network (GNN) based model on the server to capture spatial dependencies among devices.\nAs compared to existing federated learning techniques that rely on regularization to incorporate spatial relationships, CNFGNN leverages an explicit graph structure using a graph neural networkbased (GNNs) architecture, which leads to performance gains. However, the federated learning (data sharing) constraint means that the GNN cannot be trained in a centralized manner, since each node can only access the data stored on itself. To address this, CNFGNN employs Split Learning (Singh et al., 2019) to train the spatial and temporal modules. 
Further, to alleviate the high communication cost incurred by Split Learning, we propose an alternating optimization-based training procedure for these modules, which incurs only half the communication overhead of a comparable Split Learning architecture. Here, we also use Federated Averaging (FedAvg) (McMahan et al., 2017) to train a shared temporal feature extractor for all nodes, which leads to improved empirical performance.\nOur main contributions are as follows:\n1. We propose Cross-Node Federated Graph Neural Network (CNFGNN), a GNN-based federated learning architecture that captures complex spatio-temporal relationships among multiple nodes while ensuring that the data generated locally remains decentralized, at no extra computation cost on the edge devices.\n2. Our modeling and training procedure enables GNN-based architectures to be used in federated learning settings. We achieve this by disentangling the modeling of local temporal dynamics on edge devices from that of spatial dynamics on the central server, and by leveraging an alternating optimization-based procedure that updates the spatial and temporal modules using Split Learning and Federated Averaging to enable effective GNN-based federated learning.\n3. We demonstrate that CNFGNN achieves the best prediction performance (both in transductive and inductive settings) at no extra computation cost on edge devices and with modest communication cost, as compared to related techniques on a traffic flow prediction task." }, { "heading": "2 RELATED WORK", "text": "Our method derives elements from graph neural networks, federated learning and privacy-preserving graph learning; we now discuss related works in these areas in relation to our work.\nGraph Neural Networks (GNNs). GNNs have shown superior performance on various learning tasks with graph-structured data, including graph embedding (Hamilton et al., 2017), node classification (Kipf & Welling, 2017), spatio-temporal data modeling (Yan et al., 2018; Li et al., 2018; Yu et al., 2018) and multi-agent trajectory prediction (Battaglia et al., 2016; Kipf et al., 2018; Li et al., 2020a). Recent GNN models (Hamilton et al., 2017; Ying et al., 2018; You et al., 2019; Huang et al., 2018) also incorporate sampling strategies and are able to scale to large graphs. While GNNs benefit from a strong inductive bias (Battaglia et al., 2018; Xu et al., 2019), most works require centralized data during the training and inference processes.\nFederated Learning (FL). Federated learning is a machine learning setting where multiple clients train a model in collaboration with decentralized training data (Kairouz et al., 2019). It requires that the raw data of each client is stored locally without any exchange or transfer. However, decentralized training data comes at the cost of reduced utilization, due to the heterogeneous distributions of data across clients and the lack of information exchange among clients. Various optimization algorithms have been developed for federated learning on non-IID and unbalanced data (McMahan et al., 2017; Li et al., 2020b; Karimireddy et al., 2020). Smith et al. (2017) propose a multi-task learning framework that captures relationships amongst data. While the above works mitigate the lack of neighbors' information to some extent, they are not as effective as GNN models and still suffer from the absence of feature exchange and aggregation.\nAlternating Optimization. 
Alternating optimization is a popular choice in non-convex optimization (Agarwal et al., 2014; Arora et al., 2014; 2015; Jain & Kar, 2017). In the context of federated learning, Liang et al. (2020) use alternating optimization for learning a simple global model with a reduced number of communicated parameters, and He et al. (2020) use alternating optimization for knowledge distillation from server models to edge models. In our work, we utilize alternating optimization to effectively train the on-device modules and the server module jointly, which capture temporal and spatial relationships respectively.\nPrivacy-Preserving Graph Learning. Suzumura et al. (2019) and Mei et al. (2019) use statistics of graph structures instead of node information exchange and aggregation to avoid the leakage of node information. Recent works have also combined graph learning models with privacy-preserving techniques such as Differential Privacy (DP), Secure Multi-Party Computation (MPC) and Homomorphic Encryption (HE). Zhou et al. (2020) utilize MPC and HE when learning a GNN model for node classification with vertically split data, preserving silo-level privacy instead of node-level privacy. Sajadmanesh & Gatica-Perez (2020) preprocess the input raw data with DP before feeding it into a GNN model. Composing privacy-preserving techniques for graph learning can help build federated learning systems following the privacy-in-depth principle, wherein the privacy properties degrade as gracefully as possible if one technique fails (Kairouz et al., 2019)." }, { "heading": "3 CROSS-NODE FEDERATED GRAPH NEURAL NETWORK", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "Given a dataset with a graph G = (V, E), a feature tensor X ∈ R^{|V|×...} and a label tensor Y ∈ R^{|V|×...}, we consider learning a model under the cross-node federated learning constraint: the node feature x_i = X_{i,...}, node label y_i = Y_{i,...}, and model output ŷ_i are only visible to node i.\nOne typical task that requires the cross-node federated learning constraint is the prediction of spatio-temporal data generated by a network of sensors. In such a scenario, V is the set of sensors and E describes relations among sensors (e.g. e_{ij} ∈ E if and only if the distance between v_i and v_j is below some threshold). The feature tensor x_i ∈ R^{m×D} represents the i-th sensor's records in the D-dim space during the past m time steps, and the label y_i ∈ R^{n×D} represents the i-th sensor's records in the future n time steps. Since records collected on different sensors owned by different users/organizations may not be allowed to be shared, due to the need for edge computation or licensing issues on data access, it is necessary to design an algorithm that models the spatio-temporal relations without any direct exchange of node-level data." }, { "heading": "3.2 PROPOSED METHOD", "text": "We now introduce our proposed Cross-Node Federated Graph Neural Network (CNFGNN) model. We begin by disentangling the modeling of node-level temporal dynamics and server-level spatial dynamics as follows: (i) (Figure 1c) on each node, an encoder-decoder model extracts temporal features from data on the node and makes predictions; (ii) (Figure 1b) on the central server, a Graph Network (GN) (Battaglia et al., 2018) propagates the extracted node temporal features and outputs node embeddings, which incorporate the relational information among nodes. (i) has access to the non-shareable node data and is executed on each node locally. 
(ii) only involves the upload and download of smashed features and gradients instead of the raw data on nodes. This decomposition enables the exchange and aggregation of node information under the cross-node federated learning constraint." }, { "heading": "3.2.1 MODELING OF NODE-LEVEL TEMPORAL DYNAMICS", "text": "We modify the Gated Recurrent Unit (GRU) based encoder-decoder architecture in (Cho et al., 2014) for the modeling of node-level temporal dynamics on each node. Given an input sequence x_i ∈ R^{m×D} on the i-th node, an encoder sequentially reads the whole sequence and outputs the hidden state h_{c,i} as the summary of the input sequence, according to Equation 1:\nh_{c,i} = Encoder_i(x_i, h^{(0)}_{c,i}), (1)\nwhere h^{(0)}_{c,i} is a zero-valued initial hidden state vector.\nTo incorporate the spatial dynamics into the prediction model of each node, we concatenate h_{c,i} with the node embedding h_{G,c,i} generated from the procedure described in 3.2.2, which contains spatial information, as the initial state vector of the decoder. The decoder generates the prediction ŷ_i in an auto-regressive way, starting from the last frame x_{i,m} of the input sequence, with the concatenated hidden state vector:\nŷ_i = Decoder_i(x_{i,m}, [h_{c,i}; h_{G,c,i}]). (2)\nWe choose the mean squared error (MSE) between the prediction and the ground truth values as the loss function, which is evaluated on each node locally." }, { "heading": "3.2.2 MODELING OF SPATIAL DYNAMICS", "text": "To capture the complex spatial dynamics, we adopt Graph Networks (GNs) proposed in (Battaglia et al., 2018) to generate node embeddings containing the relational information of all nodes. The central server collects the hidden states from all nodes {h_{c,i} | i ∈ V} as the input to the GN. Each layer of the GN updates the input features as follows:\ne'_k = φ^e(e_k, v_{r_k}, v_{s_k}, u),  ē'_i = ρ^{e→v}(E'_i),  v'_i = φ^v(ē'_i, v_i, u),  ē' = ρ^{e→u}(E'),  v̄' = ρ^{v→u}(V'),  u' = φ^u(ē', v̄', u), (3)\nwhere e_k, v_i, u are edge features, node features and global features respectively; φ^e, φ^v, φ^u are neural networks; and ρ^{e→v}, ρ^{e→u}, ρ^{v→u} are aggregation functions such as summation. As shown in Figure 1b, we choose a 2-layer GN with residual connections for all experiments. We set v_i = h_{c,i} and e_k = W_{r_k,s_k} (W is the adjacency matrix), and assign the empty vector to u as the input of the first GN layer. The server-side GN outputs embeddings {h_{G,c,i} | i ∈ V} for all nodes, and sends the embedding of each node back correspondingly.\nAlgorithm 1 Training algorithm of CNFGNN on the server side.\nServer executes:\n1: Initialize server-side GN weights θ_GN^{(0)} and client model weights θ̄_c^{(0)} = {θ̄_c^{(0),enc}, θ̄_c^{(0),dec}}.\n2: for each node i ∈ V in parallel do\n3:   Initialize client model θ_{c,i}^{(0)} = θ̄_c^{(0)}.\n4:   Initialize graph encoding on node h_{G,c,i} = h_{G,c,i}^{(0)}.\n5: end for\n6: for global round r_g = 1, 2, ..., R_g do\n7:   // (1) Federated learning of on-node models.\n8:   for each client i ∈ V in parallel do\n9:     θ_{c,i} ← ClientUpdate(i).\n10:   end for\n11:   θ̄_c ← Σ_{i∈V} (N_i/N) θ_{c,i}.\n12:   for each client i ∈ V in parallel do\n13:     Initialize client model: θ_{c,i}^{(0)} = θ̄_c.\n14:   end for\n15:   // (2) Temporal encoding update.\n16:   for each client i ∈ V in parallel do\n17:     h_{c,i} ← ClientEncode(i).\n18:   end for\n19:   // (3) Split Learning of GN.\n20:   Initialize θ_GN^{(r_g,0)} = θ_GN^{(r_g−1)}.\n21:   for server round r_s = 1, 2, ..., R_s do\n22:     {h_{G,c,i} | i ∈ V} ← GN({h_{c,i} | i ∈ V}; θ_GN^{(r_g,r_s−1)}).\n23:     for each client i ∈ V in parallel do\n24:       ∇_{h_{G,c,i}} ℓ_i ← ClientBackward(i, h_{G,c,i}).\n25:       ∇_{θ_GN^{(r_g,r_s−1)}} ℓ_i ← h_{G,c,i}.backward(∇_{h_{G,c,i}} ℓ_i).\n26:     end for\n27:     ∇_{θ_GN^{(r_g,r_s−1)}} ℓ ← Σ_{i∈V} ∇_{θ_GN^{(r_g,r_s−1)}} ℓ_i.\n28:     θ_GN^{(r_g,r_s)} ← θ_GN^{(r_g,r_s−1)} − η_s ∇_{θ_GN^{(r_g,r_s−1)}} ℓ.\n29:   end for\n30:   θ_GN^{(r_g)} ← θ_GN^{(r_g,R_s)}.\n31:   // (4) On-node graph embedding update.\n32:   {h_{G,c,i} | i ∈ V} ← GN({h_{c,i} | i ∈ V}; θ_GN^{(r_g)}).\n33:   for each client i ∈ V in parallel do\n34:     Set graph encoding on client as h_{G,c,i}.\n35:   end for\n36: end for\nAlgorithm 2 Training algorithm of CNFGNN on the client side.\nClientUpdate(i):\n1: for client round r_c = 1, 2, ..., R_c do\n2:   h_{c,i}^{(r_c)} ← Encoder_i(x_i; θ_{c,i}^{(r_c−1),enc}).\n3:   ŷ_i ← Decoder_i(x_{i,m}, [h_{c,i}^{(r_c)}; h_{G,c,i}]; θ_{c,i}^{(r_c−1),dec}).\n4:   ℓ_i ← ℓ(ŷ_i, y_i).\n5:   θ_{c,i}^{(r_c)} ← θ_{c,i}^{(r_c−1)} − η_c ∇_{θ_{c,i}^{(r_c−1)}} ℓ_i.\n6: end for\n7: θ_{c,i} = θ_{c,i}^{(R_c)}.\n8: return θ_{c,i} to server.\nClientEncode(i):\n1: return h_{c,i} = Encoder_i(x_i; θ_{c,i}^{enc}) to server.\nClientBackward(i, h_{G,c,i}):\n1: ŷ_i ← Decoder_i(x_{i,m}, [h_{c,i}; h_{G,c,i}]; θ_{c,i}^{dec}).\n2: ℓ_i ← ℓ(ŷ_i, y_i).\n3: return ∇_{h_{G,c,i}} ℓ_i to server.
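To make the server-client control flow of Algorithms 1 and 2 concrete, the following is a minimal Python/PyTorch-style sketch of one global round. All names here (clients[i], local_update, backward_wrt_embedding, set_embedding, etc.) are illustrative placeholders of ours rather than the authors' released code, and details such as batching and device placement are omitted.
import torch

def global_round(clients, gn, gn_opt, fedavg_weights, Rs):
    # (1) Federated learning of on-node models: local updates, then FedAvg.
    states = [c.local_update() for c in clients]              # ClientUpdate(i)
    avg = {k: sum(w * s[k] for w, s in zip(fedavg_weights, states))
           for k in states[0]}                                # weighted average of weights
    for c in clients:
        c.load_state(avg)
    # (2) Temporal encoding update: each client re-encodes its local sequence.
    h_c = torch.stack([c.encode().detach() for c in clients]) # ClientEncode(i)
    # (3) Split Learning of the GN: only embeddings and gradients are exchanged.
    for _ in range(Rs):
        gn_opt.zero_grad()
        h_g = gn(h_c)                                         # node embeddings
        grads = [c.backward_wrt_embedding(h_g[i].detach())    # ClientBackward(i)
                 for i, c in enumerate(clients)]
        h_g.backward(torch.stack(grads))                      # gradient w.r.t. GN weights
        gn_opt.step()
    # (4) On-node graph embedding update: embeddings are fixed for the next round.
    with torch.no_grad():
        h_g = gn(h_c)
    for i, c in enumerate(clients):
        c.set_embedding(h_g[i])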
" }, { "heading": "3.2.3 ALTERNATING TRAINING OF NODE-LEVEL AND SPATIAL MODELS", "text": "One challenge brought about by the cross-node federated learning requirement and the server-side GN model is the high communication cost in the training stage. Since we distribute different parts of the model on different devices, Split Learning proposed by (Singh et al., 2019) is a potential solution for training, where hidden vectors and gradients are communicated among devices. However, when we simply train the model end-to-end via Split Learning, the central server needs to receive hidden states from all nodes and to send node embeddings to all nodes in the forward propagation; then it must receive gradients of node embeddings from all nodes and send back gradients of hidden states to all nodes in the backward propagation. Assuming all hidden states and node embeddings have the same size S, the total amount of data transmitted in each training round of the GN model is 4|V|S.\nTo alleviate the high communication cost in the training stage, we instead alternately train the models on nodes and the GN model on the server. More specifically, in each round of training, we (1) fix the node embedding h_{G,c,i} and optimize the encoder-decoder model for R_c rounds, then (2) optimize the GN model while fixing all models on nodes. Since the models on nodes are fixed, h_{c,i} stays constant during the training of the GN model; the server only needs to fetch h_{c,i} from the nodes once before the training of the GN starts, and afterwards only communicates node embeddings and their gradients. Therefore, the average amount of data transmitted per round over R_s rounds of training of the GN model reduces to ((2 + 2R_s)/R_s)|V|S. We provide more details of the training procedure in Algorithm 1 and Algorithm 2.\nTo more effectively extract temporal features from each node, we also train the encoder-decoder models on nodes with the FedAvg algorithm proposed in (McMahan et al., 2017). This enables all nodes to share the same feature extractor and thus a joint hidden space of temporal features, which avoids the potential overfitting of models on nodes, and empirically demonstrates faster convergence and better prediction performance." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the performance of CNFGNN and all baseline methods on the traffic forecasting task, which is an important application for spatio-temporal data modeling. 
We reuse the following two real-world large-scale datasets from (Li et al., 2018) and follow the same preprocessing procedures: (1) PEMS-BAY: this dataset contains the traffic speed readings from 325 sensors in the Bay Area over 6 months, from Jan 1st, 2017 to May 31st, 2017. (2) METR-LA: this dataset contains the traffic speed readings from 207 loop detectors installed on the highways of Los Angeles County over 4 months, from Mar 1st, 2012 to Jun 30th, 2012.\nFor both datasets, we construct the adjacency matrix of sensors using a Gaussian kernel with a threshold: W_{i,j} = d_{i,j} if d_{i,j} ≥ κ else 0, where d_{i,j} = exp(−dist(v_i, v_j)²/σ²), dist(v_i, v_j) is the road network distance from sensor v_i to sensor v_j, σ is the standard deviation of the distances, and κ is the threshold. We set κ = 0.1 for both datasets.\nWe aggregate the traffic speed readings in both datasets into 5-minute windows and truncate the whole sequence into multiple sequences of length 24. The forecasting task is to predict the traffic speed in the following 12 steps of each sequence given the first 12 steps. We show the statistics of both datasets in Table 1." }, { "heading": "4.1 SPATIO-TEMPORAL DATA MODELING: TRAFFIC FLOW FORECASTING", "text": "Baselines We compare CNFGNN with the following baselines. (1) GRU (centralized): a Gated Recurrent Unit (GRU) model trained with centralized sensor data. (2) GRU + GN (centralized): a model directly combining GRU and GN trained with centralized data, whose architecture is similar to CNFGNN except that all GRU modules on nodes always share the same weights. We view its performance as an upper bound on the performance of CNFGNN. (3) GRU (local): for each node we train a GRU model with only the local data on it. (4) GRU + FedAvg: a GRU model trained with the Federated Averaging algorithm (McMahan et al., 2017). (5) GRU + FMTL: for each node we train a GRU model using federated multi-task learning (FMTL) with the cluster regularization (Smith et al., 2017) given by the adjacency matrix. For each baseline, we have 2 variants of the GRU model to show the effect of on-device model complexity: one with 63K parameters and the other with 727K parameters. For CNFGNN, the encoder-decoder model on each node has 64K parameters and the GN model has 1M parameters.\nDiscussion Table 2 shows the comparison of forecasting performance, and Table 3 shows the comparison of on-device computation cost and communication cost of CNFGNN and the baselines. We make the following observations. Firstly, when we compare the best forecasting performance of each baseline over the 2 GRU variants, GRU trained with FedAvg performs the worst compared to GRU trained with centralized data and GRU trained with local data (4.432 vs 4.010/4.124 on PEMS-BAY and 12.058 vs 11.730/11.801 on METR-LA), showing that the data distributions on different nodes are highly heterogeneous, and that training one single model ignoring this heterogeneity is suboptimal.\nSecondly, both the GRU + FMTL baseline and CNFGNN consider the spatial relations among nodes and show better forecasting performance than the baselines without relational information. This shows that the modeling of spatial dependencies is critical for the forecasting task.\nLastly, CNFGNN achieves the lowest forecasting error on both datasets. The baseline that increases the complexity of on-device models (GRU (727K) + FMTL) gains only slight or even no improvement, at the cost of higher computation cost on edge devices and larger communication cost. However, due to its effective modeling of spatial dependencies in the data, CNFGNN not only achieves the largest improvement in forecasting performance, but also keeps the computation cost on devices almost unchanged and maintains a modest communication cost, compared to the baselines that increase the model complexity on devices.
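Returning to the dataset preprocessing described at the beginning of this section, the following is a small sketch of the thresholded Gaussian-kernel adjacency construction; the pairwise road-network distance matrix is assumed to be given, and the function name is ours.
import numpy as np

def gaussian_adjacency(dist, kappa=0.1):
    # dist[i, j]: road-network distance from sensor v_i to sensor v_j.
    sigma = dist.std()                       # standard deviation of the distances
    d = np.exp(-(dist ** 2) / sigma ** 2)    # Gaussian kernel weights d_ij in (0, 1]
    return np.where(d >= kappa, d, 0.0)      # zero out edges below the threshold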
" }, { "heading": "4.2 INDUCTIVE LEARNING ON UNSEEN NODES", "text": "Set-up Another advantage of CNFGNN is that it can conduct inductive learning and generalize to larger graphs with nodes unobserved during the training stage. We evaluate the performance of CNFGNN under the following inductive learning setting: for each dataset, we first sort all sensors based on longitude, then use the subgraph on the first η% of sensors to train the model and evaluate the trained model on the entire graph. For each dataset we select η% = 25%, 50%, 75%. Among all baselines following the cross-node federated learning constraint, GRU (local) and GRU + FMTL require training new models on unseen nodes, and only GRU + FedAvg is applicable to the inductive learning setting.\nDiscussion Table 4 shows the inductive learning performance of CNFGNN and the GRU + FedAvg baseline on both datasets. We observe that under most settings, CNFGNN outperforms the GRU + FedAvg baseline (except on the METR-LA dataset with 25% of nodes observed in training, where both models perform similarly), showing that CNFGNN has the stronger generalization ability." }, { "heading": "4.3 ABLATION STUDY: EFFECT OF ALTERNATING TRAINING AND FEDAVG ON NODE-LEVEL AND SPATIAL MODELS", "text": "Baselines We compare the effect of different training strategies of CNFGNN: (1) Centralized: CNFGNN trained with centralized data, where all nodes share one single encoder-decoder. (2) Split Learning (SL): CNFGNN trained with split learning (Singh et al., 2019), where the models on nodes and the model on the server are jointly trained by exchanging hidden vectors and gradients. (3) Split Learning + FedAvg (SL + FedAvg): a variant of SL that synchronizes the weights of the encoder-decoder modules periodically with FedAvg. (4) Alternating training without Federated Averaging of models on nodes (AT, w/o FedAvg). (5) Alternating training with Federated Averaging on nodes as described in Section 3.2.3 (AT + FedAvg).\nDiscussion Figure 2 shows the validation loss during training for the different training strategies on the PEMS-BAY and METR-LA datasets, and Table 5 shows their prediction performance and communication cost in training. We notice that (1) SL suffers from suboptimal prediction performance and high communication cost on both datasets; SL + FedAvg does not have consistent results on the two datasets and its performance is always inferior to AT + FedAvg; AT + FedAvg consistently outperforms the other baselines on both datasets, including its variant without FedAvg. (2) AT + FedAvg has the lowest communication cost on METR-LA and the 2nd lowest communication cost on PEMS-BAY, where the baseline with the lowest communication cost (SL + FedAvg) has a much higher prediction error (4.383 vs 3.822). Both observations illustrate that our proposed training strategy, AT + FedAvg, achieves the best prediction performance as well as a low communication cost compared to the other baseline strategies." }, { "heading": "4.4 ABLATION STUDY: EFFECT OF CLIENT ROUNDS AND SERVER ROUNDS", "text": "Set-up We further investigate the effect of different compositions of the number of client rounds (R_c) in Algorithm 2 and the number of server rounds (R_s) in Algorithm 1. 
To this end, we vary both R_c and R_s over [1, 10, 20].\nDiscussion Figure 3 shows the forecasting performance (measured with RMSE) and the total communication cost in the training of CNFGNN under all compositions of (R_c, R_s) on the METR-LA dataset. We observe that: (1) Models with lower R_c/R_s ratios (R_c/R_s < 0.5) tend to have lower forecasting errors, while models with higher R_c/R_s ratios (R_c/R_s > 2) have lower communication cost in training. This is because a lower R_c/R_s ratio encourages more frequent exchange of node information at the expense of higher communication cost, while a higher ratio acts in the opposite way. (2) Models with similar R_c/R_s ratios have similar communication costs, and among them those with lower R_c values perform better, corroborating our observation in (1) that frequent node information exchange improves the forecasting performance." }, { "heading": "5 CONCLUSION", "text": "We propose Cross-Node Federated Graph Neural Network (CNFGNN), which bridges the gap between modeling complex spatio-temporal data and decentralized data processing by enabling the use of graph neural networks (GNNs) in the federated learning setting. We accomplish this by decoupling the learning of local temporal models and the server-side spatial model, using alternating optimization of the spatial and temporal modules based on split learning and federated averaging. Our experimental results on traffic flow prediction on two real-world datasets show superior performance as compared to competing techniques. Our future work includes applying existing GNN models with sampling strategies and integrating them into CNFGNN for large-scale graphs, extending CNFGNN to a fully decentralized framework, and incorporating existing privacy-preserving methods for graph learning into CNFGNN, to enhance federated learning of spatio-temporal dynamics." }, { "heading": "A APPENDIX", "text": "A.1 DETAILED EXPERIMENT SETTINGS\nUnless noted otherwise, all models are optimized using the Adam optimizer with learning rate 1e-3.\nGRU (centralized): the GRU model trained with centralized sensor data. The GRU model with 63K parameters is a 1-layer GRU with hidden dimension 100, and the GRU model with 727K parameters is a 2-layer GRU with hidden dimension 200.\nGRU (local): we train one GRU model for each node with the local data only.\nGRU + FedAvg: we train a single GRU model with Federated Averaging (McMahan et al., 2017). We select 1 as the number of local epochs.\nGRU + FMTL: we train one GRU model for each node using federated multi-task learning (FMTL) with the cluster regularization (Smith et al., 2017) given by the adjacency matrix. More specifically, the cluster regularization (without the L2-norm regularization term) takes the following form:\nR(W, Ω) = λ tr(WΩW^T). (A1)\nGiven the constructed adjacency matrix A, Ω = (1/|V|)(D − A) = (1/|V|)L, where D is the degree matrix and L is the Laplacian matrix. Equation A1 can be reformulated as:\nR(W, Ω) = λ tr(WΩW^T) = (λ/|V|) tr(WLW^T) = (λ/|V|) tr(Σ_{i∈V} w_i Σ_{j≠i} a_{ij} w_i^T − Σ_{j≠i} w_i a_{ij} w_j^T) = λ_1 (Σ_{i∈V} Σ_{j≠i} a_{i,j} ⟨w_i, w_i − w_j⟩). (A2)\nWe implement the cluster regularization by sharing model weights between each pair of nodes connected by an edge, and we select λ_1 = 0.1.
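For concreteness, a small sketch (names ours) of how the cluster regularization in Equation A2 could be computed from the stacked per-node weight matrix; this is an illustration under our own conventions, not the authors' implementation.
import numpy as np

def cluster_regularizer(W, A, lam1=0.1):
    # W: |V| x p matrix whose i-th row is node i's flattened model weights.
    # A: |V| x |V| adjacency matrix of the sensor graph.
    L = np.diag(A.sum(axis=1)) - A             # graph Laplacian L = D - A
    # tr(W L W^T) = sum_i sum_{j != i} a_ij <w_i, w_i - w_j>  (Equation A2)
    return lam1 * np.trace(W @ L @ W.T)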
CNFGNN: we use a GRU-based encoder-decoder model as the model on nodes, which has 1 GRU layer and hidden dimension 64, and a 2-layer Graph Network (GN) with residual connections as the model on the server side. We use the same network architecture for the edge/node/global update function in each GN layer: a multi-layer perceptron (MLP) with 3 hidden layers, whose sizes are [256, 256, 128] respectively. We choose R_c = 1, R_s = 20 for experiments on PEMS-BAY, and R_c = 1, R_s = 1 for METR-LA.\nA.2 CALCULATION OF COMMUNICATION COST\nWe denote by R the number of communication rounds needed for a model to reach its lowest validation error in the training stage.\nGRU + FMTL: Using Equation A2, in each communication round each pair of nodes connected by an edge exchanges their model weights; thus the total amount of communicated data is\nR × #nonself directed edges × size of node model weights. (A3)\nCNFGNN (AT + FedAvg): In each communication round, the central server fetches and sends back model weights for each node for Federated Averaging, and transmits hidden vectors and gradients for Split Learning. The total amount of communicated data is\nR × (#nodes × size of node model weights × 2 + (1 + 2 × server rounds + 1) × #nodes × hidden state size). (A4)\nCNFGNN (SL): In each communication round, each node sends and fetches hidden vectors and gradients twice (once for the encoder, once for the decoder), and the total amount of communicated data is\nR × 2 × 2 × #nodes × hidden state size. (A5)\nCNFGNN (SL + FedAvg): Compared to CNFGNN (SL), this method has an extra communication cost for FedAvg in each round; thus the total amount of communicated data is\nR × (#nodes × size of node model weights × 2 + 2 × 2 × #nodes × hidden state size). (A6)\nCNFGNN (AT, w/o FedAvg): Compared to CNFGNN (AT + FedAvg), there is no communication cost for the FedAvg part; thus the total amount of communicated data is\nR × (1 + 2 × server rounds + 1) × #nodes × hidden state size. (A7)\nTable A1: Parameters used for calculating the communication cost of GRU + FMTL.\nMethod | GRU (63K) + FMTL | GRU (727K) + FMTL\nNode Model Weights Size (GB) | 2.347E-4 | 2.708E-3\nPEMS-BAY: #Nonself Directed Edges | 2369 | 2369\nPEMS-BAY: R | 104 | 56\nPEMS-BAY: Train Comm Cost (GB) | 57.823 | 359.292\nMETR-LA: #Nonself Directed Edges | 1515 | 1515\nMETR-LA: R | 279 | 176\nMETR-LA: Train Comm Cost (GB) | 99.201 | 722.137\nTable A2: Parameters used for calculating the communication cost of CNFGNN (AT + FedAvg). Node Model Weights Size (GB): 2.384E-4.\nPEMS-BAY: #Nodes 325 | Hidden State Size (GB) 2.173E-3 | Server Round 20 | R 2 | Train Comm Cost (GB) 237.654\nMETR-LA: #Nodes 207 | Hidden State Size (GB) 1.429E-3 | Server Round 1 | R 46 | Train Comm Cost (GB) 222.246\nTable A3: Parameters used for calculating the communication cost of CNFGNN (SL).\nPEMS-BAY: #Nodes 325 | Hidden State Size (GB) 2.173E-3 | R 31 | Train Comm Cost (GB) 350.366\nMETR-LA: #Nodes 207 | Hidden State Size (GB) 1.429E-3 | R 65 | Train Comm Cost (GB) 307.627\nTable A4: Parameters used for calculating the communication cost of CNFGNN (SL + FedAvg). Node Model Weights Size (GB): 2.384E-4.\nPEMS-BAY: #Nodes 325 | Hidden State Size (GB) 2.173E-3 | R 7 | Train Comm Cost (GB) 80.200\nMETR-LA: #Nodes 207 | Hidden State Size (GB) 1.429E-3 | R 71 | Train Comm Cost (GB) 343.031\nTable A5: Parameters used for calculating the communication cost of CNFGNN (AT, w/o FedAvg).\nPEMS-BAY: #Nodes 325 | Hidden State Size (GB) 2.173E-3 | Server Round 20 | R 44 | Train Comm Cost (GB) 5221.576\nMETR-LA: #Nodes 207 | Hidden State Size (GB) 1.429E-3 | Server Round 1 | R 49 | Train Comm Cost (GB) 2434.985
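As an illustration of how these formulas are evaluated with the parameters listed in Tables A1-A5, here is a direct transcription of Equations A3 and A4 into code; the argument names are ours, and any remaining unit bookkeeping follows the text.
def comm_cost_fmtl(R, n_edges, weight_gb):
    # Equation A3: every nonself directed edge exchanges node model weights.
    return R * n_edges * weight_gb

def comm_cost_at_fedavg(R, n_nodes, weight_gb, hidden_gb, server_rounds):
    # Equation A4: per round, FedAvg moves model weights up and down once,
    # and the split-learning phase moves (1 + 2*server_rounds + 1)
    # hidden-state-sized tensors per node.
    return R * (n_nodes * weight_gb * 2
                + (1 + 2 * server_rounds + 1) * n_nodes * hidden_gb)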
A.3 INDUCTIVE LEARNING\nWe have added results using 90% and 5% of the nodes on both datasets, and we show the inductive learning results in Table A6. We observe that: (1) As the portion of visible nodes in the training stage increases, the prediction error of CNFGNN decreases drastically. However, increasing the portion of visible nodes contributes negligibly to the performance of GRU + FedAvg once the portion surpasses 25%. Since increasing the ratio of seen nodes in training introduces more complex relationships among nodes into the training data, this difference in performance illustrates that CNFGNN has a stronger capability of capturing complex spatial relationships. (2) When the ratio of visible nodes in training is extremely low (5%), there is not enough spatial relationship information in the training data to train the GN module in CNFGNN, and the performance of CNFGNN may not be ideal. We visualize the subgraphs visible in training under different ratios in Figure A1. However, as long as the training data covers a moderate portion of the spatial information of the whole graph, CNFGNN can still leverage the learned spatial connections among nodes effectively and outperforms GRU + FedAvg. We empirically show that the necessary ratio can vary across datasets (25% for PEMS-BAY and 50% for METR-LA).\n[Figure A1: Visualization of subgraphs visible in training under ratios 5%, 25%, 50%, 75% and 90%; panel (a) PEMS-BAY, panel (b) METR-LA.]\nTable A6: Inductive learning performance measured with root mean squared error (RMSE).\nMethod | PEMS-BAY: 5% / 25% / 50% / 75% / 90% | METR-LA: 5% / 25% / 50% / 75% / 90%\nGRU (63K) + FedAvg | 5.087 / 4.863 / 4.847 / 4.859 / 4.866 | 12.128 / 11.993 / 12.104 / 12.014 / 12.016\nCNFGNN (64K + 1M) | 5.869 / 4.541 / 4.598 / 4.197 / 3.942 | 13.931 / 12.013 / 11.815 / 11.676 / 11.629\nA.4 THE HISTOGRAMS OF DATA ON DIFFERENT NODES\nWe show the histograms of traffic speed on different nodes of PEMS-BAY and METR-LA in Figure A2. For each dataset, we only show the first 100 nodes ranked by their IDs for simplicity. 
The histograms show that the data distribution varies across nodes; thus, the data on different nodes are not independent and identically distributed.\n[Figure A2: The histograms of data on the first 100 nodes ranked by ID; panel (a) PEMS-BAY, panel (b) METR-LA. Each per-node histogram plots traffic speed (roughly 0-80) against counts on a logarithmic scale.]" } ]
2020
CROSS-NODE FEDERATED GRAPH NEURAL NETWORK
SP:9a099507d376dd1553a8d11b821ce564b8a595ff
[ "The paper under review proposes to generalize MMD for discrete random variables whose labels take value in $\\mathbb{R}^k$. They propose to estimate these generalized probability kernel distance using empirical estimators. Their properties are studied for two particular examples, namely a kernelized Stein discrenpancy and polynomials versions. Consistency and bias of both estimators are studied and bias corrected." ]
We propose a generalized probability kernel (GPK) on discrete distributions with finite support. This probability kernel, defined as a kernel between distributions instead of samples, generalizes existing discrepancy statistics such as maximum mean discrepancy (MMD) as well as probability product kernels, and extends to more general cases. For both existing and newly proposed statistics, we estimate them through empirical frequencies and illustrate the strategy to analyze the resulting bias and convergence bounds. We further propose power-MMD, a natural extension of MMD in the framework of GPK, illustrating its usage for the task of two-sample testing. Our work connects the fields of discrete distribution-property estimation and kernel-based hypothesis testing, which might shed light on new possibilities.
[]
[ { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Tony Jebara", "Risi Kondor", "Andrew Howard" ], "title": "Probability product kernels", "venue": "J. Mach. Learn. Res.,", "year": 2004 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg S. Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": null, "year": 2013 }, { "authors": [ "Weikang Qian", "Marc D. Riedel", "Ivo Rosenberg" ], "title": "Uniform approximation and bernstein polynomials with coefficients in the unit interval", "venue": "European Journal of Combinatorics,", "year": 2011 }, { "authors": [ "Hao Yi", "Orlitsky Alon" ], "title": "Data amplification: Instance-optimal property estimation", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Hao Yi", "Orlitsky Alon", "Theertha S. Ananda", "Wu Yihong" ], "title": "Data amplification: A unified and competitiveapproach to property estimation", "venue": "Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": null, "text": "We propose a generalized probability kernel(GPK) on discrete distributions with finite support. This probability kernel, defined as kernel between distributions instead of samples, generalizes the existing discrepancy statistics such as maximum mean discrepancy(MMD) as well as probability product kernels, and extends to more general cases. For both existing and newly proposed statistics, we estimate them through empirical frequency and illustrate the strategy to analyze the resulting bias and convergence bounds. We further propose power-MMD, a natural extension of MMD in the framework of GPK, illustrating its usage for the task of two-sample test. Our work connects the fields of discrete distribution-property estimation and kernel-based hypothesis test, which might shed light on more new possibilities." }, { "heading": "1 INTRODUCTION", "text": "We focus on the two-sample problem, which is given two i.i.d samples {x1, x2, ...xn} , {y1, y2, ..., yn}, could we infer the discrepancy between underlying distributions they are drawn from. For such a problem, the option of hypothesis test(two-sample test) is most popular, and a variety of statistics in estimating the discrepancy is proposed. In recent years, RKHS based method such as maximum mean discrepancy(MMD) has gained a lot of attention. (Gretton et al., 2012) has shown that in a universal-RKHS F , MMD(F ,p, q) = 0 if and only if p = q, thus could be used for the two-sample hypothesis test. (Gretton et al., 2012) further provides unbiased estimator of MMD with fast asymptotic convergence rate, illustrating its advantages.\nOn the other hand, estimating distribution properties with plugin(empirical) estimators on discrete setting is an active research area in recent years, where people focus on problem settings with large support size but not so large sample size. The Bernstein polynomial technique is introduced to analyze the bias of the plugin estimators in (Yi & Alon, 2020), which provides remarkable progress on bias-reduction methods of the plugin estimators. It is thus interesting to ask if the plugin estimators could motivate new results for the RKHS-based two-sample test.\nAnother interesting topic is about the probability kernel, defined as kernel function over probabilities, instead of over samples. As is easily seen, any discrepancy measure of distribution p and q could potentially be valid probability kernels, not so much work focuses on this. While (Jebara et al., 2004) introduced the so called probability product kernels which generalize a variety of discrepancy measures, its properties remain further study.\nMotivated by above observations, our work focuses on a specialized probability kernel function which is a direct generalization of sample-based RKHS methods such as MMD. We focus on using plugin-estimator as the default estimator of the kernel function we defined, and illustrate that with the help of Bernstein polynomial techniques, we could analyze the bias and convergence bounds of these plugin-estimators. Our work thus connects the fields of discrete distribution-property estimation and kernel-based hypothesis test, which brings interesting possibilities." }, { "heading": "2 NOTATION", "text": "We use bold symbol p, q ∈ Rk to represent a probability function over a discrete support with support size k, and pi, qi represents the ith entry of p and q. We use {v1, v2, ..., vk}, vi ∈ Rd to represent the support of p, q. 
[k] := {1, 2, 3, ..., k} represents the set of indices of elements in {v_1, v_2, ..., v_k}. We use φ ∘ (p, q) to denote an element-wise function from R^k × R^k to R^k, where (φ ∘ (p, q))_i = φ ∘ (p_i, q_i), and φ ∘ p to denote an element-wise function from R^k to R^k, where (φ ∘ p)_i = φ ∘ p_i. With a slight abuse of notation, we regard p^ρ and p − q as element-wise functions defined as above. We use kernel(p, q) to denote a kernel function which maps from R^k × R^k to a real value in R, and kernel(x, y), x, y ∈ R^d, to represent a kernel function from R^d × R^d to a real value in R. We use K to denote the gram matrix generated from kernel(x, y) on the finite support {v_1, v_2, ..., v_k}, where K_{ij} = kernel(v_i, v_j). We use {x_1, x_2, ..., x_n} ∼ p and {y_1, y_2, ..., y_n} ∼ q to denote the samples from distributions p and q, where n is the sample size." }, { "heading": "3 GENERALIZED PROBABILITY KERNEL", "text": "A probability kernel function, defined as a kernel function between distributions instead of samples, is a natural extension of the idea of a kernel function on the sample space.\nDefinition 1. Given distributions p and q belonging to a family of discrete distributions with the same finite support {v_1, v_2, ..., v_k}, v_i ∈ R^d, where k is the support size, we define the probability kernel function as PK(p, q), which is a kernel function that maps from R^k × R^k to a real value in R.\nMany discrepancy measures, such as MMD, can serve as probability kernel functions, but people usually don't use the term probability kernel function when describing them. The reason is that, most of the time, we only consider a limited number of distributions, and do not need or have the resources to navigate through all the distributions within the family. For example, when looking into the two-sample problem, we usually assume two samples {x_1, x_2, ..., x_n} ∈ R^d and {y_1, y_2, ..., y_n} ∈ R^d are i.i.d. drawn from two distributions p and q, and use the discrepancy measure MMD[F, p, q] to determine if p and q are indistinguishable in the RKHS F. We do not consider all other distributions in F that are irrelevant to our samples! So far the idea of a kernel function between distributions has not been much used in practice; however, in this paper we propose that, when considering the plugin estimators of many of the existing discrepancy measures, it is beneficial to view them as probability kernel functions." }, { "heading": "3.1 DEFINITION OF GENERALIZED PROBABILITY KERNEL", "text": "Definition 2 (Generalized probability kernel). Let S be a family of discrete distributions on the support {v_1, v_2, ..., v_k}, where v_i ∈ R^d. Let F be a unit ball in a universal RKHS H with associated continuous kernel RK(x, y), where for any x ∈ R^d and y ∈ R^d, RK(x, y) maps from R^d × R^d to R. We denote the gram matrix K_{ij} = RK(v_i, v_j).\nThe generalized probability kernel function on distributions p, q ∈ S is\nGPK_{F,φ}(p, q) = φ ∘ (p, q) K φ ∘ (q, p)^T = Σ_{i∈[k]} Σ_{j∈[k]} φ ∘ (p_i, q_i) K_{ij} φ ∘ (q_j, p_j),\nwhere φ ∘ (p, q) is an element-wise mapping function on the discrete distributions p, q ∈ S, which maps from R^k × R^k to R^k.\nObviously, under this definition, the GPK is a symmetric probability kernel function, i.e. GPK_{F,φ}(p, q) = GPK_{F,φ}(q, p).\nThe mapping function φ admits a great number of possibilities. For most cases, we need to narrow down the class and equip it with some convenient properties so that the GPK measure can be useful. 
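To ground Definition 2, here is a small numerical sketch (names ours) of GPK_{F,φ}(p, q) for a given element-wise mapping φ and gram matrix K; the RBF kernel below is just one possible choice of RK.
import numpy as np

def gpk(p, q, K, phi):
    # GPK_{F,phi}(p, q) = phi(p, q) K phi(q, p)^T  (Definition 2).
    return phi(p, q) @ K @ phi(q, p)

v = np.linspace(0.0, 1.0, 5)                   # support {v_1, ..., v_k} in R^1
K = np.exp(-(v[:, None] - v[None, :]) ** 2)    # gram matrix K_ij = RK(v_i, v_j)
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
q = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
print(gpk(p, q, K, lambda a, b: a - b))        # phi = p - q gives -MMD^2 <= 0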
One example is the measurement of discrepancy, where we want GPK_{F,φ}(p, q) = 0 if and only if p = q.\nDefinition 3 (discrepancy probability kernel). Let S be a family of discrete distributions p ∈ S on the support {v_1, v_2, ..., v_k}. A discrepancy probability kernel is a kernel function PK(p, q) such that PK(p, q) = 0 if and only if p = q.\nTheorem 1. GPK_{F,φ}(p, q) with a mapping function φ that satisfies:\n1. symmetry or antisymmetry with respect to p and q: φ ∘ (p, q) = φ ∘ (q, p) or φ ∘ (p, q) = −φ ∘ (q, p);\n2. ‖φ ∘ (p, q)‖_2 = ‖φ ∘ (q, p)‖_2 = 0 if and only if p = q, where ‖·‖_2 represents the L2 norm;\nis a discrepancy probability kernel.\nProof.\nGPK_{F,φ}(p, q) = Σ_{i∈[k]} Σ_{j∈[k]} φ ∘ (p_i, q_i) K_{ij} φ ∘ (q_j, p_j) = φ ∘ (p, q) K φ ∘ (q, p)^T = ±φ ∘ (p, q) K φ ∘ (p, q)^T = ±vKv^T.\nK is a positive semidefinite matrix (and, since RK is a universal kernel and the points v_i are distinct, strictly positive definite); thus vKv^T ≥ 0, where equality holds if and only if v = 0, and since v = φ ∘ (p, q), this condition further means φ ∘ (p, q) = 0, which holds if and only if p = q.\nAnother example is the polynomial GPK, which is the main focus of this paper. Such a subclass of GPK is interesting since we can build unbiased estimators of it using the techniques of Bernstein polynomials in (Qian et al., 2011). As we will show in Section 5, we also have analyzable convergence bounds for the resulting unbiased estimators, illustrating their potential usage for applications such as the two-sample test.\nDefinition 4 (polynomial GPK). The polynomial GPK is the subset of GPK equipped with a mapping function φ that is polynomial in p and q: φ ∘ (p, q) = Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} p^l q^s, where o ∈ Z is the degree of the polynomial and α_{l,s} ∈ R are the coefficients.\nBelow we give some examples of polynomial GPK, which include MMD proposed in (Gretton et al., 2012), and the newly proposed power-MMD of this paper, which is a natural extension of MMD from the viewpoint of probability kernels." }, { "heading": "3.1.1 EXAMPLE 1: MMD AS MEMBER OF POLYNOMIAL GPK", "text": "Given discrete distributions p, q with support {v_1, v_2, ..., v_k}, we can rewrite MMD in terms of the probability values p_i, q_i:\nMMD^2_F(p, q) = ‖E_{x∼p} f(x) − E_{x'∼q} f(x')‖^2_H = ‖Σ_{i∈[k]} f(v_i) p_i − Σ_{i∈[k]} f(v_i) q_i‖^2_H = ‖Σ_{i∈[k]} f(v_i)(p_i − q_i)‖^2_H = Σ_{i∈[k]} Σ_{j∈[k]} (p_i − q_i) ⟨f(v_i), f(v_j)⟩_H (p_j − q_j) = Σ_{i∈[k]} Σ_{j∈[k]} (p_i − q_i) K_{ij} (p_j − q_j) = −GPK_{F,φ_l}(p, q),\nwhere φ_l ∘ (p, q) = p − q, H is the RKHS defined in the MMD literature, and f is the feature map that maps v_i to H. GPK_{F,φ_l}(p, q) is a special case of polynomial GPK where α_{1,0} = 1, α_{0,1} = −1, and all other coefficients are 0." }, { "heading": "3.1.2 EXAMPLE 2: PRODUCT GPK AS MEMBERS OF POLYNOMIAL GPK", "text": "Definition 5 (product GPK). Let p and q be probability distributions on the support {v_1, v_2, ..., v_k}, and let l ∈ Z be a nonnegative integer. The product GPK is the subset of polynomial GPK where α_{l,0} = 1 and all other coefficients are 0; the corresponding mapping function is φ(p, q) = p^l.\nThe probability product kernel as in (Jebara et al., 2004) is a special case of product GPK where K is an identity matrix. 
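As a quick check on this reduction, a sketch (ours) comparing the product GPK with an identity gram matrix against the direct probability product kernel:
import numpy as np

p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
q = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
l = 2
K = np.eye(len(p))                      # identity gram matrix
gpk_value = (p ** l) @ K @ (q ** l)     # product GPK with phi(p, q) = p^l
ppk_value = np.sum(p ** l * q ** l)     # probability product kernel
assert np.isclose(gpk_value, ppk_value)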
" }, { "heading": "3.1.3 EXAMPLE 3: POWER-MMD AS MEMBERS OF POLYNOMIAL GPK", "text": "Another interesting subset of polynomial GPK is the one that extends the MMD case into a power form, which we denote as power-MMD.\nDefinition 6 (power-MMD). Let p and q be probability distributions on the support {v_1, v_2, ..., v_k} and let ρ ∈ Z be a positive integer. Then power-MMD is the subset of polynomial GPK where α_{ρ,0} = 1, α_{0,ρ} = −1, and all other coefficients are 0; the corresponding mapping function is φ(p, q) = p^ρ − q^ρ.\nApparently, MMD is a special case of power-MMD where ρ = 1, and power-MMD satisfies the requirements in Theorem 1, and thus has potential usage as a discrepancy measure. In Section 5, we will show that power-MMD has an unbiased estimator with analyzable convergence bounds and thus could be used for the two-sample test." }, { "heading": "3.2 DISCUSSION OF GPK IN DISCRETE SETTING", "text": "As one may easily notice, the definition of GPK includes a gram matrix generated by the kernel function RK(v_i, v_j), which measures the discrepancy between v_i, v_j ∈ {v_1, v_2, ..., v_k}. When considering the case of categorical distributions, the values of discrete variables do not relate to any notion of distance, which raises the question: how can the introduced gram matrix be beneficial in such cases?\nThe answer is twofold: 1. Many natural processes produce discrete distributions where there possibly exists a similarity measure on values which implies similarity in the frequencies of occurrence (probability values). For example, in the field of natural language processing (NLP), one may treat words as atomic units with no notion of similarity between words, as these are represented as indices in a vocabulary. However, given a large number of training samples, a similarity measure between words can be made possible using techniques such as word2vec (Mikolov et al., 2013). Such techniques generally result in better performance and have become important preprocessing techniques for NLP tasks (Goodfellow et al., 2016). 2. Even in cases where the values of discrete variables are totally irrelevant, or where one uses a kernel function RK(v_i, v_j) that does not correctly imply the similarity in probability values, the GPK framework may still capture the similarity between distributions. One example is the case of MMD, which is, as discussed above, an element of the GPK family. As proved in (Gretton et al., 2012), MMD is a distribution-free measurement between two samples, which means that no matter what kind of p, q and kernel(x, y) we have, the MMD^2_F(p, q) measure will be 0 if and only if p = q. However, a bad choice of kernel function does have a negative effect on the convergence bounds of the empirical estimator proposed in (Gretton et al., 2012), and will influence the results of the two-sample test. For this reason, we mainly focus on datasets with known relativity measures in our experiment section." }, { "heading": "4 PLUGIN-ESTIMATOR FOR GPK", "text": "So far we have defined the GPK and discussed some subsets of GPK with potential usage for the two-sample test. Next we discuss how to build an estimator, given a member of GPK. In this section, we propose the plugin estimator, which is based on the counts of occurrence of each value v_i ∈ {v_1, v_2, ..., v_k} in samples {x_1, x_2, ..., x_n} drawn from p or q. We illustrate that, by doing so, the techniques of Bernstein polynomials in (Qian et al., 2011) can be used to help build unbiased estimators for any member of polynomial GPK. Furthermore, we provide analyzable convergence bounds for these estimators.\nWe begin with the definition of plugin estimators:\nDefinition 7. Suppose we have i.i.d. samples of distribution p given by X_{n_1} := {x_1, x_2, ..., x_{n_1}} ∼ p and X_{n_2} := {x_{n_1+1}, x_{n_1+2}, ..., x_{n_1+n_2}} ∼ p, 
and similarly i.i.d. samples of distribution q given by Y_{m_1} := {y_1, y_2, ..., y_{m_1}} ∼ q and Y_{m_2} := {y_{m_1+1}, y_{m_1+2}, ..., y_{m_1+m_2}} ∼ q.\nLet N^{(n_1)}_i denote the number of occurrences of the value v_i ∈ {v_1, v_2, ..., v_k} in the sample X_{n_1}, and let S_{i,n_1} := (N^{(n_1)}_i, n_1) denote the pair of N^{(n_1)}_i and n_1. The same follows for X_{n_2}, Y_{m_1} and Y_{m_2}.\nWe define the plugin estimator of GPK_{F,φ}(p, q) as\nGPK_E[F, φ, X, Y] = Σ_{i∈[k]} Σ_{j∈[k]} f_φ(S_{i,n_1}, S_{i,m_1}) K_{ij} f_φ(S_{j,m_2}, S_{j,n_2}),\nwhere f_φ is a function related to the function φ, and K is the gram matrix brought by F.\nHere our setting is different from the unbiased estimator MMD^2_u of (Gretton et al., 2012), where in their setting X_{n_1}, X_{n_2} represent the same sample from p, and similarly Y_{m_1}, Y_{m_2} from q. Instead, we use the same setting as the linear-time statistic MMD^2_l proposed in (Gretton et al., 2012). Another way of viewing this is that, given two samples {x_1, x_2, ..., x_n}, {y_1, y_2, ..., y_n} from p and q, we split each of the two samples into two parts, yielding 4 different samples of sizes n_1, n_2, m_1, m_2, and then calculate the empirical frequencies for the plugin estimator defined above." }, { "heading": "4.1 POLYNOMIAL GPK WITH UNBIASED PLUGIN-ESTIMATORS", "text": "One of the main contributions of this paper is the proposal that we can always find an unbiased plugin estimator for any member of the polynomial GPK family. The basic idea is that we can analyze the expectation of plugin estimators through Bernstein polynomials, and use the existing results of (Qian et al., 2011) to build the unbiased plugin estimators.\nTheorem 2. Denote\ng_j(k, n) := C(k, j)/C(n, j) for j ≤ k, and g_j(k, n) := 0 for j > k,\nwhere C(k, j) denotes the binomial coefficient. Then any member of polynomial GPK[F, φ, p, q] equipped with a polynomial mapping function φ(p, q) = Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} p^l q^s of degree o ∈ Z has an unbiased plugin estimator with the mapping function f_φ given by\nf_φ(S_{i,n_1}, S_{i,m_1}) = Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} g_l(N^{(n_1)}_i, n_1) g_s(N^{(m_1)}_i, m_1).\nProof. The basic idea is to directly use the result on Bernstein polynomials in (Qian et al., 2011) to build unbiased estimators. We put the formal proof in the appendix.\nFor notational simplicity, we define the plugin estimator discussed above to be the default plugin estimator for polynomial GPK:\nDefinition 8 (default plugin estimator for polynomial GPK). The plugin estimator defined in Theorem 2 is the default plugin estimator for polynomial GPK.\nThis plugin estimator, according to Theorem 2, is an unbiased estimator." }, { "heading": "4.2 DEVIATION BOUND OF PLUGIN-ESTIMATORS", "text": "Another topic about plugin estimators is their deviation bounds. We directly use McDiarmid's inequality to derive a bound for the default plugin estimator of polynomial GPK:\nTheorem 3. The default plugin estimator of GPK[F, φ, p, q] equipped with a polynomial mapping function φ(p, q) = Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} p^l q^s of degree o ∈ Z has the convergence bound:\n∀a > 0, Pr(|GPK_E[F, φ, X, Y] − E[GPK_E[F, φ, X, Y]]| ≥ a) ≤ 2e^{−2a²/Z},\nwhere\nZ = ((n_1 (τ^{(1)}_{n_1,m_1})² + m_1 (τ^{(2)}_{n_1,m_1})²) Φ²_{m_2,n_2} + (m_2 (τ^{(1)}_{m_2,n_2})² + n_2 (τ^{(2)}_{m_2,n_2})²) Φ²_{n_1,m_1}) K²_max,\nΦ_{n,m} = Σ_{i∈[k]} |Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} g_l(N^{(n)}_i, n) g_s(N^{(m)}_i, m)|,\nτ^{(1)}_{n,m} = sup_{i∈[k]} (Σ_{l=0}^{o} Σ_{s=0}^{o} (l/N^{(n)}_i) · |α_{l,s}| · g_l(N^{(n)}_i, n) g_s(N^{(m)}_i, m)),\nτ^{(2)}_{n,m} = sup_{i∈[k]} (Σ_{l=0}^{o} Σ_{s=0}^{o} (s/N^{(m)}_i) · |α_{l,s}| · g_l(N^{(n)}_i, n) g_s(N^{(m)}_i, m)),\nand K_max is the largest value of the entries of K.\nProof. The basic idea is to use McDiarmid's inequality; we put the formal proof in the appendix.
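As an illustration of Theorem 2 and Definition 8, here is a small sketch (function names ours) of the default plugin estimator specialized to power-MMD coefficients; it is meant to make the g_j terms concrete, not to be an optimized implementation.
import numpy as np
from math import comb

def g(j, N, n):
    # g_j(N, n) = C(N, j) / C(n, j) for j <= N, and 0 otherwise (Theorem 2).
    return comb(N, j) / comb(n, j) if j <= N else 0.0

def power_mmd_plugin(cx1, cy1, cx2, cy2, n1, m1, n2, m2, K, rho=1):
    # Default plugin estimator (Definition 8) for power-MMD coefficients
    # alpha_{rho,0} = 1, alpha_{0,rho} = -1, i.e. phi(p, q) = p^rho - q^rho.
    # cx1[i] = N_i^{(n1)} is the count of v_i in X_{n1}, and so on.
    f1 = np.array([g(rho, N, n1) - g(rho, M, m1) for N, M in zip(cx1, cy1)])
    f2 = np.array([g(rho, M, m2) - g(rho, N, n2) for N, M in zip(cx2, cy2)])
    return f1 @ K @ f2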
}, { "heading": "5 EXAMPLE: POWER-MMD AS A NATURAL EXTENSION TO MMD FROM GPK VIEWPOINT", "text": "In this section, we mainly discuss power-MMD as defined in Section 3.1.3. We analyze the bias and convergence bounds of its plugin-estimators using the techniques introduced so far, illustrating that such a natural extension of MMD from the GPK viewpoint can be beneficial for the two-sample test." }, { "heading": "5.1 PLUGIN-ESTIMATORS OF POWER-MMD", "text": "As already discussed in Section 3.1.3, power-MMD is a subset of polynomial GPK. According to Theorem 2, any member GPK_{F,φ_ρ}(p, q) of power-MMD has a default-plugin-estimator with the mapping function f_φ(S_{i,n_1}, S_{i,m_1}) = g_ρ(N_i^{(n_1)}, n_1) − g_ρ(N_i^{(m_1)}, m_1). Remark 3.1. When ρ = 1, power-MMD reduces to the original MMD. Remarkably, the default-plugin-estimator in this case is equivalent to the linear-time statistic MMD²_l proposed in (Gretton et al., 2012): GPK_E[F, φ_1, X, Y] = MMD²_l[F, X, Y]. For details of the derivation, see the appendix." }, { "heading": "5.2 DEVIATION BOUND OF PLUGIN-ESTIMATORS OF POWER-MMD", "text": "Corollary 3.1. Denote τ_n = sup_{i∈[k]} ((ρ / N_i^{(n)}) g_ρ(N_i^{(n)}, n)). The default-plugin-estimator of power-MMD GPK_{F,φ_ρ}(p, q) has the uniform convergence bound defined in Theorem 3 with τ^{(1)}_{n,m} = τ_n and τ^{(2)}_{n,m} = τ_m.
Corollary 3.2. Consider the case where n_1 = n_2 = m_1 = m_2 = n. The default-plugin-estimator of power-MMD GPK_{F,φ_ρ}(p, q) has the uniform convergence bound:
Pr(|GPK_E[F, φ_ρ, X, Y] − E[GPK_E[F, φ_ρ, X, Y]]| ≥ a) ≤ 2 exp(−n a² / ((ρ² Φ²_{n_1,m_1} + ρ² Φ²_{m_2,n_2}) K²_max)) ≤ 2 exp(−n a² / (8 ρ² K²_max)).
Proof. The first inequality above comes from:
(ρ / N_i) g_ρ(N_i, n) = ρ (N_i − 1)(N_i − 2)···(N_i − ρ + 1) / (n(n − 1)···(n − ρ + 1)) ≤ (ρ/n) · (N_i^{ρ−1} / n^{ρ−1}) ≤ ρ/n,
where sup_{i∈[k]} ((ρ / N_i) g_ρ(N_i, n)) = ρ/n holds only in the extreme case where some N_i = n, i.e., all samples take the same value v_i ∈ {v_1, v_2, ..., v_k}. The second inequality above comes from:
Φ_{n,m} = Σ_{i∈[k]} |g_ρ(N_i^{(n)}, n) − g_ρ(N_i^{(m)}, m)| ≤ Σ_{i∈[k]} |g_ρ(N_i^{(n)}, n)| + Σ_{i∈[k]} |g_ρ(N_i^{(m)}, m)| ≤ Σ_{i∈[k]} (N_i^{(n)}/n)^ρ + Σ_{i∈[k]} (N_i^{(m)}/m)^ρ ≤ Σ_{i∈[k]} N_i^{(n)}/n + Σ_{i∈[k]} N_i^{(m)}/m = 2.
Remark 3.2. Recall that in (Gretton et al., 2012), the deviation bound for the linear-time estimator MMD²_l[F, X, Y] is
∀a > 0, Pr(|MMD²_l[F, X, Y] − E[MMD²_l[F, X, Y]]| ≥ a) ≤ 2 exp(−n a² / (8 K²_max)).
Interestingly, this bound is the same as the ρ = 1 case of Corollary 3.2. Note that, according to Section 5.1, the default-plugin-estimator of power-MMD with ρ = 1 is actually equivalent to the MMD²_l case in (Gretton et al., 2012). Our bound generalizes the bound in (Gretton et al., 2012) and provides a tighter version. Note that the bound for the special case ρ = 1 has a simpler derivation, and the reader may refer to the appendix for more details." }, { "heading": "5.3 TWO-SAMPLE TEST USING POWER-MMD", "text": "Corollary 3.3. A hypothesis test of level α for the null hypothesis p = q has the acceptance region
|GPK_E[F, φ_ρ, X, Y] / √Z| < √((1/2) log((α/2)^{−1})),
where Z is defined in Corollary 3.1. The two-sample test for power-MMD then follows this procedure: 1. calculate v = |GPK_E[F, φ_ρ, X, Y] / √Z|; 2. check whether v < √((1/2) log((α/2)^{−1})); if so, accept the null hypothesis, and otherwise reject it.
Next we analyze the performance of our proposed two-sample test in two cases: ρ = 1 and ρ > 1.
5.3.1 ρ = 1 CASE
For the ρ = 1 case, since GPK_E[F, φ_1, X, Y] is equivalent to MMD²_l[F, X, Y], the only difference between our proposal and that of Gretton et al. (2012) is the convergence bound.
According to Remark 3.2, we provide a tighter bound for the test statistic, and thus we will certainly obtain better performance using power-MMD.
5.3.2 ρ > 1 CASE
We need to answer two questions for the ρ > 1 case: 1. when applying power-MMD in practice, is the proposed statistic numerically stable? 2. does the performance of the two-sample test improve as ρ grows?
Regarding numerical stability: since g_ρ(N_i, n) ≤ (N_i / n)^ρ, the term decreases exponentially as ρ increases. This effect causes numerical problems when N_i ≪ n and ρ is large. One solution is to find an upper bound of |GPK_E[F, φ_ρ, X, Y] / √Z| which is numerically stable. Corollary 3.4. Consider the simplest case where n_1 = n_2 = m_1 = m_2 = n. Define
C_N := {N_1^{(n_1)}, N_2^{(n_1)}, ..., N_k^{(n_1)}, N_1^{(n_2)}, N_2^{(n_2)}, ..., N_k^{(n_2)}, N_1^{(m_1)}, N_2^{(m_1)}, ..., N_k^{(m_1)}, N_1^{(m_2)}, N_2^{(m_2)}, ..., N_k^{(m_2)}}
to be the set of all counts of occurrence in the four samples X_{n_1}, X_{n_2}, Y_{m_1}, Y_{m_2}, and denote S_N = sup_{N_i∈C_N}(N_i), the maximum value in the set C_N. We have:
|GPK_E[F, φ_ρ, X, Y] / √Z| ≤ |S_N · GPK′_E / (K_max ρ √(2n) · Φ′)|,
where GPK′_E := Σ_{i,j∈[k]} (g_ρ(N_i^{(n_1)}, S_N) − g_ρ(N_i^{(m_1)}, S_N)) K_{ij} (g_ρ(N_j^{(n_2)}, S_N) − g_ρ(N_j^{(m_2)}, S_N)) and
Φ′ = Σ_{i∈[k]} |g_ρ(N_i^{(n)}, S_N) − g_ρ(N_i^{(m)}, S_N)|.
In cases where the N_i are not far below S_N, GPK′_E is much more numerically stable than GPK_E[F, φ_ρ, X, Y]. To answer the question about the performance of the two-sample test as ρ grows, we need to analyze whether |GPK_E[F, φ_ρ, X, Y] / √Z| increases with ρ when p ≠ q. Unfortunately, there is no clear answer to this." }, { "heading": "6 SUMMARY", "text": "To summarize, we introduce the framework of the generalized probability kernel (GPK). While GPK represents a large family of probability kernels, we focus on polynomial GPK, since all members of this subset of GPK have unbiased plugin-estimators. Remarkably, a natural extension of MMD from the viewpoint of polynomial GPK, which we call power-MMD, can be used for the two-sample test. Theoretical study shows that for the ρ = 1 case, power-MMD outperforms the linear-time MMD proposed in Gretton et al. (2012), and the performance of the ρ > 1 case is left for future work. For members of GPK which do not belong to polynomial GPK, it is not easy to design unbiased estimators. However, the bias reduction techniques proposed in (Yi et al., 2018) and (Yi & Alon, 2020) could be used, and we still have a chance to apply the two-sample test with the resulting estimators. Such a possibility is also left for future work." }, { "heading": "A APPENDIX", "text": "A.1 BERNSTEIN POLYNOMIAL
Drawing i.i.d. samples Y^m from any distribution p, the expected value of the empirical estimator of a distribution property is
E[Ĥ_E(Y^m)] = Σ_{i∈[k]} E_{M_i∼bin(m,p_i)}[h(M_i / m)].
Note that for any function f, m ∈ N, and x ∈ [0, 1], the degree-m Bernstein polynomial of f is
B_m(f, x) := Σ_{j=0}^{m} f(j/m) C(m, j) x^j (1 − x)^{m−j}.
Therefore, we can express the expectation of the empirical property estimator as
E_{Y^m∼p}[Ĥ_E(Y^m)] = Σ_{i∈[k]} B_m(h, p_i).
A.2 PROOF OF THEOREM 2
Proof.
Recall the definition of polynomial GPK:
GPK(F, φ, p, q) = Σ_{i∈[k]} Σ_{j∈[k]} φ(p_i, q_i) K_{i,j} φ(q_j, p_j) = Σ_{i∈[k]} Σ_{j∈[k]} K_{i,j} Σ_{l,s,r,t=0}^{o} α_{l,s} α_{r,t} p_i^l q_i^s p_j^t q_j^r.
Recall that in (Qian et al., 2011),
p_i^j = Σ_{k=j}^{n} g_j(k, n) b_{k,n}(p_i) = E_{k∼bin(p_i,n)}[g_j(k, n)],
where g_j(k, n) is defined at the beginning of the theorem and b_{k,n} denotes the binomial PMF. Since the four samples X_{n_1}, X_{n_2}, Y_{m_1}, Y_{m_2} are independent, the product of the expectations below equals the expectation of the product, and therefore
GPK(F, φ, p, q) = Σ_{i∈[k]} Σ_{j∈[k]} K_{i,j} Σ_{l,s,r,t=0}^{o} α_{l,s} α_{r,t} E_{N_i^{(n_1)}∼bin(p_i,n_1)}[g_l(N_i^{(n_1)}, n_1)] E_{N_i^{(m_1)}∼bin(q_i,m_1)}[g_s(N_i^{(m_1)}, m_1)] E_{N_j^{(n_2)}∼bin(p_j,n_2)}[g_t(N_j^{(n_2)}, n_2)] E_{N_j^{(m_2)}∼bin(q_j,m_2)}[g_r(N_j^{(m_2)}, m_2)]
= E[Σ_{i∈[k]} Σ_{j∈[k]} K_{i,j} Σ_{l,s,r,t=0}^{o} α_{l,s} α_{r,t} g_l(N_i^{(n_1)}, n_1) g_s(N_i^{(m_1)}, m_1) g_t(N_j^{(n_2)}, n_2) g_r(N_j^{(m_2)}, m_2)] = E[GPK_E(F, φ, X, Y)].
A.3 PROOF OF THEOREM 3
Lemma 4. Let S_{N_i,n_1} := (N_i^{(n_1)}, n_1) denote the pair of N_i^{(n_1)} and n_1; the same follows for X_{n_2}, Y_{m_1}, and Y_{m_2}. Also, for notational simplicity, let S_{i,n_1} := S_{N_i,n_1}. Consider the plugin-estimator
GPK_E[F, φ, X, Y] = Σ_{i∈[k]} Σ_{j∈[k]} f_φ(S_{N_i,n_1}, S_{N_i,m_1}) K_{ij} f_φ(S_{N_j,m_2}, S_{N_j,n_2})
with a mapping function f_φ(S_{N_i,n_1}, S_{N_i,m_1}) having the following properties:
• f_φ(S_{N_i,n_1}, S_{N_i,m_1}) is a monotonic function of N_i^{(n_1)}: for all N_i > N′_i (or for all N_i < N′_i), f_φ(S_{N_i,n_1}, S_{N_i,m_1}) > f_φ(S_{N′_i,n_1}, S_{N_i,m_1}); the same holds for N_i^{(m_1)};
• |f_φ(S_{N_i±1,n_1}, S_{N_i,m_1}) − f_φ(S_{N_i,n_1}, S_{N_i,m_1})| ≤ τ_{n_1} and |f_φ(S_{N_i,n_1}, S_{N_i±1,m_1}) − f_φ(S_{N_i,n_1}, S_{N_i,m_1})| ≤ τ_{m_1}, where τ_{n_1} is a constant related to the sample size n_1 and τ_{m_1} is a constant related to the sample size m_1; the same follows for τ_{n_2} and τ_{m_2}.
We have:
∀a > 0, Pr(|GPK_E[F, φ, X, Y] − E[GPK_E[F, φ, X, Y]]| ≥ a) ≤ 2 exp(−2a² / Z),
where
Z = ((n_1 τ²_{n_1} + m_1 τ²_{m_1}) Φ²_2 + (n_2 τ²_{n_2} + m_2 τ²_{m_2}) Φ²_1) K²_max,
Φ_1 = Σ_{j∈[k]} |f_φ(S_{N_j,n_1}, S_{N_j,m_1})|, Φ_2 = Σ_{j∈[k]} |f_φ(S_{N_j,m_2}, S_{N_j,n_2})|,
and K_max is the largest entry of K.
Proof. Recall McDiarmid's inequality:
Theorem 5. Let Y_1, ..., Y_m be independent random variables taking values in ranges R_1, ..., R_m, and let F : R_1 × ... × R_m → C have the property that if one freezes all but the w-th coordinate of F(y_1, ..., y_m) for some 1 ≤ w ≤ m, then F fluctuates by at most c_w > 0; that is,
|F(y_1, ..., y_{w−1}, y_w, y_{w+1}, ..., y_m) − F(y_1, ..., y_{w−1}, y′_w, y_{w+1}, ..., y_m)| ≤ c_w
for all y_j ∈ R_j and y′_w ∈ R_w, 1 ≤ j ≤ m. Then for any a > 0, one has Pr(|F(Y) − E[F(Y)]| ≥ a) ≤ 2 exp(−2a² / Σ_{i=1}^{m} c²_i).
Consider the plugin-estimator of the GPK family:
GPK_E[F, φ, X, Y] = Σ_{i∈[k]} Σ_{j∈[k]} f_φ(S_{i,n_1}, S_{i,m_1}) K_{ij} f_φ(S_{j,m_2}, S_{j,n_2}).
Without loss of generality, we rewrite the function f_φ as f_φ(S_{N_i,n_1}, S_{N_i,m_1}) = F(x_1, x_2, ..., x_s, ..., x_{n_1}, N_i^{(m_1)}, m_1) = F_{N_i}. Assume we freeze all but one element of X_{n_1} := {x_1, x_2, ..., x_s, ..., x_{n_1}}, and only x_s is allowed to change its value. Obviously, no matter how this element changes, it always lies in the finite support {v_1, v_2, ..., v_k}; without loss of generality, assume x_s changes its value from v_i to v_{ii}, so the corresponding count N_i changes to N_i − 1 and N_{ii} changes to N_{ii} + 1. We then have, for x_s ∈ X_{n_1},
c_s = sup_{x_s} |GPK_E(x_1, ..., x_s, ..., x_n) − GPK_E(x_1, ..., x′_s, ..., x_n)|
= sup_{i,ii∈[k]} |Σ_{j∈[k]} ((F_{N_i−1} − F_{N_i}) K_{i,j} + (F_{N_{ii}+1} − F_{N_{ii}}) K_{ii,j}) f_φ(S_{N_j,m_2}, S_{N_j,n_2})|
≤ |Σ_{j∈[k]} τ_{n_1} (−K_{i,j} + K_{ii,j}) f_φ(S_{N_j,m_2}, S_{N_j,n_2})|
≤ τ_{n_1} K_max Σ_{j∈[k]} |f_φ(S_{N_j,m_2}, S_{N_j,n_2})| = τ_{n_1} K_max Φ_2,
where Φ_2 = Σ_{j∈[k]} |f_φ(S_{N_j,m_2}, S_{N_j,n_2})|. Note that K_{i,j} := K_{ij}. Similarly, for x_s ∈ X_{n_2},
c_s ≤ τ_{n_2} K_max Φ_1,
where Φ_1 = Σ_{j∈[k]} |f_φ(S_{N_j,n_1}, S_{N_j,m_1})|; for x_s ∈ Y_{m_1}, c_s ≤ τ_{m_1} K_max Φ_2; and for x_s ∈ Y_{m_2}, c_s ≤ τ_{m_2} K_max Φ_1.
Thus, according to McDiarmid's inequality, we have
Pr(|GPK_E[F, φ, X, Y] − E[GPK_E[F, φ, X, Y]]| ≥ a) ≤ 2 exp(−2a² / Σ_{i=1}^{n_1+n_2+m_1+m_2} c²_i).
We set c_i = τ_{n_1} K_max Φ_2 for x_i ∈ X_{n_1}, c_i = τ_{m_1} K_max Φ_2 for x_i ∈ Y_{m_1}, c_i = τ_{n_2} K_max Φ_1 for x_i ∈ X_{n_2}, and c_i = τ_{m_2} K_max Φ_1 for x_i ∈ Y_{m_2}, and get
Σ_{i=1}^{n_1+n_2+m_1+m_2} c²_i = n_1 (τ_{n_1} K_max Φ_2)² + n_2 (τ_{n_2} K_max Φ_1)² + m_1 (τ_{m_1} K_max Φ_2)² + m_2 (τ_{m_2} K_max Φ_1)² = ((n_1 τ²_{n_1} + m_1 τ²_{m_1}) Φ²_2 + (n_2 τ²_{n_2} + m_2 τ²_{m_2}) Φ²_1) K²_max.
We are now ready to prove Theorem 3.
Proof. Define
Div^{(1)}_{n,l,s} = |g_l(N_i^{(n)} + 1, n) g_s(N_i^{(m)}, m) − g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m)| = (l / (N_i^{(n)} − l + 1)) g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m),
Div^{(2)}_{n,l,s} = |g_l(N_i^{(n)} − 1, n) g_s(N_i^{(m)}, m) − g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m)| = (l / N_i^{(n)}) g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m),
Div_{n,l,s} = max(Div^{(1)}_{n,l,s}, Div^{(2)}_{n,l,s}) = (l / N_i^{(n)}) g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m).
Recall the mapping function of the default-plugin-estimator of polynomial GPK, f_φ(S_{i,n}, S_{i,m}) := Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m). Apparently, f_φ is a monotonic function with respect to N_i and M_i; thus, condition 1 of Lemma 4 is satisfied. Since we also have
|f_φ(S_{N_i^{(n)}±1,n}, S_{N_i^{(m)},m}) − f_φ(S_{N_i,n}, S_{M_i,m})| ≤ Σ_{l=0}^{o} Σ_{s=0}^{o} |α_{l,s}| Div_{n,l,s} = Σ_{l=0}^{o} Σ_{s=0}^{o} |α_{l,s}| (l / N_i^{(n)}) g_l(N_i^{(n)}, n) g_s(N_i^{(m)}, m),
condition 2 of Lemma 4 is satisfied.
A.4 PROOF OF COROLLARY 3.4
Proof. Define τ = sup_{N_i∈C_N} ((ρ / N_i) g_ρ(N_i, n)) = (ρ / S_N) g_ρ(S_N, n) and Φ_{n,m} = max{Φ_{n_1,m_1}, Φ_{m_2,n_2}}.
Since we have g_ρ(N, n) / g_ρ(M, n) = g_ρ(N, M), we get
|GPK_E[F, φ_ρ, X, Y] / √Z| ≤ |Σ_{i,j∈[k]} (g_ρ(N_i^{(n_1)}, n) − g_ρ(N_i^{(m_1)}, n)) K_{ij} (g_ρ(N_j^{(n_2)}, n) − g_ρ(N_j^{(m_2)}, n)) / (K_max τ Φ_{n,m} √(2n))|
= |Σ_{i,j∈[k]} (g_ρ(N_i^{(n_1)}, n) − g_ρ(N_i^{(m_1)}, n)) K_{ij} (g_ρ(N_j^{(n_2)}, n) − g_ρ(N_j^{(m_2)}, n)) / (K_max (ρ/S_N) g_ρ(S_N, n) Σ_{i∈[k]} |g_ρ(N_i^{(n)}, n) − g_ρ(N_i^{(m)}, n)| √(2n))|
= |S_N Σ_{i,j∈[k]} (g_ρ(N_i^{(n_1)}, S_N) − g_ρ(N_i^{(m_1)}, S_N)) K_{ij} (g_ρ(N_j^{(n_2)}, S_N) − g_ρ(N_j^{(m_2)}, S_N)) / (K_max ρ √(2n) · Σ_{i∈[k]} |g_ρ(N_i^{(n)}, S_N) − g_ρ(N_i^{(m)}, S_N)|)|.
A.5 DETAILS OF REMARK 3.1
GPK_E[F, φ_1, X, Y] = Σ_{i∈[k]} Σ_{j∈[k]} (g_1(N_i^{(n_1)}, n_1) − g_1(N_i^{(m_1)}, m_1)) K_{ij} (g_1(N_j^{(n_2)}, n_2) − g_1(N_j^{(m_2)}, m_2))
= Σ_{i∈[k]} Σ_{j∈[k]} (N_i^{(n_1)}/n_1 − N_i^{(m_1)}/m_1) K_{ij} (N_j^{(n_2)}/n_2 − N_j^{(m_2)}/m_2)
= (1/(n_1 n_2)) Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} k(x_i, x_{n_1+j}) + (1/(m_1 m_2)) Σ_{i=1}^{m_1} Σ_{j=1}^{m_2} k(y_i, y_{m_1+j}) − (1/(n_1 m_2)) Σ_{i=1}^{n_1} Σ_{j=1}^{m_2} k(x_i, y_{m_1+j}) − (1/(m_1 n_2)) Σ_{i=1}^{m_1} Σ_{j=1}^{n_2} k(y_i, x_{n_1+j})
= MMD²_l[F, X, Y].
A.6 THE CONVERGENCE BOUNDS OF POWER-MMD WITH ρ = 1
When ρ = 1, τ_{n_1} = (ρ / N_i) g_ρ(N_i, n_1) = (1/N_i) · (N_i/n_1) = 1/n_1. For simplicity, consider the case where n_1 = n_2 = m_1 = m_2 = n; from Corollary 3.1 we have
Pr(|GPK_E[F, φ_1, X, Y] − E[GPK_E[F, φ_1, X, Y]]| ≥ a) ≤ 2 exp(−n a² / (2 K²_max Φ²)).
Since Φ = Σ_{j∈[k]} |f_φ(S_{j,n}, S_{j,m})| = Σ_{j∈[k]} |N_j^{(n)}/n − N_j^{(m)}/m| ≤ Σ_{j∈[k]} N_j^{(n)}/n + Σ_{j∈[k]} N_j^{(m)}/m = 2,
we have
Pr(|GPK_E[F, φ_1, X, Y] − E[GPK_E[F, φ_1, X, Y]]| ≥ a) ≤ 2 exp(−n a² / (2 K²_max Φ²)) ≤ 2 exp(−n a² / (8 K²_max)).
Recall that in (Gretton et al., 2012), the deviation bound for the linear-time estimator MMD²_l[F, p, q] is
Pr(|MMD²_l[F, X, Y] − E[MMD²_l[F, X, Y]]| ≥ a) ≤ 2 exp(−n a² / (8 K²_max)).
Thus our bound generalizes the bound in (Gretton et al., 2012) and provides a tighter version.
A.7 DO THE BOUNDS GET TIGHTER AS ρ GROWS?
As we have already seen, the ρ = 1 case is equivalent to MMD_l in (Gretton et al., 2012), so one question arises: would the performance in the ρ > 1 cases be better than in the widely used ρ = 1 case?
According to Corollary 3.2, since exp(−n a² / (8 K²_max)) ≤ exp(−n a² / (8 ρ² K²_max)), the convergence bounds for the ρ > 1 cases seem looser than in the ρ = 1 case, and this may give a negative answer to the question above.
However, the bound above is based on the worst case where sup_i(N_i) = n, such that τ_n ≤ ρ/n and Φ ≤ 2. In practice, we are unlikely to come across such a phenomenon; instead, we may assume sup_i(N_i) to be far smaller.
Without loss of generality, assume we have max(sup_i(N_i^{(n)})/n, sup_i(N_i^{(m)})/m) ≤ 1/α, where α ≥ 1. It is easily seen that
τ_n ≤ (ρ/n) (N_i^{ρ−1} / n^{ρ−1}) ≤ ρ / (α^{ρ−1} n)
and
Φ_{n,m} ≤ Σ_{i∈[k]} (N_i^{(n)}/n)^ρ + Σ_{i∈[k]} (N_i^{(m)}/m)^ρ ≤ 2 ((1/α)^ρ + (1 − 1/α)^ρ).
Define τ := max(τ_{n_1}, τ_{n_2}, τ_{m_1}, τ_{m_2}) and Φ := max(Φ_{n_1,m_1}, Φ_{m_2,n_2}). We have
Z ≤ 4n τ² Φ² K²_max ≤ (16 ρ² / (α^{2ρ−2} n)) ((1/α)^ρ + (1 − 1/α)^ρ)² K²_max = Z_b.
Plotting the value of Z_b for various values of ρ and α in Fig. 1, we can see that for α = 1, the bound becomes looser for larger ρ. However, for α larger than about 1.25, which means that sup_i(N_i) is slightly smaller than the sample size, the bound becomes tighter when ρ is large. This illustrates the benefit of using power-MMD with larger ρ in practice.
We can also obtain a tighter bound according to Corollary 3.1. Practically, it is much more beneficial to calculate τ_n = sup_{i∈[k]} ((ρ / N_i^{(n)}) g_ρ(N_i^{(n)}, n)) and Φ_{n,m} = Σ_{i∈[k]} |g_ρ(N_i^{(n)}, n) − g_ρ(N_i^{(m)}, m)| on the fly.
That is to say, we do not estimate the convergence bounds before we receive the samples; instead, the calculation of the bounds is carried out together with the calculation of the default-plugin-estimators. Remarkably, this is still a distribution-free bound, since we make no assumptions on the probability functions upon which we apply the hypothesis test.
However, the issue is that although Z decreases as ρ increases, GPK_E[F, φ_ρ, X, Y] also decreases as ρ increases. It is not clear how ρ influences the value of |GPK_E[F, φ_ρ, X, Y] / √Z|." } ]
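As a companion to the sections above, the following is a minimal sketch (ours, not part of the paper; Python with NumPy assumed, names hypothetical) of the level-α two-sample test of Corollary 3.3 for power-MMD, with the data-dependent bound of Corollary 3.1 computed on the fly as suggested in Appendix A.7. The sign of the statistic is immaterial, since the test uses its absolute value.

```python
import numpy as np
from math import comb, log, sqrt

def g(rho, N, n):
    # g_rho(N, n) = C(N, rho) / C(n, rho) for rho <= N, else 0
    return comb(N, rho) / comb(n, rho) if rho <= N else 0.0

def power_mmd_test(x, y, K, rho=1, alpha=0.05):
    """Accept/reject the null hypothesis p = q at level alpha (Corollary 3.3).

    x, y : equal-length, even-sized integer samples from p and q,
           with values in {0, ..., k-1}
    K    : (k, k) gram matrix of the kernel on the k values
    """
    k = K.shape[0]
    x1, x2 = np.array_split(x, 2)
    y1, y2 = np.array_split(y, 2)
    n = len(x1)  # assume n1 = n2 = m1 = m2 = n for simplicity
    Nx1, Nx2, Ny1, Ny2 = (np.bincount(s, minlength=k) for s in (x1, x2, y1, y2))

    # f_phi of power-MMD: g_rho(N_i, n) - g_rho(M_i, n) for each value v_i
    f1 = np.array([g(rho, Nx1[i], n) - g(rho, Ny1[i], n) for i in range(k)])
    f2 = np.array([g(rho, Ny2[j], n) - g(rho, Nx2[j], n) for j in range(k)])
    gpk_e = f1 @ K @ f2

    # data-dependent constants of Corollary 3.1, computed on the fly
    tau = lambda N: max(rho / N[i] * g(rho, N[i], n) for i in range(k) if N[i] > 0)
    phi1, phi2 = np.abs(f1).sum(), np.abs(f2).sum()
    Z = ((n * tau(Nx1) ** 2 + n * tau(Ny1) ** 2) * phi2 ** 2
         + (n * tau(Ny2) ** 2 + n * tau(Nx2) ** 2) * phi1 ** 2) * K.max() ** 2
    v = abs(gpk_e) / sqrt(Z) if Z > 0 else 0.0
    return v < sqrt(0.5 * log(2.0 / alpha))  # True = accept p = q
```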
2020
null
SP:5f22f64538ccd28123d51c7f8b16fe056cc5dc0b
[ "The paper presents a method for using meta-learning to train an initial model, which is intended to achieve optimal performance on an arbitrary downstream task when trained for R rounds of federated learning (for some small R). The paper presents a meta-learning-flavored algorithm for conducting the pretraining step, and provides theoretical bounds on the gradient of the resulting loss (Theorem 1). The paper includes some empirical results on CIFAR100, miniImageNet, and FEMNIST, with comparison to some other baselines (although no baselines exist from other works for this particular task, according to the authors).", "The paper presents a generalization of the Gumbel-Softmax gradient estimator. The original Gumbel-Softmax is usually applied to Bernoulli and categorical random variables. The method proposed in the paper attempts to extend its applicability to other discrete distributions, such as Poisson, multinomial, geometric, among others. The main ideas of the approach are: (1) Random variables that may take countably infinite values are truncated, (2) The sampling process of the random variable is converted to a one-hot scheme (where Gumbel-Softmax relaxation is applied), (3) ``One-hot'' samples are reverted to the original sample space." ]
Estimating the gradients of stochastic nodes, which enables gradient descent optimization of neural network parameters, is one of the crucial research questions in the deep generative modeling community. When it comes to discrete distributions, the Gumbel-Softmax trick reparameterizes Bernoulli and categorical random variables by continuous relaxation. However, gradient estimators for discrete distributions other than the Bernoulli and the categorical have not been explored, and the Gumbel-Softmax trick is not directly applicable to other discrete distributions. This paper proposes a general version of the Gumbel-Softmax estimator with a theoretical basis, and the proposed estimator is able to reparameterize generic discrete distributions, broader than the Bernoulli and the categorical. In detail, we utilize the truncation of discrete random variables and the Gumbel-Softmax trick with a linear transformation for the relaxed reparameterization. The proposed approach enables the relaxed discrete random variable to be reparameterized through a large-scale stochastic computational graph. Our experiments consist of (1) synthetic data analyses and applications on VAE, which show the efficacy of our methods; and (2) topic models, which demonstrate the value of the proposed estimation in practice.
[]
[ { "authors": [ "M. Abadi", "P. Barham", "J. Chen", "Z. Chen", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "Ge. Irving", "M. Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "USENIX Symposium on Operating Systems Design and Implementation,", "year": 2016 }, { "authors": [ "Y. Bengio", "N. Leonard", "A. Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,,", "year": 2013 }, { "authors": [ "D.M. Blei", "A.Y. Ng", "M.I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "M. Figurnov", "S. Mohamed", "A. Mnih" ], "title": "Implicit reparameterization gradients", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "W. Grathwohl", "D. Choi", "Y. Wu", "G. Roeder", "D. Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "S. Gu", "S. Levine", "I. Sutskever", "A. Mnih" ], "title": "Muprop: Unbiased backpropagation for stochastic neural networks", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "E.J. Gumbel" ], "title": "Statistical theory of extreme values and some practical applications: a series of lectures (vol", "venue": "US Government Printing Office,", "year": 1948 }, { "authors": [ "E. Jang", "S. Gu", "B. Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "M. Jankowiak", "F. Obermeyer" ], "title": "Pathwise derivatives beyond the reparameterization trick", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "W. Joo", "W. Lee", "S. Park", "I.C. Moon" ], "title": "Dirichlet variational autoencoder", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Efficient gradient-based inference through transformations between bayes nets and neural nets", "venue": "International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "W. Kool", "H. van Hoof", "M. Welling" ], "title": "Estimating gradients for discrete random variables by sampling without replacement", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "R. Liu", "J. Regier", "N. Tripuraneni", "M.I. Jordan", "J. McAuliffe" ], "title": "Rao-blackwellized stochastic gradients for discrete distributions", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "C.J. Maddison", "A. Mnih", "Y.W. Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Y. Miao", "L. Yu", "P. Blunsom" ], "title": "Neural variational inference for text processing", "venue": "International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Y. Miao", "E. Grefenstette", "P. 
Blunsom" ], "title": "Discovering discrete latent topics with neural variational inference", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "A. Mnih", "K. Gregor" ], "title": "Neural variational inference and learning in belief networks", "venue": "International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "A. Mnih", "D.J. Rezende" ], "title": "Variational inference for monte carlo objectives", "venue": "International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "E. Nalisnick", "P. Smyth" ], "title": "Stick-breaking variational autoencoders", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "R. Ranganath", "L. Tang", "L. Charlin", "D. Blei" ], "title": "Deep exponential families", "venue": "Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "R. Ranganath", "A. Perotte", "N. Elhadad", "D. Blei" ], "title": "Deep survival analysis", "venue": "Machine Learning for Healthcare Conference,", "year": 2016 }, { "authors": [ "A. Srivastava", "C. Sutton" ], "title": "Autoencoding variational inference for topic models", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "G. Tucker", "A. Mnih", "C.J. Maddison", "J. Lawson", "J. Sohl-Dickstein" ], "title": "Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "R.J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "J. Wu", "Y. Rao", "Z. Zhang", "H. Xie", "Q. Li", "F.L. Wang", "Z. Chen" ], "title": "Neural mixed counting models for dispersed topic discovery", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "M. Xu", "M. Quiroz", "R. Kohn", "S.A. Sisson" ], "title": "Variance reduction properties of the reparameterization trick", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Miao" ], "title": "super-topics and sub-topics including the vocabularies", "venue": null, "year": 2016 }, { "authors": [ "Stick-breaking VAE (Nalisnick", "Smyth" ], "title": "2017) which utilizes Dirichlet process in the latent variable of VAE, and the authors finitized the number of sticks by the human guidance. Second, while there is no previous work on reparameterization trick for fully-non-parametric categorical selection, if we finitize the number of categories with the truncated distributions suggested", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Stochastic computational graphs, including deep generative models such as variational autoencoders, are widely used for representation learning. Optimizing the network parameters through gradient methods requires an estimation of the gradient values, but the stochasticity requires the computation of an expectation, which differentiates this problem from the deterministic gradients of ordinary neural networks. There are two common ways of obtaining the gradients: score function-based methods and reparameterization methods. Score function-based estimators tend to result in unbiased gradients with high variance, while reparameterization estimators tend to result in biased gradients with low variance (Xu et al., 2019). Hence, the core technique of score function-based estimators is reducing the variance of the gradients to achieve stable and fast optimization. Meanwhile, utilizing reparameterization estimators requires a differentiable non-centered parameterization (Kingma & Welling, 2014a) of the random variables.
If we focus on reparameterization estimators, one of the most popular examples is the reparameterization in the Gaussian variational autoencoder (VAE) (Kingma & Welling, 2014b), which has an exact reparameterization form. Other VAEs with explicit priors suggest reparameterization tricks with approximations (Nalisnick & Smyth, 2017; Joo et al., 2020). For continuous random variables, it is feasible to estimate gradients with automatic differentiation by utilizing a transport equation (Jankowiak & Obermeyer, 2018) or an implicit reparameterization (Figurnov et al., 2018). However, these methods are not applicable to discrete random variables, due to non-differentiability.
Recently, some discrete random variables, such as Bernoulli or categorical random variables, have been well explored in terms of reparameterization by overcoming this difficulty through continuous relaxation (Jang et al., 2017; Maddison et al., 2017). However, other discrete distributions have not been explored from the learning perspective in the deep generative modeling community, for example, the Poisson, binomial, multinomial, geometric, and negative binomial distributions. Prior works on graphical models, such as Ranganath et al. (2015; 2016), utilized Poisson latent variables for latent counting. Another line of work (Wu et al., 2020) utilized a Gaussian approximation of the Poisson latent variable to count the number of words, which can be a poor approximation if the rate parameter is small. In this sense, further study of stochastic gradient estimators for discrete distributions is needed in the deep generative modeling community, which would broaden the choice of prior assumptions and the utilization of various distributions.
This paper proposes a generalized version of the Gumbel-Softmax reparameterization trick, which can be applied to generic discrete random variables through continuous relaxation, namely Generalized Gumbel-Softmax (GENGS). The key ideas of GENGS are (1) a conversion of the sampling process to a one-hot categorical selection process; (2) a reversion of the selected category in the one-hot form to the original sample value; and (3) a relaxation of the categorical selection process into the continuous form. To implement these steps, GENGS first truncates discrete random variables to approximate the distribution with a finite number of possible outcomes.
Afterward, GENGS utilizes the Gumbel-Softmax trick together with a special form of a linear transformation. Our main theorem supports that the proposed GENGS is applicable to general discrete random variables, other than the Bernoulli and the categorical. The GENGS experiments show the efficacy with synthetic examples and VAEs, as well as the usability in topic model application." }, { "heading": "2 PRELIMINARY: REPARAMETERIZATION TRICK & GUMBEL-SOFTMAX", "text": "" }, { "heading": "2.1 BACKPROPAGATION THROUGH STOCHASTIC NODES WITH REPARAMETERIZATION TRICK", "text": "Suppose we have a stochastic node, or a latent variable, z ∼ p(z|θ), where the distribution depends on parameter θ. The goal is optimizing the loss function L(θ, η) = Ez∼p(z|θ)[fη(z)], where fη is a continuous and differentiable function with respect to η, i.e., neural networks. To optimize the loss function in terms of θ through the gradient methods, we need to find ∇θL(θ, η) = ∇θEz∼p(z|θ)[fη(z)], which can not be directly computed with its original form.\nTo compute ∇θL(θ, η), the reparameterization trick alternatively introduces an auxiliary variable ∼ p( ), which takes over all randomness of the latent variable z, so the sampled value z can be re-written as z = g(θ, ), with a deterministic and differentiable function g in terms of θ. Figure 1(a) illustrates the reparameterization trick: the shaded nodes indicate random nodes, and the dotted lines denote sampling processes. Here, the gradient of the loss function with respect to θ is derived as\n∇θL = ∇θEz∼p(z|θ)[fη(z)] = E ∼p( )[∇θfη(g(θ, ))] = E ∼p( )[∇gfη(g(θ, ))∇θg(θ, )] (1)\nwhere the last term of Equation 1 is now achievable. A condition on enabling the reparameterization trick is the assumption of the continuity of the random variable z, so the distribution of z is limited to a class of continuous distributions. To utilize the differentiable reparameterization trick on discrete random variables, continuous relaxation can be applied: for example, a relaxation from the categorical distribution to the Gumbel-Softmax distribution, described in the next subsection." }, { "heading": "2.2 REPARAMETERIZATION TRICK ON CATEGORICAL RANDOM VARIABLE", "text": "A Gumbel-Max trick (Gumbel, 1948) is a procedure for sampling a one-hot categorical value from the Gumbel distribution, instead of direct sampling from a categorical distribution. This implies that the categorical random variable X ∼ Categorical(π), where π lies on the (n− 1)-dimensional simplex ∆n−1, can be reparameterized by the Gumbel-Max trick: (1) sample uj ∼ Uniform(0, 1)\nto generate a gumbel sample gj = − log(− log uj) for each j = 1, · · · , n; and (2) compute k = argmaxnj=1[log πj + gj ], where π is a categorical parameter. This procedure generates a one-hot sample x such that xj = 0 for j 6= k and xk = 1 with P (Xk = 1) = πk. We denote GM(π) to be the distribution whose samples are generated by the Gumbel-Max trick.\nA Gumbel-Softmax trick (Jang et al., 2017; Maddison et al., 2017) is an alternative to the GumbelMax trick that continuously relaxes a categorical random variable. The Gumbel-Softmax utilizes the softmax with a temperature τ > 0, instead of the argmax in the sampling process, which enables (1) relaxing the discreteness of the categorical random variable to the one-hot-like form x(τ) = softmax ( log π+g\nτ\n) in the continuous domain; and (2) approximating the Gumbel-Max by\ntaking τ small enough. 
Lately, the Gumbel-Softmax estimator has been widely used to reparameterize categorical random variables, e.g., as RelaxedOneHotCategorical in TensorFlow (Abadi et al., 2016). We denote by GS(π, τ) the distribution generated by the Gumbel-Softmax trick." }, { "heading": "3 PROCESS OF GENGS REPARAMETERIZATION", "text": "This section discusses the process of GENGS to help understand the concept with minimal theoretical details, and Section 4 provides the theoretical background of GENGS. The three steps of GENGS are the following: (1) approximate a discrete distribution by truncating the distribution; (2) reparameterize the truncated distribution with the Gumbel-Max trick and the linear transformation T, which will be introduced below; and (3) relax the discreteness by replacing the Gumbel-Max trick in Step 2 with the Gumbel-Softmax trick. Figure 1(b) illustrates the full steps of the GENGS trick.
Step 1. Truncate the discrete distribution to finitize the number of possible outcomes. Suppose X ∼ Poisson(100), which has a mode near x = 100 and near-zero probabilities at x < 50 and x > 150. The key idea of the first step is ignoring the outcomes of near-zero probability at certain levels (e.g., x < 50 and x > 150) and focusing only on the probable samples of meaningful probability (e.g., 50 ≤ x ≤ 150), i.e., truncating the distribution, which finitizes the support of the distribution. Now, suppose we have a discrete random variable X ∼ D(λ) and its truncated random variable Z ∼ TD(λ, R), where R denotes the truncation range that needs to be pre-defined. Proposition 3 in Section 4 provides the theoretical reason that Z approximates X. Since we finitized the support by the truncation, we may assume Z has a support C = {c_0, ..., c_{n−1}} of n possible outcomes and a corresponding constant outcome vector c = (c_0, ..., c_{n−1}). Note that the ordering of the c_k is not significant, and Appendix E provides examples of the setting of c.
Step 2. Divide the sampling process of Z into two parts: select a one-hot category of Z, and revert the selected one-hot category into the original value. For example, if the sampled value of Z is c_2 ∈ C, we first focus on the one-hot category class vector one_hot(c_2) = (0, 0, 1, 0, ..., 0), rather than the sampled value c_2. Such a one-hot categorical selection is possible by utilizing the categorical selection w ∼ Categorical(π) or its reparameterized version, the Gumbel-Max trick GM(π). Here, the categorical parameter π = (π_0, ..., π_{n−1}) can be directly calculated from the explicit probability mass function (PMF) of the distribution, i.e., π_k = P(Z = c_k). However, the PMF of the truncated distribution requires a modification of the PMF of the original distribution, which is determined by how we define Z from X. See Definitions 1 and 2 and Appendix A for the detailed configuration of π. Suppose we now have a one-hot categorical sample w drawn with the categorical parameter π. Afterward, we revert the one-hot selected categorical vector w = (w_0, ..., w_{n−1}) into the original sample value with the linear transformation T(w) = Σ_k w_k c_k = Σ_k (w ⊙ c)_k. Proposition 4 in Section 4 shows the validity of this alternative sampling process.
Step 3. Relax the one-hot categorical selection into the continuous form by utilizing the Gumbel-Softmax trick. Up to now, the sole shortcoming of the reparameterization trick is the non-differentiability of the one-hot category-selecting Gumbel-Max process.
Then, as in Section 2.2, the process can be continuously relaxed with the Gumbel-Softmax trick GS(π, τ) for some temperature τ. Theorem 5 in Section 4 shows that the alternative sampling process still holds under the continuous relaxation." }, { "heading": "4 THEORETICAL BACKGROUND OF GENGS", "text": "To support our reparameterization methodology, this section provides the main theorem on the reparameterization. The first proposition approximates an original discrete distribution with its truncated version. Next, the second proposition enables the truncated distribution to be reparameterized by the Gumbel-Max trick. Finally, the main theorem shows that the Gumbel-Softmax trick converges to the Gumbel-Max trick under an assumption on the linear transformation. Through these steps, we note that our proposed reparameterization trick is generalized and theoretically grounded." }, { "heading": "4.1 FINITIZING THE SUPPORT BY TRUNCATING DISCRETE RANDOM VARIABLES", "text": "Definition 1 specifies a truncated discrete random variable that truncates the right-hand side. Note that Definition 1 can be easily extended to truncate the left-hand side or both sides of a distribution. Definition 2 is the both-side truncation version, and Appendix A discusses its variations. However, we focus on non-negative distributions in the remainder of this subsection, since most of the popularly used discrete random variables have the support N_{≥0}. Definition 1. (A special case of right-hand-side truncation for non-negative discrete random variables) A truncated discrete random variable Z_n of a non-negative discrete random variable X ∼ D(λ) is a discrete random variable such that Z_n = X if X ≤ n−1, and Z_n = n−1 if X ≥ n. The random variable Z_n is said to follow a truncated discrete distribution TD(λ, R) with a parameter λ and a truncation range R = [0, n). Alternatively, we write the truncation level as R = n if the left truncation is at zero in the non-negative case. Definition 2. (Both-side truncation for general discrete distributions) A truncated discrete random variable Z_{m,n} of a discrete random variable X ∼ D(λ) is a discrete random variable such that (1) Z_{m,n} = X if m < X < n; (2) Z_{m,n} = n−1 if X ≥ n; and (3) Z_{m,n} = m+1 if X ≤ m. The random variable Z_{m,n} is said to follow a truncated discrete distribution TD(λ, R = (m, n)) with a parameter λ and a truncation range R = (m, n).
As we discussed in Section 3, truncating the distribution finitizes the number of possible outcomes so that the categorical selection can be utilized. By the definition, the samples of the finitized support can simply be taken as c_k = k in this special non-negative case. Furthermore, by the definition, the modification of the PMF only affects the last category c_{n−1} = n−1, and the modified PMF can be computed by injecting the near-zero remaining probability mass into the last category, right before the truncation level. In other words, π_k = P(Z_n = c_k) = P(Z_n = k) = P(X = k) for k = 0, ..., n−2 and π_{n−1} = 1 − Σ_{k=0}^{n−2} π_k; hence, the sum-to-one property remains satisfied. The idea leading to Proposition 3 is that if we take the truncation level far enough from zero, we can cover most of the possible outcomes that can be sampled from the original distribution. Note that the truncation step can be omitted if the original distribution already has a finite support, but one can still utilize the truncation to ignore unlikely samples. Proposition 3. With Definition 1, Z_n converges to X almost surely as n → ∞.
Also, with Definition 2, Z_{m,n} converges to X almost surely as m → −∞ and n → ∞.
The almost sure convergence property of Proposition 3 provides the theoretical basis for approximating a discrete random variable D(λ) with the truncated random variable TD(λ, n), and Appendix A shows the detailed proof. Through the truncation, the discrete distribution is approximated with a finitized support, and the Gumbel tricks are ready to be utilized." }, { "heading": "4.2 REPARAMETERIZATION BY GENERALIZED GUMBEL-SOFTMAX TRICK", "text": "Next, we select a one-hot categorical sample from the finitized categories and revert the one-hot selection to the original sample value. The widely utilized discrete distributions have explicit forms of the PMF, so we can directly compute the PMF values on the truncated support with a pre-defined truncation range. Once the distribution and the truncation range are fixed as D(λ) and R, respectively, we have the corresponding constant outcome vector c = (c_0, ..., c_{n−1}) and the computed PMF value vector π = (π_0, ..., π_{n−1}) of the truncated distribution TD(λ, R), where π_k = TD(c_k; λ, R). Afterward, we define a transformation T(w) = Σ_k w_k c_k = Σ_k (w ⊙ c)_k. Additionally, we denote the distributions generated by applying T to the samples of GM and GS as T(GM) and T(GS), respectively. Then, we can generate a sampled value of TD(λ, R) with a linear transformation and a Gumbel-Max sample, as stated in Proposition 4, proved in Appendix B. Proposition 4. For any truncated discrete random variable Z ∼ TD(λ, R) of a discrete distribution D(λ) and a transformation T, Z can be reparameterized by T(GM(π)) if we set π_k = P(Z = c_k).
Due to the reparameterization, the randomness of TD(λ, R) with respect to the parameter λ moves into the uniform random variable in the Gumbel-Max, since T is a continuous and deterministic function. Then, TD(λ, R) can be approximated by replacing the Gumbel-Max with the Gumbel-Softmax in T, as stated in Theorem 5, proved in Appendix C. We define T(GS(π, τ)) as GENGS(π, τ). Theorem 5. For a transformation T and a given categorical parameter π ∈ ∆^{n−1}, the convergence property of the Gumbel-Softmax to the Gumbel-Max still holds under the linear transformation T, i.e., GS(π, τ) → GM(π) as τ → 0 implies GENGS(π, τ) → T(GM(π)) as τ → 0.
Theorem 5 implies that we can relax and approximate the truncated discrete random variable TD(λ, R) by the Gumbel-Softmax and the linear transformation. The assumption of the theorem that GS(π, τ) → GM(π) as τ → 0 has not been mathematically proven in the original literature (Jang et al., 2017; Maddison et al., 2017). Instead, the authors have empirically shown that GS(π, τ) eventually becomes GM(π) as τ → 0. Figure 2 shows how GENGS gets closer to the original distribution by adjusting the truncation range and the temperature.
Truncation Range. We can observe that the approximation becomes closer to the original distribution as we widen the truncation range R. However, the increment of R is technically limited by the finite neural network output for the inference. Note that the choice of truncation range is crucial in terms of covering the probable samples. Therefore, we set the truncation range to cover all but less than 1e-5% of the probability with respect to the prior distribution, or arbitrarily large.
Temperature. Decreasing τ in softmax((log π + g)/τ) results in a distribution closer to the original distribution.
However, an initially small τ leads to high bias and variance of the gradients, which becomes problematic at the learning stage of π. Therefore, annealing τ from a large value to a small value is necessary to provide a learning chance for π.
Note that there is no condition on the distribution to be reparameterized with GENGS in the theoretical analysis. Hence, once a discrete distribution has an explicit PMF, GENGS can easily be utilized for the reparameterization. Appendix E suggests examples of GENGS utilization, including one that shows the regular Gumbel-Softmax is a special case of GENGS. Appendix F provides the algorithm of GENGS, and Appendix G gives discrete distributions in TensorFlow which can utilize GENGS." }, { "heading": "5 EXTENSION OF GENGS: IMPLICIT INFERENCE & DISCRETIZATION", "text": "Implicit Inference. Unlike continuous random variables, discrete random variables have countable outcomes. Instead of inferring the distribution parameter λ and then computing the PMF values through the fixed PMF and λ, we can directly infer the PMF values π of the possible outcomes with a softmax, which becomes the input of the Gumbel tricks, thereby loosening the assumption on the approximate posterior PMF shape. This implicit inference on the PMF values becomes possible because truncating the distribution finitizes the possible outcomes. However, this inference approach needs a regularizer, such as the KL divergence term in the objective function of VAEs, which encourages the distribution shape to be similar to a prior distribution with a pre-defined parameter. We found that loosening the approximate posterior assumption leads to a significant performance gain in our VAE experiments. See Appendix F for the algorithm of the implicit inference.
Discretization of Continuously Relaxed Sample. GENGS outputs a continuously reparameterized sample value, since we relax the discrete random variable into a continuous form. By utilizing the Straight-Through (ST) Gumbel-Softmax estimator (Bengio et al., 2013; Jang et al., 2017), instead of the naive Gumbel-Softmax, we can obtain a discrete sample as well. Since the ST Gumbel-Softmax discretizes the relaxed Gumbel-Softmax output with an argmax, it uses the gradients obtained from the relaxed samples, which could result in significant performance degradation." }, { "heading": "6 RELATED WORK", "text": "GENGS is basically a single-sample gradient estimator like other reparameterization gradient estimators. Though GENGS could use multiple samples to obtain stable gradients, we compare GENGS with the other estimators using a single sample to test the fundamental performance of the gradient estimators. RF denotes the basic REINFORCE (Williams, 1992). NVIL (Mnih & Gregor, 2014) utilizes a neural network to introduce the optimal control variate. MUPROP (Gu et al., 2016) utilizes the first-order Taylor expansion of the loss term as a control variate. VIMCO(k) (Mnih & Rezende, 2016) is designed as a k-sample gradient estimator. REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017) utilize the reparameterization trick for constructing the control variate. Deterministic RaoBlack (DETRB) (Liu et al., 2019) uses a weighted combination of the exact gradients from m selected categories and the estimated gradients from the remaining categories, weighted by their odds, to reduce the variance.
The idea of Stochastic RaoBlack (STORB) (Kool et al., 2020) is essentially the same as that of DETRB, but STORB randomly chooses the categories at each step instead of using fixed categories. Kool et al. (2020) also suggested an unordered set gradient estimator (UNORD), which likewise uses multiple sampled gradients, utilizing sampling without replacement. For DETRB, STORB, and UNORD, we use m = 1 category with the exact gradient for a fair comparison. Note that if there are K > 1 dimensions to be inferred, these models require computing m^K gradient combinations, which has higher complexity than GENGS. The ∗ symbol denotes a variation that utilizes a built-in control variate introduced in the work of Kool et al. (2020)." }, { "heading": "7 EXPERIMENT", "text": "" }, { "heading": "7.1 SYNTHETIC EXAMPLE", "text": "Experimental Setting. In this experiment, we expand the toy experiments of Tucker et al. (2017); Grathwohl et al. (2017) to various discrete distributions. We first fix constants t_1, ..., t_k, and then optimize the loss function E_{z∼p(z|λ)}[Σ_{i=1}^{k} (z_i − t_i)²] with respect to λ. Here, we set p(z|λ) as Poisson(20), Binomial(20, .3), Multinomial(3, [.7, .2, .1]), and NegativeBinomial(3, .4) in this experiment. We also adapt the Rao-Blackwellization (RB) idea in GENGS by utilizing m = 1 category in calculating the selected gradient, resulting in GENGS-RB, which estimates the remaining gradients by GENGS. See Appendix J for the detailed experimental settings.
Experimental Result. Figure 3 compares the log-loss and the log-variance of the estimated gradients from the various estimators. In the figure, the log-loss needs to be minimized to correctly estimate the backpropagated gradient value in the learning process. Additionally, the log-variance needs to be minimized to maintain the consistency of the gradients, so that the gradient descent can be efficient. GENGS shows the best log-loss and the best log-variance when it keeps the continuous relaxation of the modeled discrete random variable. For the Poisson case, the exact gradient has a closed-form solution, as in Appendix J, and GENGS shows the lowest bias among all gradient estimators. See Appendix J for the curves with confidence intervals and the curves without smoothing." }, { "heading": "7.2 VAE: SYNTHETIC EXPERIMENT ON DEEP GENERATIVE MODELS", "text": "Experimental Setting. We follow the VAE experiments of Figurnov et al. (2018) with discrete priors, which diversify the choice of prior assumption toward latent factor counts in the discrete case. This experiment utilizes the Poisson, the geometric, and the negative binomial distributions. The evidence lower bound (ELBO) L(x) = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x)||p_θ(z)), which consists of the reconstruction part and the KL divergence part, is maximized during the training period. Optimizing the ELBO of VAEs requires computing the KL divergence between the approximate posterior and the prior distributions. In GENGS, by truncating the original distribution, the KL divergence reduces to a KL divergence between categorical distributions. See Appendix H for the detailed statement and proof.
Note that the purpose of the VAE experiments is not to compare the performance across various prior distributions.
The VAE is considered a more challenging task than the synthetic example in the last subsection, since (1) this task requires computing the gradients of the encoder network parameters through the latent distribution parameter λ; and (2) each stochastic gradient of the latent dimension affects every encoder parameter, since we utilize fully-connected layers. Hence, a single poorly estimated gradient with respect to the latent distribution parameter λ can negatively affect the learning of the encoder parameters. See Appendix K for the detailed experimental settings.
Experimental Result. Table 1 shows the negative ELBO results of the VAE experiments. We found that some baselines failed to reach the optimal point, so we excluded those estimators in such suboptimal cases. The variants of GENGS showed the lowest negative ELBO in general, and the idea of loosening the PMF condition (i.e., the implicit inference) reached better optima. Figure 4 shows the reconstructed images by VAEs with various gradient estimators on MNIST and OMNIGLOT. GENGS produces the clearest images and better reconstructions, which aligns with the quantitative results in Table 1. See Appendix K for the full table and additional discussion.
7.3 TOPIC MODEL APPLICATION
Experimental Setting. This experiment shows another application of GENGS, in topic modeling. The Poisson distribution is one of the most important discrete distributions for counting the number of outcomes. The authors of Deep Exponential Families (DEFs) (Ranganath et al., 2015) utilized the exponential family, including the Poisson distribution, on the stacked latent layers. Therefore, we focus on the Poisson DEF, which assumes Poisson latent layers to capture the counts of latent super-topics and sub-topics; and we convert the Poisson DEF into a neural variational form, which resembles NVDM (Miao et al., 2016). Figure 5 shows the neural network and its corresponding probabilistic modeling structure. We utilize GENGS on the Poisson DEF to sample the values of the latent variables, namely the neural variational Poisson DEF (NVPDEF). See Appendix L for a further description of DEFs and NVPDEF and the detailed experimental settings.
Experimental Result. We enumerate the baselines and the variants of NVPDEF in Table 2, and we confirmed that NVPDEF shows the lowest perplexity overall on 20Newsgroups and RCV1-V2. Since NVPDEF and the original DEFs have different training and testing regimes, we compare NVPDEF to representative neural variational topic (document) models, which are listed in Table 2. Additionally, Appendix L shows the qualitative results of the topic models." }, { "heading": "8 CONCLUSION", "text": "This paper suggests a new gradient estimator for general discrete random variables, a generalized version of the Gumbel-Softmax estimator. To strengthen the practical usage of reparameterization tricks with the Gumbel-Softmax function, we provide a theoretical background for our reparameterization trick. Our finding is that a discrete random variable can always be reparameterized through the proposed GENGS algorithm. The limitation of GENGS is the setting of the truncation level and the Gumbel-Softmax temperature, which becomes a trade-off between the gradient estimation accuracy and the time budget. Subsequently, we show the synthetic analysis and the VAE experiments, as well as the topic model application of GENGS.
With the generalization, we expect that GENGS can diversify the options of distributions in the deep generative model community." }, { "heading": "A PROOF OF PROPOSITION 3 & TRUNCATING BOTH SIDES", "text": "" }, { "heading": "A.1 PROOF OF PROPOSITION 3 FOR TRUNCATING RIGHT-HAND-SIDE", "text": "Before restating Definition 1 and Proposition 3 of the main paper, we first introduce the definition of almost sure convergence of a sequence of random variables. Definition. (Almost Sure Convergence) For a sequence of random variables {X_n}_{n=1}^{∞}, X_n converges almost surely to a random variable X if
P(lim_{n→∞} X_n = X) = P({ω ∈ Ω | lim_{n→∞} X_n(ω) = X(ω)}) = 1   (2)
for a sample space Ω. Definition. (A special case of right-hand-side truncation for non-negative discrete random variables) A truncated discrete random variable Z_n of a non-negative discrete random variable X ∼ D(λ) is a discrete random variable such that Z_n = X if X ≤ n−1, and Z_n = n−1 if X ≥ n. The random variable Z_n is said to follow a truncated discrete distribution TD(λ, R) with a parameter λ and a truncation range R = [0, n). Alternatively, we write the truncation level as R = n if the left truncation is at zero in the non-negative case.
With this definition, as we discussed in the main text, the constant outcome vector can be defined as c = (0, 1, 2, ..., n−1). Note that for Z_n = 0, 1, ..., n−2, the sample spaces of Z_n and X coincide; hence, P(Z_n) = P(X) in such cases. This implies that the modified PMF at Z_n = k can be computed using the PMF of X at k in such cases (note that we use distributions X with explicitly known PMFs), i.e., P(Z_n = k) = P(X = k) for k = 0, 1, ..., n−2. Then, consequently, P(Z_n = n−1) = P(X ≥ n−1) = 1 − Σ_{k=0}^{n−2} P(Z_n = k) due to the sum-to-one property of probability. Hence, the categorical parameter π corresponding to the constant outcome vector c can be computed using the PMF of the original discrete random variable X: π_k = P(Z_n = k) = P(X = k) for k = 0, 1, ..., n−2, and π_{n−1} = P(Z_n = n−1) = P(X ≥ n−1) = 1 − Σ_{k=0}^{n−2} π_k. Proposition. With Definition 1, Z_n converges to X almost surely as n → ∞.
Proof. Note that {ω ∈ Ω | Z_n(ω) = X(ω)} = {ω ∈ Ω | X(ω) < n}. Then, we have the following:
P(lim_{n→∞} Z_n = X) = P({ω ∈ Ω | lim_{n→∞} Z_n(ω) = X(ω)})   (3)
= P({ω ∈ Ω | lim_{n→∞} Z_n(ω) = lim_{n→∞} X(ω) < lim_{n→∞} n = ∞})   (4)
= P({ω ∈ Ω | X(ω) < ∞})   (5)
= P(X < ∞)   (6)
= 1   (7)
since X is a non-negative discrete random variable, and its cumulative distribution function F_X(x) is non-decreasing as x → ∞ with 0 ≤ F_X ≤ 1." }, { "heading": "A.2 PROPOSITION 3 FOR TRUNCATING BOTH SIDES", "text": "Below is the both-side truncation version of Definition 2 and Proposition 3 in the main paper. For simplicity, we assume the distribution has support Z. Definition. (Both-side truncation for general discrete distributions) A truncated discrete random variable Z_{m,n} of a discrete random variable X ∼ D(λ) is a discrete random variable such that Z_{m,n} = X if m < X < n; Z_{m,n} = n−1 if X ≥ n; and Z_{m,n} = m+1 if X ≤ m. The random variable Z_{m,n} is said to follow a truncated discrete distribution TD(λ, R = (m, n)) with a parameter λ and a truncation range R = (m, n).
With this definition, the constant outcome vector can be defined as c = (m+1, ..., n−1). Note that for Z_{m,n} = m+2, ..., n−2, the sample spaces of Z_{m,n} and X coincide; hence,
This implies that the modified PMF of Zm,n = k can be computed by using the PMF of X = k in such cases, i.e., P (Zm,n = k) = P (X = k) for k = m + 2, · · · , n − 2. Then, with the definition, P (Zm,n = m + 1) = P (X ≤ m + 1) and P (Zm,n = n − 1) = P (X ≥ n − 1). However, during the implementation, this definition can be a problem since the configuration implies that there might be a case that we have to compute two infinite sums, which are P (Zm,n = m + 1) = P (X ≤ m + 1) = ∑ k≤m+1 P (X = k) and\nP (Zm,n = n− 1) = P (X ≥ n− 1) = ∑ k≥n−1 P (X = k)\n2. Hence, we also provide an alternative configuration of Zm,n in the later of this section.\nProposition. With definition above for the truncating both sides, Zm,n converges to X almost surely as m→ −∞ and n→∞.\nProof. Note that {ω ∈ Ω|Zm,n(ω) = X(ω)} = {ω ∈ Ω|m < X(ω) < n}. Then, we have the following:\nP ( lim n→∞ m→−∞ Zm,n = X) = P ({ω ∈ Ω| limn→∞ m→−∞ Zm,n(ω) = X(ω)}) (8)\n= P ({ω ∈ Ω| −∞ = lim m→−∞ m < X(w) < lim n→∞ n =∞}}) (9)\n= P ({ω ∈ Ω| −∞ < X(ω) <∞}) (10) = P (−∞ < X <∞) (11) = P (X <∞)− P (−∞ < X) (12) = 1 (13)\nsince X is a discrete random variable, and its cumulative distribution function FX(x) has the following properties: non-decreasing as x→∞, non-increasing as x→ −∞, and 0 ≤ FX ≤ 1.\nAs we discussed above, during the implementation, computing the small probabilities of both left and right tail can cause a problem, either it can have high complexity3 or even impossible4. Hence, when we can consider the alternative definition such as Zm,n = { X , if m < X < n n− 1, if X ≥ n or X ≤ m ,\nwhich has a simpler PMF computation. In this case, we can simply add the remaining probability of unlikely samples at the right-most value, and moreover, the alternative configuration does not harm the proof of the proposition. Hence, with the alternative defitnition, the the corresponding categorical parameter π = (πm+1, · · · , πn−1) of the constant outcome vector c = (m+ 1, · · · , n− 1) can be computed using the PMF of the original discrete random variable X: πk = P (Zn = k) = P (X = k) for k = m + 1, · · · , n − 2, and πn−1 = P (Zn = n − 1) = P (X ≥ n − 1) + P (X < m + 1) = 1− ∑n−2 k=m+1 πk." }, { "heading": "B PROOF OF PROPOSITION 4", "text": "Proposition. For any truncated discrete random variable Z ∼ TD(λ,R) of discrete distribution D(λ) and a transformation T , Z can be reparameterized by T (GM(π)) if we set πk = P (Z = ck).\nProof. Note that Z has two parameters: the distribution parameter λ and the truncation range R. Assume that we have possible outcome set C = {c0, · · · , cn−1} of n possible outcomes by truncating the distribution with truncation range R. Note that the transformation is defined as T (w) = ∑n−1 k=0 wkck = ∑ k w c. By pre-defining the truncation range R as a hyper-parameter, the randomness of Z is fully dependent on the distribution parameter λ. Now, we introduce the Gumbel random variable G = − log(− logU) where U ∼ Uniform(0, 1) as an auxiliary random variable. 
Then, given a categorical parameter π ∈ ∆^{n−1}, any n-dimensional one-hot vector e_j = (0, · · · , 0, 1, 0, · · · , 0), which has 1 in the jth entry and 0 in all other entries, can be reparameterized by the Gumbel-Max trick.

²Or, there might be a case where one of P(Z_{m,n} = m + 1) = P(X ≤ m + 1) and P(Z_{m,n} = n − 1) = P(X ≥ n − 1) requires a summation of high complexity, even though it is a finite summation.

³For example, if we truncate Poisson(1000) with truncation range R = [900, 1100], we have to compute PMF values for 0 ≤ x < 900 to sum up the negligible left-hand probability, which causes high complexity.

⁴For example, the case when the distribution has infinite support on both the left (−∞) and right (∞) sides.

Suppose we have a sample c_m from Z, and note that we know the PMF values π = (π_0, · · · , π_{n−1}) of Z by the definition of Z. Then, with the transformation T and the constant outcome vector c = (c_0, · · · , c_{n−1}), the following holds:

c_m = ∑_{k=0}^{n−1} c_k [e_m]_k = T(e_m). (14)

Since the transformation T is also a deterministic function, by introducing the Gumbel random variable as an auxiliary random variable, we can move the randomness of Z from λ (or π in the implicit inference case) to the uniform random variable composing the Gumbel random variable. Hence, the truncated discrete random variable Z can be reparameterized by the Gumbel-Max trick and the transformation T." }, { "heading": "C PROOF OF THEOREM 5", "text": "Theorem. For a transformation T and a given categorical parameter π ∈ ∆^{n−1}, the convergence property of Gumbel-Softmax to Gumbel-Max still holds under the linear transformation T, i.e., GS(π, τ) → GM(π) as τ → 0 implies GENGS(π, τ) → T(GM(π)) as τ → 0.

Proof. Suppose we are given a categorical parameter π ∈ ∆^{n−1}. Define f_M to be the Gumbel-Max trick function, and f_S^τ to be the Gumbel-Softmax trick function with a temperature τ > 0, where both functions take the categorical parameter and a Gumbel sample as inputs. Note that f_M returns a one-hot vector which has 1 in the argmax entry after the Gumbel perturbation, and f_S^τ returns a one-hot-like softmax activation value with the temperature τ under the Gumbel perturbation.

Draw a random sample u ∼ Uniform(0, 1)^n, which defines the Gumbel sample g for the Gumbel perturbation. Assume that log π_m − log(− log u_m) > log π_j − log(− log u_j) for all j ≠ m, i.e., the sample selected as a category is c_m out of the possible outcome set {c_0, · · · , c_{n−1}}. Therefore, for the Gumbel-Max trick, it is clear that T(f_M(π, g)) = ∑_{k=0}^{n−1} [e_m]_k · c_k = c_m for c = (c_0, · · · , c_{n−1}), where e_m = (0, · · · , 0, 1, 0, · · · , 0) is the n-dimensional one-hot vector which has 1 in the mth entry and 0 in all other entries. Then, the statement that GS(π, τ) → GM(π) as τ → 0 implies f_S^τ(π, g) → f_M(π, g), i.e., the following:

[f_S^τ(π, g)]_j = exp((log π_j − log(− log u_j))/τ) / ∑_{k=0}^{n−1} exp((log π_k − log(− log u_k))/τ) (15)
→ 1 if j = m, and → 0 if j ≠ m, as τ → 0. (16)

Then, f_S^τ(π, g) = ẽ_m for some relaxed one-hot version of e_m obtained by introducing the softmax relaxation. As a consequence,

[f_S^τ(π, g)]_j × c_j = [exp((log π_j − log(− log u_j))/τ) / ∑_{k=0}^{n−1} exp((log π_k − log(− log u_k))/τ)] × c_j (17)
→ c_m if j = m, and → 0 if j ≠ m, as τ → 0, (18)

since the constant multiplication gives no harm to the approximation. Hence, by taking the summation, which also gives no harm to the approximation, T(f_S^τ(π, g)) = ∑_{k=0}^{n−1} [ẽ_m]_k · c_k → c_m = T(f_M(π, g))."
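To make the reparameterization above concrete, the following is a minimal Python sketch of the explicit GENGS trick for a right-truncated Poisson, using only NumPy/SciPy. It is an illustrative sketch rather than the paper's released implementation; the names (gengs_sample, trunc_level, tau) and the choice of Poisson(7) with truncation level 25 are assumptions.

```python
import numpy as np
from scipy.stats import poisson

def gengs_sample(rate, trunc_level, tau, rng):
    # Constant outcome vector c = (0, 1, ..., n-1) for truncation range [0, n).
    c = np.arange(trunc_level)
    # Categorical parameter: pi_k = P(X = k) for k < n-1, remainder mass at n-1.
    pi = poisson.pmf(c, rate)
    pi[-1] = 1.0 - pi[:-1].sum()
    # Gumbel-Softmax relaxation of the categorical selection (stabilized softmax).
    g = -np.log(-np.log(rng.uniform(size=trunc_level)))
    a = (np.log(pi) + g) / tau
    w = np.exp(a - a.max())
    w /= w.sum()
    # Linear transformation T(w) = sum_k w_k c_k yields a relaxed count.
    return float((w * c).sum())

rng = np.random.default_rng(0)
for tau in (1.0, 0.5, 0.1):  # relaxed samples concentrate on integers as tau -> 0
    print(tau, [round(gengs_sample(7.0, 25, tau, rng), 2) for _ in range(5)])
```

Running this with decreasing τ shows the relaxed samples T(w) concentrating on integer outcomes, which is exactly the convergence behavior stated in Theorem 5.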
}, { "heading": "D PMF SHAPE OF GENGS", "text": "Note that Figure 2(b) in the main paper is drawn by rounding-up continuous values into integers. Since PMF for discrete distributions (Poisson(7) in Figure 2(b)) and PDF for continuous distributions (GENGS for Poisson(7) in Figure 2(b)) cannot be directly compared within the same figure due to their scale, we provide Figure 6 of the fine-grained PMF by rounding-up in small decimals." }, { "heading": "E EXAMPLES OF GENGS", "text": "" }, { "heading": "E.1 TRUNCATING RIGHT-HAND SIDE FOR POISSON DISTRIBUTION", "text": "For the distributions that the left-side truncation needs to be at zero, such as the Poisson with small rate parameter, the geometric, the negative binomial, etc., as we mentioned in the main text, we can simply set the constant outcome vector c = (0, 1, · · · , n− 1). Note that the ordering of c is not crucial, for example, we can also set c = (2, 1, 0, 3, 4, · · · , n− 1). Then, the corresponding PMF value needs to be computed as π = ( P (Z = 2), P (Z = 1), P (Z = 0), P (Z = 3), P (Z = 4), · · · , P (Z = n−1) ) , where Z is the truncated discrete random variable." }, { "heading": "E.2 TRUNCATING BOTH SIDES FOR POISSON DISTRIBUTION", "text": "For the case when the distribution requires left-side truncation, for example, Poisson(100)5, we can set the constant outcome vector such as c = (50, 51, · · · , 149, 150). Again, the ordering of c is not crucial, hence, we can also set c = (148, 150, 149, 147, · · · , 53, 50, 52, 51), for example. Afterward, the PMF value π is naturally computed in the same order as the constant outcome vector c." }, { "heading": "E.3 GUMBEL-SOFTMAX IS A SPECIAL CASE OF GENGS.", "text": "The Gumbel-Softmax estimator of categorical random variables is a trivial case of GENGS. Assume the number of dimensions n = 3 and a categorical parameter π = (0.5, 0.3, 0.2) in this example. Then, the poissble outcomes of Categorical(π) are c0 = [1, 0, 0]T , c1 = [0, 1, 0]T , and c2 = [0, 0, 1]T . Afterward, draw a sample w from GS(π, τ) for some temperature τ > 0, and the value will become a relaxed one-hot form, for example, (0.95, 0.04, 0.01). If we construct\nc = [c0, c1, c2] (19)\n= [ 1 0 0 0 1 0 0 0 1 ] = I3 , (20)\nthen T (w) = ∑ k w c = ∑ k wkck ≈ w. Hence, the Gumbel-Softmax trick can be written in the form of GENGS with the identity matrix in the linear transformation. Note that we abuse the symbol of Hadamard product ( ) in terms of the dimension, where the last term of T (w) = ∑ k w c =∑\nk wkck is actually the multiplication of a scalar wk ∈ [0, 1] and a vector ck ∈ R3 in this case." }, { "heading": "E.4 GENGS CAN BE APPLIED TO MULTINOMIAL DISTRIBUTION.", "text": "If we go one step forward from the example above, we can reparameterize multinomial distribution with GENGS trick, also. For example, assume the number of trial m = 3 and the probability vector p = [p1, p2, p3] = [.7, .2, .1]. Then, the possible outcomes are c0 = [3, 0, 0]T , c1 = [0, 3, 0]T , c2 = [0, 0, 3]\nT , c3 = [2, 1, 0]T , c4 = [2, 0, 1]T , c5 = [1, 2, 0]T , c6 = [1, 0, 2]T , c7 = [0, 2, 1]T , c8 = [0, 1, 2]T , and c9 = [1, 1, 1]T , where the corresponding probability is\n(n1+n2+n3)! n1!n2!n3! p1 n1p2 n2p3 n3 for\n5Note that Poisson(100) has inprobable samples at x < 50 and x > 150.\noutcome [n1, n2, n3]. Construct the linear transformation constant c as the following:\nc = [c0 c1 c2 c3 c4 c5 c6 c7 c8 c9] (21)\n= [ 3 0 0 2 2 1 1 0 0 1 0 3 0 1 0 2 0 2 1 1 0 0 3 0 1 0 2 1 2 1 ] , (22)\nwhere the categorical parameter π = (\n(n1+n2+n3)! 
π_{[n_1, n_2, n_3]} = ((n_1 + n_2 + n_3)! / (n_1! n_2! n_3!)) p_1^{n_1} p_2^{n_2} p_3^{n_3}. If we draw a Gumbel-Softmax sample w from GS(π, τ) and compute T(w) = ∑_k (w ⊙ c)_k, the result will be the relaxed form of the selected sample c_m. This example shows how the linear transformation constant c can be generalized to matrix form. However, if we recall that the equation n_0 + · · · + n_{k−1} = n has (n+k−1 choose k−1) solutions in non-negative tuples (n_0, · · · , n_{k−1}), where (n+k−1 choose k−1) ≤ O(n^{min{k, n−1}}), the relaxed categorical selection through the Gumbel-Softmax can become problematic due to the high complexity when n or k has a large value. Hence, in this situation, reducing the possible outcomes by disregarding unlikely samples with user guidance can be a remedy, but this treatment cannot be a fundamental solution, and handling such cases remains an open research question.

F INFERENCE STEP, ALGORITHM & COMPLEXITY OF GENGS

F.1 VISUALIZATION OF GENGS REPARAMETRIZATION STEPS

Figure 7 and Figure 8 represent the reparameterization steps of the explicit inference and implicit inference of GENGS, respectively. For both figures, the shaded nodes indicate the auxiliary Gumbel samples, composed of uniform samples, which enable the reparameterized variable to be deterministic with respect to the parameter of the target distribution." }, { "heading": "F.2 ALGORITHM & ADDITIONAL COMPUTATIONAL COMPLEXITY OF GENGS", "text": "Algorithm 1 and Algorithm 2 provide the explicit and implicit inference steps of GENGS, respectively. Compared to the original Gumbel-Softmax reparameterizer of the categorical distribution, the explicit inference has additional computation for the PMF value calculation in Lines 3-4 of Algorithm 1 and for the linear transformation in Line 8 of Algorithm 1. Note that this additional computational complexity may not be representable as O(n), even if we assume that there are n possible outcomes after truncating the distribution, since the computation of the PMF values from the inferred distribution parameter λ depends on the PMF of the distribution. Meanwhile, the implicit inference only has the extra linear transformation computation in Line 5 of Algorithm 2 compared to the original Gumbel-Softmax reparameterizer of the categorical distribution, by inferring the logits of the PMF values directly in Line 3 of Algorithm 2. Hence, in this case, the additional computational complexity of GENGS is O(n) compared to the original Gumbel-Softmax. Note that this complexity does not affect the total complexity under the O-notation, since the inference step of the original Gumbel-Softmax also contains the inference of the probability logits, which has Ω(n) complexity.

Algorithm 1 Explicit GENGS: Inference step on the parameter λ of distribution p_λ
1: Input: PMF p_λ(·) of distribution D(λ), loss function f(·), Gumbel-Softmax trick GS, temperature τ > 0, truncation range R, linear transformation T(·), constant outcome vector c.
2: Infer distribution parameter λ̂.
3: for k = 0 to n − 2 do
4: Compute π_k = p_λ(c_k).
5: end for
6: Compute π_{n−1} = 1 − ∑_{k=0}^{n−2} π_k.
7: Sample n-dimensional one-hot-like w ∼ GS(π, τ).
8: Compute transformation z = T(w) = ∑_j w_j c_j.
9: Compute loss f(z).
10: Update λ̂ via stochastic gradient method.

Algorithm 2 Implicit GENGS: Inference step on the PMF values π of distribution p_λ
1: Input: PMF p_λ(·) of distribution D(λ), loss function f(·), Gumbel-Softmax trick GS, temperature τ > 0, truncation range R, linear transformation T(·), constant outcome vector c.
2: Infer PMF logit value ν̂.
3: Compute π = softmax(ν̂).
4: Sample n-dimensional one-hot-like w ∼ GS(π, τ).
5: Compute transformation z = T(w) = ∑_j w_j c_j.
6: Compute loss f(z).
7: Update ν̂ via stochastic gradient method." }, { "heading": "G DISTRIBUTIONS IN TENSORFLOW PROBABILITY", "text": "We provide the table of discrete distributions to which GENGS can be applied in Table 3. We list the discrete distributions that are available in TensorFlow Probability 0.8.0 (https://www.tensorflow.org/probability/api_docs/python/tfp/distributions) in lexicographical order. Note that Bernoulli and Categorical are different from OneHotCategorical and RelaxedOneHotCategorical. The original Gumbel-Softmax is available only for (1) OneHotCategorical, which is relaxed as RelaxedOneHotCategorical; and (2) Bernoulli, which is relaxed as RelaxedBernoulli; it cannot be applied to Categorical. However, the generalized Gumbel-Softmax, i.e., GENGS, can reparameterize the other distributions listed there, including Categorical. Also, Empirical, a user-defined distribution in TensorFlow Probability that is not in the list, can utilize GENGS. Note that for Binomial, if n is large or p has an extreme value close to 0 or 1, one can truncate the left side, the right side, or both. Also, for Multinomial, as an extension of the Binomial case, one can disregard unlikely samples." }, { "heading": "H KL DIVERGENCE BETWEEN TWO TRUNCATED DISTRIBUTIONS", "text": "Theorem. Assume two truncated distributions X ∼ TD(λ, n) and Y ∼ TD(λ̂, n), where π_k = P(X = k) and π̂_k = P(Y = k). Then, the KL divergence between X and Y can be represented as the KL divergence between the corresponding categorical distributions: KL(Y||X) = KL(Categorical(π̂)||Categorical(π)).

Proof.

KL(Y||X) = ∑_k P(Y = k) log(P(Y = k)/P(X = k)) (23)
= ∑_k π̂_k log(π̂_k/π_k) (24)
= KL(Categorical(π̂)||Categorical(π)) (25)" }, { "heading": "I EXPERIMENT: GENERAL SETTING", "text": "For all experiments, we use an Intel Core i7-6700K CPU, 32GB RAM, and a Titan X. For the dependencies, we use TensorFlow version 1.15.0, TensorFlow Probability version 0.8.0, and PyTorch version 1.0.1. Also, we run each experiment over 10 times." }, { "heading": "J SYNTHETIC EXAMPLE", "text": "" }, { "heading": "J.1 EXPERIMENTAL SETTING", "text": "In this experiment, we first sample t_1, · · · , t_k i.i.d. from a discrete distribution D(θ) for a fixed θ > 0, and optimize the loss function E_{z∼p(z|λ)}[∑_{i=1}^k (z_i − t_i)²] with respect to λ, where p(z|λ) is D(λ). We use Poisson(20), Binomial(20, .3), Multinomial(3, [.7, .2, .1]), and NegativeBinomial(3, .4) in this experiment, and the distribution parameter we want to infer in each distribution is λ in Poisson(λ), Binomial(20, λ), Multinomial(3, λ), and NegativeBinomial(3, λ). For GENGS, we use truncation range (7, 36) and truncation level 12 for the Poisson and the negative binomial, respectively. Note that the binomial case does not require truncation of the distribution. We use k = 5 sampled targets for the Poisson and the binomial cases, and k = 1 for the negative binomial case. In this experiment, we separately utilize the temperature τ = 1. for the high-temperature case and τ = .25 for the low-temperature case. To compute the variance of gradients, we sample 100 gradients for the Poisson and binomial, and 500 gradients for the negative binomial. For fair comparisons, we use m = 1 fixed category for the gradients of the RBs.
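As a concrete illustration of this synthetic setup, the following is a minimal PyTorch sketch that optimizes the Poisson rate λ through GENGS-relaxed samples. It is a hedged sketch rather than the paper's code: for simplicity it truncates at [0, 36) instead of (7, 36), fixes τ = .25, and uses hand-picked stand-in targets; all names are illustrative.

```python
import torch

targets = torch.tensor([18., 22., 20., 19., 21.])   # stand-ins for t_i ~ Poisson(20)
c = torch.arange(36, dtype=torch.float32)            # outcome vector for truncation [0, 36)
log_lam = torch.tensor(1.0, requires_grad=True)      # optimize the log-rate for positivity
opt = torch.optim.Adam([log_lam], lr=0.05)

for step in range(2000):
    lam = log_lam.exp()
    # Truncated-Poisson categorical parameter: pi_k = P(X = k), remainder mass at the end.
    log_pmf = c * torch.log(lam) - lam - torch.lgamma(c + 1)
    pi = log_pmf.exp()
    pi = torch.cat([pi[:-1], (1 - pi[:-1].sum()).clamp_min(1e-12).unsqueeze(0)])
    # One relaxed GENGS sample per target at temperature tau = 0.25.
    g = -torch.log(-torch.log(torch.rand(len(targets), len(c))))
    w = torch.softmax((torch.log(pi.clamp_min(1e-30)) + g) / 0.25, dim=-1)
    z = (w * c).sum(-1)                               # T(w): relaxed counts
    loss = ((z - targets) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(log_lam.exp()))  # near mean(targets) - 1/2 = 19.5 for the exact objective
```

The closed-form gradient derived below, ∂L/∂λ = 2λ − 2t + 1 per target, places the optimum of the exact objective at λ* = mean(t) − 1/2 ≈ 19.5, which the relaxed estimator should approach up to relaxation bias.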
Although it is possible to use more than one fixed gradient in the synthetic example, when there is more than one latent dimension K, it requires computing m^K gradient combinations, which has high complexity. We also adapt the Rao-Blackwellization idea to GENGS, utilizing m = 1 fixed gradient and utilizing GENGS for the remaining categories, namely GENGS-RB. We exclude UNORD since UNORD fails to converge to the optimal parameter because of its approximation accuracy problem with a single gradient sample.

Closed-form True Gradient Derivation for the Poisson Synthetic Example. Throughout the synthetic example, we compare the quality of gradient estimators by the convergence of losses, the variances of the estimated gradients, and the biases between the true gradient and the estimated gradient. To compute the bias between the true gradient and the estimated gradient, we need the closed-form solution of the true gradient. We find that the Poisson case has a closed-form true gradient, and the derivation is as follows.

Proposition. If p(z|λ) is a Poisson distribution with a rate parameter λ, the true gradient of L = E_{z∼p(z|λ)}[(z − t)²] with respect to λ has a closed-form solution, ∂L/∂λ = 2λ − 2t + 1.

Proof. Note that the Poisson distribution with rate parameter λ has mean λ and variance λ. Hence, the Poisson distribution has first moment µ_1 = λ and second moment µ_2 = λ² + λ.

∂L/∂λ = ∂/∂λ E_{z∼p(z|λ)}[(z − t)²] (26)
= ∂/∂λ ∑_{z≥0} p(z|λ)(z − t)² (27)
= ∑_{z≥0} ∂/∂λ [p(z|λ)(z − t)²] (28)
= ∑_{z≥0} ∂/∂λ [(λ^z e^{−λ}/z!)(z − t)²] (29)
= ∑_{z≥0} (z − t)² ∂/∂λ [λ^z e^{−λ}/z!] (30)
= t² ∂/∂λ e^{−λ} + ∑_{z≥1} (z² − 2tz + t²)((zλ^{z−1}e^{−λ} − λ^z e^{−λ})/z!) (31)
= −t² e^{−λ} + ∑_{z≥0} ((z + 1)² − 2t(z + 1) + t²)(λ^z e^{−λ}/z!) − ∑_{z≥1} (z² − 2tz + t²)(λ^z e^{−λ}/z!) (32)
= −t² e^{−λ} + ∑_{z≥0} (z² − 2(t − 1)z + (t − 1)²) p(z|λ) − ∑_{z≥1} (z² − 2tz + t²) p(z|λ) (33)
= −t² e^{−λ} + (µ_2 − 2(t − 1)µ_1 + (t − 1)²) − (µ_2 − 2tµ_1 + t²(1 − e^{−λ})) (34)
= 2λ − 2t + 1 (35)" }, { "heading": "J.2 EXPERIMENTAL RESULT", "text": "We compare the log-loss and the log-variance of the estimated gradients from various estimators in this experiment. We also compare the log-bias in the Poisson case. We additionally provide Figure 9 to report the confidence intervals, and Figure 10 to show the convergence of the loss, which may not be visible in Figure 3 of the main paper.

K VAE: SYNTHETIC EXPERIMENT ON DEEP GENERATIVE MODELS" }, { "heading": "K.1 EXPERIMENTAL SETTING", "text": "We utilize the (truncated) Poisson, the (truncated) geometric, and the (truncated) negative binomial distributions in this experiment. Both MNIST and OMNIGLOT (https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT) are hand-written gray-scale datasets of size 28 × 28. We split the MNIST dataset into {train : validation : test} = {45,000 : 5,000 : 10,000}, and the OMNIGLOT dataset into {22,095 : 2,250 : 8,070}. We construct two fully-connected hidden layers of dimension 500 for the encoder and the decoder, and we set the latent dimension K = 50 for both the MNIST and OMNIGLOT datasets. We use the tanh activation function, learning rate 5e-4, 500 training epochs, and batch sizes 100 and 45 for MNIST and OMNIGLOT, respectively. For GENGS, we use exponential temperature annealing from 1. to .5 (for GENGS ST, the temperature annealing is unnecessary, as with the ST Gumbel-Softmax estimator (Jang et al., 2017)), and truncation levels (1) 12 for Poisson(2); (2) 15 for Poisson(3); (3) 25 for Geometric(.25); (4) 15 for Geometric(.5); (5) 30 for NegativeBinomial(3, .5); and (6) 30 for NegativeBinomial(5, .3)."
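To ground the setting above, here is a minimal PyTorch sketch of a VAE whose K = 50 latents are GENGS-relaxed truncated-Poisson counts (implicit inference, Poisson(2) prior, truncation level 12). It is an illustrative sketch under these assumptions, not the reported TensorFlow implementation; the class and method names are invented for the example.

```python
import math
import torch
import torch.nn as nn

class GENGSVAE(nn.Module):
    """VAE with truncated-Poisson-style latents sampled via implicit GENGS."""
    def __init__(self, x_dim=784, k_latent=50, trunc=12, prior_rate=2.0):
        super().__init__()
        self.register_buffer("c", torch.arange(trunc, dtype=torch.float32))
        # Truncated Poisson(prior_rate) prior as a categorical PMF (remainder mass at the end).
        lp = self.c * math.log(prior_rate) - prior_rate - torch.lgamma(self.c + 1)
        p = lp.exp()
        self.register_buffer("prior", torch.cat([p[:-1], (1 - p[:-1].sum()).unsqueeze(0)]))
        self.enc = nn.Sequential(nn.Linear(x_dim, 500), nn.Tanh(),
                                 nn.Linear(500, 500), nn.Tanh(),
                                 nn.Linear(500, k_latent * trunc))
        self.dec = nn.Sequential(nn.Linear(k_latent, 500), nn.Tanh(),
                                 nn.Linear(500, 500), nn.Tanh(),
                                 nn.Linear(500, x_dim))
        self.k, self.trunc = k_latent, trunc

    def forward(self, x, tau):
        # Implicit inference: predict PMF logits nu over the truncated outcomes directly.
        nu = self.enc(x).view(-1, self.k, self.trunc)
        g = -torch.log(-torch.log(torch.rand_like(nu)))
        w = torch.softmax((nu + g) / tau, dim=-1)   # Gumbel-Softmax sample
        z = (w * self.c).sum(-1)                     # T(w): relaxed counts, shape (B, K)
        q = torch.softmax(nu, dim=-1)
        # KL(q || prior) reduces to a categorical KL over the outcomes (Appendix H).
        kl = (q * (q.clamp_min(1e-12).log() - self.prior.log())).sum(-1).sum(-1)
        return self.dec(z), kl
```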
}, { "heading": "K.2 EXPERIMENTAL RESULT", "text": "Table 4 shows the negative ELBO results on the VAE experiments with a full range of gradient estimators. The variants of GENGS show the lowest negative ELBO in general. We empirically found that the extreme probability imbalance, due to the explicit PMF restriction, induces unstable learning which leads to the performance degradation of RELAX or REBAR. We found that utilizing Straight-Through for the discretization degrades the performance. Meanwhile, the implicit inference methodology further leads to better optimal points, enabled by loosening the PMF condition. The empirical reason why the implicit version is better than the explicit version is that the inferred PMF shape is thinner in the implicit case.. Hence, the implicit distribution has lower variance than the explicit one, and consequently samples consistent values that lead to better trained neural network parameters, although it looses the original PMF shape." }, { "heading": "L TOPIC MODEL APPLICATION", "text": "" }, { "heading": "L.1 EXPERIMENTAL SETTING", "text": "Deep Exponential Families (DEFs) (Ranganath et al., 2015) are probabilistic graphical model which utilize the stacks of exponentail family distributions. If we assume the Poisson distribution, which is included in the exponential family, each kth Poisson latent variable counts the number of sub-topics occurrence.\nThe relationship between the super-topic and the sub-topic is modeled with the linked weights, which has positive values. Hence, with Poisson DEF, we can model hierarchical relations between\n8For GENGS ST, the temperature annealing is unnecessary as the ST Gumbel-Softmax estimator does (Jang et al., 2017).\nsuper-topics and sub-topics including the vocabularies. Here, we utilize the idea of Miao et al. (2016; 2017); Srivastava & Sutton (2017), the neural variational architecture, to extract the latent document representation as (relaxed) counts of supermost-topic, and consequently capture sub-topic counts. To ensure the positive linked weights between super-topics and sub-topics, we utilize absolute value function.\nThe generative process of NVPDEF is\nz1 ∼ Poisson(λ0), z2 ∼ Poisson(λ1), · · · , zK ∼ Poisson(λK−1), (36) x ∼ MultinomialLogisticRegression(λK) (37)\nwhere we adopt multinomial logistic regression from NVDM (Miao et al., 2016), and the inference process of NVPDEF is\nλ̂0 = MLP(x), λ̂1 = W1λ̂1, · · · , λ̂K = WK−1λ̂K−1 (38)\nso that the approximate Poisson posterior q(zk|zk−1) has λ̂k as distribution parameter. Here, each zk ∼ Poisson(λk−1) represents the count distribution of topics from the super-topic. Each component of Wk, wk,i,j is positive, and wk,i,j captures the positive weight of relationship between super-topic i of the kth layer and sub-topic j of the (k + 1)th layer. The objective function, ELBO, of NVPDEF is\nL = Eq(zK ,··· ,z1)[log p(x|zK , · · · , z1)]− K∑ k=1 KL(q(zk|zk−1)||p(zk)) (39)\nwhere z0 = x for simplifying the equation.\n20Newsgroups9 and RCV1-V210 datasets are used in this experiment. 20Newsgroups dataset has {train : test} = {11, 258 : 7, 487} split with the vocabulary size of 2, 000, and RCV1-V2 has {train : test} = {794, 414 : 10, 000} split with the vocabulary size of 10, 000. For the data pre-processing, stopwords are removed and the most frequent vocabularies are chosen. Especially for 20Newsgroups, we use the vocabulary from Srivastava & Sutton (2017).\nFor the single-stacked version of NVPDEF, we do not anneal the temperature, instead, we set temerature τ = .5. 
For the multi-stacked version of NVPDEF, i.e., MULTI-STACKED NVPDEF, we utilize 10 samples on the latent layers for the stable optimization of the consecutive sampling. Also, to have a better chance of learning, we utilize linear temperature annealing from τ = 3. to τ = .5 during the training period. For all neural network models, we utilize two 500-dimensional hidden layers for the encoders. We use 50 topics, with 20-50 for the stacked layers, for 20Newsgroups, and 200 topics, with 50-200 for the stacked layers, for the RCV1-V2 dataset. We set λ_1 = .75 with truncation level 15 for the single-stacked case, and λ_1 = 1.1, λ_2 = 1. with truncation level 15 for the multi-stacked case. We train NVPDEF for 100 epochs with batch size 256 and learning rate 1e-3. We also iteratively update the encoder parameters and each linked weight parameter of the latent variables. As a performance measure, we utilize the perplexity perp = exp(−(1/D) ∑_d log p(d)/N_d), where N_d is the number of words in document d, and D is the total number of documents." }, { "heading": "L.2 EXPERIMENTAL RESULT", "text": "We provide the super-topic, sub-topic, and word relationships obtained from the two-layer-stacked NVPDEF on the 20Newsgroups dataset by listing the top-weighted sub-topics and words in Table 5." }, { "heading": "M OPEN RESEARCH QUESTION: NON-PARAMETRIC REPARAMETERIZATION TRICK", "text": "In GENGS, to reparameterize discrete distributions, we convert the sampling process into a categorical selection process by finitizing the support of the distribution with truncation. Here, truncating the distribution converts categorical selection over a countably infinite number of categories into categorical selection over a finite number of categories, i.e., it turns a non-parametric problem into a parametric problem.

The categorical selection over a finite number of categories, obtained by disregarding the samples of extremely small probability, might cause a problem if we need to utilize the full range of possible outcomes. For example, in the multinomial case in Appendix E.4, as n and k grow, the proposed GENGS has to ignore numerous probable samples due to the high complexity of the number of possible outcomes.

A* sampling (Maddison et al., 2014) is a non-parametric version of the Gumbel-Max trick, which we also utilize in GENGS, since A* sampling searches for the maximum Gumbel sample among countably infinite Gumbel samples with the A* algorithm. Utilizing the A* sampling concept for reparameterizing distributions with countably infinite support could lead to a better reparameterization, in the sense of reparameterizing the exact distribution instead of an approximate distribution. However, we utilize the truncated distribution in the proposed GENGS to convert the countably infinite categorical selection into a finite categorical selection for the following reasons.

First, when adapting a non-parametric methodology to a neural network, which has a fixed number of parameters, one usually imposes a limit at a certain point by utilizing a truncation level. An example of such a case is the Stick-breaking VAE (Nalisnick & Smyth, 2017), which utilizes a Dirichlet process in the latent variable of the VAE, where the authors finitized the number of sticks by human guidance.
Second, while there is no previous work on a reparameterization trick for fully non-parametric categorical selection, if we finitize the number of categories with the truncated distributions suggested in the paper, we can utilize the Gumbel-Softmax reparameterizer, which is already verified in the deep generative model community and widely used via implementations in deep learning frameworks such as TensorFlow, i.e., RelaxedOneHotCategorical. Finally, if we have to choose between a parametric model and a non-parametric model, the choice depends on the situation we face. For example, we can compare the Gaussian mixture model (GMM) and the Dirichlet process Gaussian mixture model (DPGMM). If we have a clue about the number of clusters, we could directly apply GMM instead of DPGMM. However, if we know nothing about the data, utilizing DPGMM can be a good choice. Also, we cannot directly compare GMM and DPGMM along the same line, since the experimental result differs from data to data.

In summary, as other non-parametric models do, we turn the non-parametric problem into a parametric problem, especially by utilizing the truncated distribution, and this kind of treatment is a natural way of handling such difficulty. However, we believe that investigating a non-parametric reparameterizer, particularly one utilizing A* sampling, which is theoretically solid, is a crucial and open research question in the deep generative model community." } ]
2020
null
SP:94c1fa434cf2eb8f4f762cd06cf838b0018c6fa0
[ "This paper presents a text generation model conditioned on desired structures. The proposed method is essentially a translation model from structure information (represented with multiple sequences of tokens) to a text. This study converts a text into structure information such as part of speech (POS) and participial construction (PC). Then, this paper proposes Structure Aware Transformer (SAT), which is essentially the same as the Transformer architecture. The experiments use datasets of Chinese lyrics and English Penn Treebank. This paper reports that giving structure information improved the performance in PPL and BLEU compared with GPT-2." ]
Controlling the presented forms (or structures) of generated text is as important as controlling the generated contents during neural text generation. It helps to reduce the uncertainty and improve the interpretability of generated text. However, the structures and contents are entangled together and realized simultaneously during text generation, which makes structure controlling challenging. In this paper, we propose an efficient, straightforward generation framework to control the structure of generated text. A structure-aware transformer (SAT) is proposed to explicitly incorporate multiple types of multi-granularity structure information to guide the text generation with the corresponding structure. The structure information is extracted from a given sequence template by an auxiliary model, and the type of structure of the given template can be learned, represented and imitated. Extensive experiments have been conducted on both a Chinese lyrics corpus and the English Penn Treebank dataset. Both automatic evaluation metrics and human judgement demonstrate the superior capability of our model in controlling the structure of generated text, and the quality (like Fluency and Meaningfulness) of the generated text is even better than that of the state-of-the-art model.
[]
[ { "authors": [ "Yoshua Bengio", "Réjean Ducharme", "Pascal Vincent", "Christian Jauvin" ], "title": "A neural probabilistic language model", "venue": "Journal of machine learning research,", "year": 2003 }, { "authors": [ "Mingda Chen", "Qingming Tang", "Sam Wiseman", "Kevin Gimpel" ], "title": "Controllable paraphrase generation with a syntactic exemplar", "venue": "arXiv preprint arXiv:1906.00565,", "year": 2019 }, { "authors": [ "Sumanth Dathathri", "Andrea Madotto", "Janice Lan", "Jane Hung", "Eric Frank", "Piero Molino", "Jason Yosinski", "Rosanne Liu" ], "title": "Plug and play language models: a simple approach to controlled text generation", "venue": null, "year": 1912 }, { "authors": [ "Liming Deng", "Jie Wang", "Hang-Ming Liang", "Hui Chen", "Zhiqiang Xie", "Bojin Zhuang", "Shaojun Wang", "Jing Xiao" ], "title": "An iterative polishing framework based on quality aware masked language model for chinese poetry generation", "venue": "Association for the Advancement of Artificial Intelligence”,", "year": 2020 }, { "authors": [ "Jessica Ficler", "Yoav Goldberg" ], "title": "Controlling linguistic style aspects in neural language generation", "venue": "arXiv preprint arXiv:1707.02633,", "year": 2017 }, { "authors": [ "Jie Hao", "Xing Wang", "Shuming Shi", "Jinfeng Zhang", "Zhaopeng Tu" ], "title": "Towards better modeling hierarchical structure for self-attention with ordered neurons", "venue": null, "year": 1909 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P. Xing" ], "title": "Toward controlled generation of text", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Chloé Kiddon", "Luke Zettlemoyer", "Yejin Choi" ], "title": "Globally coherent text generation with neural checklist models", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Yuta Kikuchi", "Graham Neubig", "Ryohei Sasano", "Hiroya Takamura", "Manabu Okumura" ], "title": "Controlling output length in neural encoder-decoders", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Rémi Lebret", "David Grangier", "Michael Auli" ], "title": "Neural text generation from structured data with application to the biography", "venue": "domain. pp. 1203–1213,", "year": 2016 }, { "authors": [ "Piji Li", "Haisong Zhang", "Xiaojiang Liu", "Shuming Shi" ], "title": "Rigid formats controlled text generation", "venue": "arXiv preprint arXiv:2004.08022,", "year": 2020 }, { "authors": [ "Xu Lu", "Jie Wang", "Bojin Zhuang", "Shaojun Wang", "Jing Xiao" ], "title": "A syllable-structured, contextuallybased conditionally generation of chinese lyrics", "venue": null, "year": 1906 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proc Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Hao Peng", "Ankur P Parikh", "Manaal Faruqui", "Bhuwan Dhingra", "Dipanjan Das" ], "title": "Text generation with exemplar-based adaptive decoding", "venue": null, "year": 1904 }, { "authors": [ "Matthew E. 
Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Peng Qi", "Yuhao Zhang", "Yuhui Zhang", "Jason Bolton", "Christopher D. Manning" ], "title": "Stanza: A Python natural language processing toolkit for many human languages", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations,", "year": 2020 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Xiaoyu Shen", "Jun Suzuki", "Kentaro Inui", "Hui Su", "Dietrich Klakow", "Satoshi Sekine" ], "title": "Select and attend: Towards controllable content selection in text generation", "venue": "arXiv preprint arXiv:1909.04453,", "year": 2019 }, { "authors": [ "Yikang Shen", "Shawn Tan", "Alessandro Sordoni", "Aaron Courville" ], "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lifu Tu", "Xiaoan Ding", "Dong Yu", "Kevin Gimpel" ], "title": "Generating diverse story continuations with controllable semantics", "venue": "In Proceedings of the 3rd Workshop on Neural Generation and Translation,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xing Wang", "Zhaopeng Tu", "Longyue Wang", "Shuming Shi" ], "title": "Self-attention with structural position representations", "venue": "arXiv preprint arXiv:1909.00383,", "year": 2019 }, { "authors": [ "Kento Watanabe", "Yuichiroh Matsubayashi", "Satoru Fukayama", "Masataka Goto", "Kentaro Inui", "Tomoyasu Nakano" ], "title": "A melody-conditioned lyrics language model", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),", "year": 2018 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rmi Louf", "Morgan Funtowicz", "Jamie Brew" ], "title": "Transformers: Stateof-the-art natural language processing, 2019", "venue": null, "year": 2019 }, { "authors": [ "Zhuosheng Zhang", "Yuwei Wu", "Hai Zhao", "Zuchao Li", "Shuailiang Zhang", "Xi Zhou", "Xiang Zhou" ], "title": "Semantics-aware bert for language understanding", "venue": null, "year": 1909 } ]
[ { "heading": "1 INTRODUCTION", "text": "Natural language is not just a sequence collections of tokens but a structure well-organized sequence expressing understandable information. The structure of language usually obeys a set of grammatical rules, which helps beginners grasp the language with less efforts. Similarly, incorporating the structure into neural language model can obtain an increasing abstract level of representation and improves the generalization which may potentially reduce the need of large amount of training data (Shen et al., 2019b). The incorporations of structure information demonstrates considerable improvements in many language understanding tasks (Zhang et al., 2019; Hao et al., 2019; Wang et al., 2019).\nIn text generation, it cares about not only the generated contents (i.e., what to say) but also the presented structure forms (i.e., how to say) (Peng et al., 2019). Similar contents or meanings can be presented with different structure forms. The structures and contents can be considered and planned separately to achieve a highly informative generated text. From an empirical view, controlling or planning the generated structure may be helpful in several aspects: i) reducing the uncertainty of the generated contents with specific structure conditions, which may contribute to a good quality of generated text; ii) enhancing the interpretability of the generated text since more controlling attributes can be realized during the generation; iii) improving the structure, format or style consistence in specific structure-constraint generation task or specific domain generation with particular formats, such as style or paraphrase generation (Chen et al., 2019; Ficler & Goldberg, 2017), poetry generation (Deng et al., 2020; Li et al., 2020), and lyric generation (Watanabe et al., 2018; Lu et al., 2019).\nThe language structures determined by the set of grammatical rules vary from different granularity levels, such as participial construction (pc) is character-level, part of speech (pos) is word/phrase level, and sequence length is sentence level. These kinds of structure are coupled and nested together, which are realized with the contents simultaneously in most of the token by token generation. It is difficult to disentangle the contents and the text structure, and even harder to discriminate and control the different granularity level of structure during text generation. Individually controlling some specific types of structure like sequence length (Kikuchi et al., 2016), verbal predicate (Tu et al., 2019) have been investigated in text generation. These works design specific structure representation and are inappropriately for controlling other types of structure, let alone controlling multiple\ntypes of structure simultaneously. Directly embedding the structure and adding them into the word embeddings can achieve considerable controlling capability in character-level structure during text generation, such as tone level and rhyme (Deng et al., 2020) controlling in Chinese poetry generation. While this method may fail when the controlled structure (such as phrase level or sentence level) needs to aware the subsequent structure during the generation process. In addition to summarizing the structure embeddings and word embeddings, SongNet (Li et al., 2020) designs another structure embeddings which are queried and incorporated globally by the summarized embeddings to renew the representation. 
With pre-training and fine-tuning, SongNet (Li et al., 2020) can also achieve good controllability in tailor-designed formats (this format or structure is mostly about the length of each sentence within one paragraph or passage), a sentence-level structure. The symbol sets for this format are particularly designed and may not be applicable to other types of structure.

In contrast to the above works, in this paper we do not focus on controlling a specific type of structure or format; instead, we propose a framework to control more general types of structure in text generation. This framework allows for controlling an individual type of structure, or multiple and multi-granularity types of structure, during text generation. The controlled types of structure are extracted from sequence templates (any valid sentence is a valid template) by one or several auxiliary models. The extracted structure information is regarded as a set of conditions, and the auxiliary model can be any credible model or tool that can extract sound structure information from the template. Since we want the generation of the current token or word to be aware of the global structure, a bi-directional transformer encoder is adopted for structure representation and learning. The learned structure representations are further incorporated into the decoder to guide the realization of the controlled structure. The main contributions of this work are summarized as follows:

• A straightforward, interpretable structure controlling text generation framework is proposed, which is capable of controlling multi-granularity sequence structure from character-level to sentence-level structure by explicitly incorporating the corresponding structure information.

• A simple alignment method and a structure embedding, representation and learning method are proposed, which are utilized for representing multi-granularity and multiple types of structure.

• A structure-aware transformer language model is proposed, in which the structure representation and the token representation can be learned simultaneously. The structure information is queried globally and incorporated into the token representation with an attention mechanism, which contributes to controlling the generated structure.

• Extensive experiments on controlling different individual types of structure and multi-granularity types of structure have been conducted on a Chinese lyrics corpus. The structure controllability is effective and the quality of the generated lyrics is favorable. We also conduct controlling experiments on the English Penn Treebank dataset, which demonstrate similar structure controlling capability with the proposed framework." }, { "heading": "2 RELATED WORKS", "text": "Controllable text generation has received much attention recently. Many efforts are devoted to controlling the content of the generated text (Kiddon et al., 2016; Lebret et al., 2016; Shen et al., 2019a). Based on a conditioned RNN language model, stylistic parameters are further incorporated as conditioning context to control stylistic aspects of the generated text (Ficler & Goldberg, 2017). Basing the generator on VAEs, Hu et al. (2017) propose a generative model to generate plausible sentences with designated semantics. A simple plug and play language model is proposed in Dathathri et al. (2019) to guide controlling attributes (e.g. topic or sentiment) in text generation, without further training of the pre-trained language model. None of these works attempt to control the structure of the generated text. A similar approach, exemplar-based text generation, is proposed in Peng et al.
(2019), where for each input text, an exemplar text is retrieved from the training data and is then used to construct a customized decoder for outputting a target. It is ambiguous to determine how much the exemplar contributes to the generated structure versus the contents. Another similar work is SongNet (Li et al., 2020), which is proposed to control so-called rigid formats. The rigid formats are specifically designed with a sequence of placeholder symbols, which are utilized to control the sentence (or sub-sentence) length.

Our method differs from all the previous methods in four ways: 1) we focus on a general structure controlling framework in text generation instead of controlling a specific type of structure; 2) both an individual type of structure and multiple or multi-granularity types of structure can be controlled; 3) instead of designing the structure symbols ourselves, we adopt the most representative structure symbols as extracted by external models to increase the applicability of our framework; 4) the extracted structure information, decoupled from the sequence information, is learned and fully represented before it is incorporated into the word information to guide the text generation." }, { "heading": "3 MODEL DESCRIPTION", "text": "" }, { "heading": "3.1 STRUCTURE CONDITIONAL LANGUAGE MODEL", "text": "Given a natural language sequence denoted by x = [x_1, ..., x_T], with each word denoted as x_t, t = 1, ..., T, the sequence joint distribution p(x) can be factorized into the product of conditional distributions p(x_t|x_{<t}) as follows:

p(x) = p(x_1, ..., x_T) = ∏_{t=1}^T p(x_t|x_{<t}). (1)

A standard language model models the above distribution and maximizes the corresponding likelihood accordingly (Bengio et al., 2003; Peters et al., 2018; Shen et al., 2019b). The above distribution considers the order structure of the natural language sequence explicitly, and the conditional distributions are based on the previous word tokens.

Although the standard language model can generate sentences with high quality, the generated structure is inexplicable and cannot be controlled to satisfy specific generation tasks. Therefore, we incorporate the structure information explicitly into the language model and guide the structure generation. The joint distribution of sequence x can be reformulated as shown in Equation 2:

p(x) = p(x_1, ..., x_T) = p(s) ∏_{t=1}^T p(x_t|x_{<t}, s) (2)

where s represents the global structure of the natural language sequence x; the global structure can be any structure information, like pos tags or semantic roles of the sequence; and p(s) is the prior distribution of the global structure. We extract the structure information with an auxiliary model, and this structure information is considered prior knowledge, which will not be optimized by the language model.

The model parameters are learned by maximizing the objective function of the SCLM, which is the likelihood shown in Equation 3:

max_θ log p_θ(x) = ∑_{t=1}^T log p_θ(x_t|x_{<t}, s) (3)

We utilize the Transformer (Vaswani et al., 2017) as the backbone for implementing our SCLM. The structure information is first extracted by the auxiliary model and then encoded by the transformer encoder. The structure information can be learned and fully represented, and it is further incorporated, with an attention mechanism, to make the sequence token representation aware of the structure.
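As a concrete illustration of the training objective in Equation 3, below is a minimal PyTorch sketch of one training step of a structure-conditional language model. The model interface (a callable taking tokens and structure ids and returning next-token logits) is an assumption for the example, not the paper's released API.

```python
import torch
import torch.nn.functional as F

def sclm_step(model, optimizer, tokens, structure):
    """One maximum-likelihood step for Eq. (3): predict x_t from x_<t and the global s."""
    # model is assumed to return logits of shape (batch, T-1, vocab), where
    # position t attends causally to tokens[:, :t] and globally to `structure`.
    logits = model(tokens[:, :-1], structure)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```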
The reason why both the transformer encoder and decoder are adopted here is that we want each token in the sequence to be aware of its local and global structure information. Using only the Transformer decoder, like GPT (Radford et al., 2018), ignores the subsequent structure information of the token. The Transformer architecture is well designed and suitable for the implementation of the structure conditional language model. We only modify the input representation and a few parameters of the transformer." }, { "heading": "3.2 STRUCTURE EXTRACTION", "text": "We use an auxiliary model (such as a lexical tool) g(•) to extract the structure information s from the natural language sequence x, as shown in Equation 4. The auxiliary model can be regarded as prior knowledge and will not be optimized.

s = g(x). (4)

The structure can be any sound structure information of the language sequence, varying from character-level structure (like participial construction) and word-level structure (like part of speech) to sentence-level structure (positions, for example).

Multi-granularity types of sequence structure s_1, s_2, ..., s_i can be extracted by different auxiliary models g_1(•), g_2(•), ..., g_i(•), respectively. Since each structure unit (especially for word-level and sentence-level structure) may contain several characters, we assign these characters the same symbol of that kind of structure. We keep the length of the structure sequence the same as that of the sequence tokens.

To be specific, we use the part of speech (pos) and the participial construction (pc) as examples to illustrate the alignment of multi-granularity types of structure. The pos information can be extracted by many lexical analyzer tools, like the Jieba analyzer and Stanza (Qi et al., 2020), for Chinese and English sequences respectively. In Chinese, the pos is a type of word-level structure, and the participial construction is the character-level structure of each segmented word. We utilize the symbol collection C_pos = {n, v, r, ...} from a lexical analyzer (like Jieba) to represent the pos of each word, where n, v, r represent noun, verb, and pronoun respectively (for the complete symbol set, refer to https://github.com/fxsjy/jieba). The symbol collection C_pc = {P, S, B, M, E} is utilized to represent the pc of each character within each word, where P represents the pc structure of a special token, S represents a word containing only a single character, and B, M, E represent the beginning, middle and ending of a word respectively. Suppose we have two levels (word-level and character-level) of structure information for a sequence x = [x_1, ..., x_i, ..., x_n]. We can also present the word-level form of the sequence as w = [w_1, ..., w_j, ..., w_{n_w}], n_w ≤ n, and the pos structure can be represented as s'_w = [pos_1, ..., pos_j, ..., pos_{n_w}], pos_j ∈ C_pos; each word contains several characters w_j = [..., x_{j,k}, ...], k ∈ [1, m_j], and the pc structure of each word is s_{c,j} = [..., pc_{j,k}, ...], pc_{j,k} ∈ C_pc, where ∑_{j=1}^{n_w} m_j = n. Therefore, we can obtain the word-level structure (pos) and the character-level structure (pc) with the same length as the original sequence, as shown in the following expressions:

s_w = [..., pos_j, ..., pos_j (repeated m_j times), ...], j ∈ [1, n_w] (5)

s_c = [..., pc_{j,1}, ..., pc_{j,k}, ..., pc_{j,m_j}, ...] (6)

Sentence-level structure like positions has a unique representation for each token and does not need any further processing for the alignment. With the alignment process, multi-granularity and multiple types of sequence structure can be incorporated and controlled in the generation.

An illustration of multi-granularity structure information for a natural language sentence is shown in Fig. 1." }, { "heading": "3.3 STRUCTURE AWARE TRANSFORMER", "text": "We propose a Structure Aware Transformer (SAT) to implement the multi-granularity structure controlling in text generation.
The encoder stacks multiple layers of the Transformer encoder (Vaswani et al., 2017) with Multi-Head Self Attention in each layer to represent the extracted structure. The extracted structure information is first embedded and then summed together as the structure input representation H_0 (positions are regarded as sentence-level structure and are also added into the structure representation), which allows for controlling multiple types of structure in text generation simultaneously. The structure representation of each layer H_{l_e}, l_e = 1, ..., N_e, can be obtained according to the following formulas:

H_0 = ∑_{i=0}^m E_{s_i}(s_i) (7)
(Q_s, K_s, V_s) = H_{l_e−1} (W_s^q, W_s^k, W_s^v) (8)
A_s = softmax(Q_s K_s^T / √d) V_s (9)
H'_{l_e} = LN(A_s + H_{l_e−1}) (10)
H_{l_e} = LN(FFN(H'_{l_e}) + H'_{l_e}) (11)

where E_s is the structure embedding matrix, m is the number of structure types, l_e is the index of the encoder layer, and H_{l_e} is the output structure representation of layer l_e. softmax(•), LN(•), and FFN(•) represent the softmax function, layer normalization and feed-forward network, respectively. The final-layer output of the structure encoder, H_{N_e}, is then utilized by the decoder. The decoder is similar to the Transformer decoder (Vaswani et al., 2017), with two attention blocks in each layer. The lower attention block is a Masked Multi-Head Self Attention, which obtains the representation of token x_t without considering the information from its subsequent tokens x_{>t}. The upper attention block is the Structure-Aware Attention, which incorporates the structure information (H_{N_e}) into the token representation.

F_0 = E_x(x) + E_p (12)
F'_{l_d} = Mask-Att(F_{l_d−1}) (13)
Q = F'_{l_d} W^q (14)
K, V = H_{N_e} W^k, H_{N_e} W^v (15)
A_{sx} = softmax(QK^T / √d) V (16)
F''_{l_d} = LN(A_{sx} + F'_{l_d}) (17)
F_{l_d} = LN(FFN(F''_{l_d}) + F''_{l_d}) (18)

where E_x is the token embedding matrix, E_p is the position embedding matrix, Mask-Att represents the Masked Multi-Head Self Attention mechanism, l_d ∈ [1, N_d] is the index of the decoder layer, and F_{l_d} is regarded as the structure-aware token representation.

The final output of the decoder, F_{N_d}, can be utilized to calculate the probabilities p_θ(x_t|x_{<t}, s), and the parameters of the architecture can be learned by maximizing the likelihood in Equation 3." }, { "heading": "3.4 STRUCTURE CONTROLLABLE GENERATION", "text": "With our proposed SAT, we can control multi-granularity and multiple types of structure in text generation simultaneously. Both the specified structure information s and the template sequence x can be utilized to control the structure of the generated text. The input context x_c, which can be any preceding words for continuation generation (or topic words for topic-related text generation), is utilized to guide the content generation. If no input context is specified, the model starts the generation from the start token until the end token is generated." }, { "heading": "4 EXPERIMENTS AND EVALUATIONS", "text": "" }, { "heading": "4.1 SETUP", "text": "We follow the GPT2 source code from the huggingface repository (Wolf et al., 2019) and add an additional structure encoder with multi-head self attention to implement our proposed SAT. The number of encoder layers N_e is 2 (we have conducted experiments on different numbers of encoder layers, and the gain from a larger number of layers is trivial; please refer to the Appendix for the results and analysis), and the number of decoder layers N_d is 6.
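To make the decoder side of the architecture configured here tangible, the following is a minimal PyTorch sketch of one structure-aware decoder layer following Equations 13 and 17-18: masked self-attention over tokens, then cross-attention whose keys and values come from the structure encoder output H_{N_e}. The dimensions and head count are illustrative assumptions, and this is a sketch rather than the released implementation; it requires PyTorch >= 1.9 for batch_first.

```python
import torch.nn as nn

class StructureAwareDecoderLayer(nn.Module):
    """One SAT decoder layer: Mask-Att (Eq. 13) then Structure-Aware Attention (Eqs. 14-18)."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.mask_att = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.struct_att = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, f, h_struct, causal_mask):
        # F' = Mask-Att(F): each token attends only to preceding tokens (Eq. 13).
        f_prime = self.mask_att(f, f, f, attn_mask=causal_mask)[0]
        # Queries come from tokens, keys/values from the structure representation H_Ne,
        # followed by the residual connection and layer norm of Eq. 17.
        f2 = self.ln1(f_prime + self.struct_att(f_prime, h_struct, h_struct)[0])
        # Position-wise FFN with residual and layer norm (Eq. 18).
        return self.ln2(f2 + self.ffn(f2))
```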
The other configurations are the same as GPT2, except for the vocabulary size and the structure size of the embedding matrices. The structure information is extracted by Jieba (https://github.com/fxsjy/jieba) and Stanza (Qi et al., 2020) for Chinese and English text, respectively. The extracted structure information is regarded as the conditional structure, which is not optimized by our language model. However, the structure embeddings (E_s) and representation vectors (H_{l_e}) can be learned by the proposed SCLM." }, { "heading": "4.2 DATASETS", "text": "We conduct the experiments on both a Chinese lyrics corpus and the English Penn Treebank (PTB) dataset. Over 80,000 Chinese lyrics are crawled from a set of online music websites, and the number of lyric sentences without repetition is about 1.38 million. Every two adjacent lyric lines within one song are concatenated with a comma to increase the structure complexity, which is prepared for the generation task. We randomly split them into three parts for model training (90%), validation (5%) and testing (5%). The statistics of the data corpus for the Chinese lyrics and PTB datasets are shown in the Appendix." }, { "heading": "4.3 MODEL COMPARISONS", "text": "We conduct the model comparisons on both the Chinese lyrics corpus and the English PTB dataset. The pos structure is considered as the main structure for the structure conditional language model, and we compare GPT2 and SAT-pos on continuation text generation with both the Chinese lyrics corpus and the PTB dataset. The continuation text generation utilizes prompt words to guide the generation of the following sequence. The length of each prompt randomly varies from 0 (indicating that no prompt word is specified and the generation starts from the start token) to half the length of the whole template sequence.

We also investigate multi-granularity types of structure, individually and simultaneously, for the SCLM on the Chinese lyrics corpus. The additional structure is the participial construction (pc), which can also be extracted by the Jieba analyzer. Two other models, SAT-pc (conditioned on the pc structure) and SAT-p2 (conditioned on both the pc and pos structure), are also compared on the Chinese lyrics corpus. To better compare the generation capability of these language models, a topic-related generation task is also performed on the Chinese lyrics corpus. The topic words are extracted by Jieba with the TF-IDF method. For fair comparisons, we train the SAT and GPT2 from scratch without utilizing any pre-trained model." }, { "heading": "4.4 EVALUATION METRICS", "text": "Both automatic evaluation metrics and human evaluations are adopted for the model comparisons. The PPL is used to evaluate the performance of the language model, and the BLEU score (Papineni et al., 2002) is utilized to measure the content similarity of the generated text with its reference sequence text.

The structure controlling capabilities, like the sentence length, the pos and the participial construction, are also compared. The length controllability is measured by the prediction accuracy.
The BLEU score can also be utilized to measure the pos and pc controllability. We extract the pos and pc structure from both test template and predicted sequence with the same lexical tool (Jieba or Stanza), and the BLEU score of pos or pc can be calculated accordingly.\nHuman evaluation is inevitable for evaluating the quality of the generated text, especially in the meaningfulness and fluency. However, human evaluation is time-consuming and costing. We conduct the human evaluation for model comparisons on the continue generation task of Chinese lyrics. Four well educated annotators are recruited to evaluate the continue generation of Chinese lyrics sentence in three dimensions, namely Fluency, Meaningfulness and Structure Matching. The Fluency and Meaningfulness are easy to understand and have been utilized by many previous works Deng et al. (2020). The Structure Matching is to evaluate the matching degree of generated text structure and template structure in several aspects, which considers the global structure (like subjective, predicates and objective structure) matching , constitute structure matching and pos matching for local words. The rating scores are 1 to 5 to represent the quality from bad to excellent for all the criteria. Each model generates 500 lyric lines and with the same random length prompt. Total 1000 lyric lines are generated and randomly shuffled, and the four annotators rated on the shuffled lyrics lines. Therefore, we can obtain 4000 (4 × 2 × 500) ratings." }, { "heading": "4.5 RESULTS & DISCUSSIONS", "text": "Table 1 shows the perplexity of the language models on both Chinese lyrics corpus and English PTB dataset. The results demonstrates that the pos structure can improve the language modeling performance on both Chinese and English sequence. And the language model performance can be further improved when additional structures are also incorporated, as shown in the table that the PPL of SAT-p2 with the lowest scores. We can observe that the pos condition gains more improvements than the pc structure condition when compared SAT-pos model with SAT-pc model on Chinese lyrics corpus. The probably reason is that the pos structure (with dictionary size 58) contains richer structure information than pc structure (with dictionary size 5).\nThe text generation performance can also be improved by our proposed model, as demonstrated in Table 5, and 2. The generation performance of our proposed structure conditional models obtains obvious improvements on the BLEU scores of text sequence. The improvements of text BLEU scores are with similar improvement paradigms as the PPL scores, which are 1) the prior structure information is useful for the modeling and generation; 2) the more the structure information incorporated, the better the modeling performance and generation results. Our proposed model SAT shows the superior structure controllability as demonstrated by the BLEU scores on pc and pos structure. The BLEU scores of structure can be significantly improved when the corresponding structures are conditioned and incorporated into the language model.\nIt is interesting to observe that the pos structure can improve the BLEU scores on pc significantly (SAT-pos versus GPT2), while the pc structure only slightly improves the BLEU scores on pos (SAT-pc versus GPT2). These phenomena are consistent with the fact that the pos structure can reflect the segmentation border of words. The pc structure is more coarse structure information than pos. 
We also observe that the pos structure cannot improve the BLEU scores on pc when the pc structure is already incorporated (SAT-p2 versus SAT-pc), while the pc structure can further improve the BLEU scores when the pos structure is already incorporated (SAT-p2 versus SAT-pos).
The probable explanation is that the pos structure is a type of fine-grained (or micro-scale) structure compared to the pc structure, and the fine-grained information is too detailed to clarify coarse information8.
The length controllability of our proposed model is demonstrated in Table 6 (in the Appendix). Although the text length is not explicitly incorporated as a condition, the generated text length is controlled effectively by the sequence length of the conditioned pos and pc.
The human evaluation results, shown in Table 3, also demonstrate that the proposed model is superior in controlling the structure of the generated text. Despite the strict structure constraints, our model also achieves even better performance in terms of Fluency and Meaningfulness. For the case and ablation studies, please refer to the Appendix." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a straightforward, interpretable, and effective framework to control a wide range of language structures, from character-level and word-level to sentence-level structure, in text generation. These kinds of structure, regarded as prior knowledge, are explicitly extracted by external models and aligned together, which allows them to be controlled both individually and simultaneously in text generation. The structures are decoupled from word information, and the structure representations are learned by a bi-directional transformer encoder, which is powerful enough to learn the structure representations sufficiently. Subsequently, the structure representations are globally queried by the transformer decoder and are incorporated into contextualized word representations to guide text generation with the corresponding types of structure.
Extensive experiments on both the Chinese lyrics corpus and the English Penn Treebank dataset have been conducted. Without pre-training on a large amount of data, the results demonstrate the powerful structure controllability of our method in terms of sequence length, pos, and pc. Significant improvements in text quality, with respect to fluency and meaningfulness, are also achieved over the free text generation model. Our method can be easily applied to control other kinds of structure in text generation and may even reduce the uncertainty and improve the quality of the generated text.
8As an analogy, the micro-scale shape of an object is not helpful for, and may even disturb, the identification of the macro-scale shape of that object." }, { "heading": "A DATA DESCRIPTION", "text": "The statistics of the utilized data are summarized in Table 4. The size of the pos structure for the Chinese lyrics dataset extracted by Jieba is 58, and the size of the participial construction for lyric words is 5. The vocabulary size of the lyrics dataset, including the special tokens (like [PAD], [START], [END], [TOPIC]), is 4,102, and some low-frequency characters are replaced with [UNK]. The special tokens that indicate the start, end, or padding of a sentence are regarded as a special structure. The vocabulary size of PTB is 10,005 (with some special tokens), and the structure size of pos extracted by Stanza is 43."
}, { "heading": "B SUPPLEMENTARY RESULTS", "text": "" }, { "heading": "C CASE STUDY", "text": "Figure 2 and 3 compare several cases generated by GPT2 and our proposed SAT on Chinese lyrics corpus and PTB dataset. We can notice that our model is capable of controlling the sentence-level structure (length), word-level structure (pos) (and character-level structure pc for lyrics generation) simultaneously, and the quality of the generated texts are also qualified and understandable. It is should be noted that the auxiliary model or tool utilized for extracting the structure is not optimized by our language model, and the accuracy of these tool will affect the quality of the generated text." }, { "heading": "D ABLATION STUDY", "text": "We conduct ablation study experiments on Chinese lyrics corpus to investigate the effects of encoder layer number. The PPL scores are compared for layer 2, 4, and 6 for different types of structure information. As shown in table 7, we cannot observe the obvious improvement due to the larger number of encoder layer. The probably explanation is that the structure information is comparative small (with dictionary size 5 for pc structure and 58 for pos, while vocabulary size is 4102) and 2 layer encoder is enough to process nd represent the information." } ]
2,020
null
SP:2ffc4cfa0da20b936bb4abee091f2d056dc12dfc
[ "Successor representations are an old idea that has seem recent interest in the ML community. The idea is conceptually straightforward, by assuming the rewards are linear in some space $r = \\vect{\\phi}(s, a) \\cdot \\vect{w}$ then we learn something analogous to an action-value function for the discounted expected features under a policy so that the action-value on task $\\vect{w}$ is $Q^\\pi_{\\vect{w}}(s, a) = \\psi^\\pi(s, a) \\cdot \\vect{w}$. This allows computing the action value for the policy under a new task $\\vect{w}'$ straight." ]
Reinforcement Learning algorithms have reached new heights in performance, often overtaking humans on several challenging tasks such as Atari and Go. However, the resulting models typically learn fragile policies that are unable to transfer between tasks without full retraining. Successor features aim to improve this situation by decomposing the policy into two components: one capturing environmental dynamics and the other modelling reward. Under this framework, transfer between related tasks requires only training the reward component. However, the successor feature framework builds upon the assumption that the current reward can be predicted from a linear combination of state features; an assumption with no guarantee. This paper proposes a novel improvement to the successor feature framework, where we instead assume that the reward function is a non-linear function of the state features, thereby increasing its representational power. After derivation of the new state-action value function, the decomposition includes a second term that learns the auto-correlation matrix between state features. Experimentally, we show this term explicitly models the environment’s stochasticity and can also be used in place of ε-greedy exploration methods during transfer. The performance of the proposed improvements to the successor feature framework is validated empirically on navigation tasks and control of a simulated robotic arm.
[]
[ { "authors": [ "Kai Arulkumaran", "Marc Peter Deisenroth", "Miles Brundage", "Anil Anthony Bharath" ], "title": "A brief survey of deep reinforcement learning", "venue": "arXiv preprint arXiv:1708.05866,", "year": 2017 }, { "authors": [ "André Barreto", "Will Dabney", "Rémi Munos", "Jonathan J Hunt", "Tom Schaul", "Hado P van Hasselt", "David Silver" ], "title": "Successor features for transfer in reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Peter Dayan" ], "title": "Improving generalization for temporal difference learning: The successor representation", "venue": "Neural Computation,", "year": 1993 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Gregory Farquhar", "Tim Rocktäschel", "Maximilian Igl", "Shimon Whiteson" ], "title": "Treeqn and atreec: Differentiable tree-structured models for deep reinforcement learning", "venue": "arXiv preprint arXiv:1710.11417,", "year": 2017 }, { "authors": [ "Akira Fukui", "Dong Huk Park", "Daylen Yang", "Anna Rohrbach", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding", "venue": "arXiv preprint arXiv:1606.01847,", "year": 2016 }, { "authors": [ "Steven Hansen", "Will Dabney", "Andre Barreto", "Tom Van de Wiele", "David Warde-Farley", "Volodymyr Mnih" ], "title": "Fast task inference with variational intrinsic successor features", "venue": null, "year": 1906 }, { "authors": [ "Tejas D Kulkarni", "Ardavan Saeedi", "Simanta Gautam", "Samuel J Gershman" ], "title": "Deep successor reinforcement learning", "venue": "arXiv preprint arXiv:1606.02396,", "year": 2016 }, { "authors": [ "Chen Ma", "Dylan R Ashley", "Junfeng Wen", "Yoshua Bengio" ], "title": "Universal successor features for transfer reinforcement learning", "venue": "arXiv preprint arXiv:2001.04025,", "year": 2020 }, { "authors": [ "Marlos C Machado", "Clemens Rosenbaum", "Xiaoxiao Guo", "Miao Liu", "Gerald Tesauro", "Murray Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv preprint arXiv:1710.11089,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2014 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian 
Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Csaba Szepesvári" ], "title": "Algorithms for reinforcement learning", "venue": null, "year": 2009 }, { "authors": [ "Norman Tasfi", "Miriam Capretz" ], "title": "Dynamic planning networks", "venue": "arXiv preprint arXiv:1812.11240,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Zhou Yu", "Jun Yu", "Jianping Fan", "Dacheng Tao" ], "title": "Multi-modal factorized bilinear pooling with co-attention learning for visual question answering", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jingwei Zhang", "Jost Tobias Springenberg", "Joschka Boedecker", "Wolfram Burgard" ], "title": "Deep reinforcement learning with successor features for navigation across similar environments", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 } ]
[ { "heading": null, "text": "Recently, Reinforcement Learning (RL) algorithms have achieved superhuman performance in several challenging domains, such as Atari (Mnih et al., 2015), Go (Silver et al., 2016), and Starcraft II (Vinyals et al., 2019). The main driver of these successes has been the use of deep neural networks, which are a class of powerful non-linear function approximators, with RL algorithms (LeCun et al., 2015). However, this class of Deep Reinforcement Learning (Deep RL) algorithms require immense amounts of data within an environment, often ranging from tens to hundreds of millions of samples (Arulkumaran et al., 2017). Furthermore, commonly used algorithms often have difficulty in transferring a learned policy between related tasks, such as where the environmental dynamics remain constant, but the goal changes. In this case, the model must either be retrained completely or fine-tuned on the new task, in both cases requiring millions of additional samples. If the state dynamics are constant, but the reward structure varies between tasks, it is wasteful to retrain the entire model.\nA more pragmatic approach would be to decompose the RL agent’s policy such that separate functions can learn the state dynamics and the reward structure; doing so enables reuse of the dynamics model and only requires learning the reward component. Successor features (Dayan, 1993) do precisely this; a model-free policy’s action-value function is expressed as the dot product between a vector of expected discounted future state occupancies, the successor features, and another vector representing the immediate reward in each of those successor states. The factorization follows from the assumption that reward can be predicted as the dot product between a state representation vector and a learned reward vector. Therefore, transfer to a new task requires relearning only the reward parameters instead of the entire model and amounts to the supervised learning problem of predicting the current state’s immediate reward.\nThis factorization can be limiting because it is assumed that the reward is a linear function of the current state, which might not always be the case as the encoded features might not capture the required quantity for accurate reward modelling (Eysenbach et al., 2018; Hansen et al., 2019). Therefore, this paper introduces a new form for the reward function: non-linear with respect to the current state. We assume that the learned features are not optimal and the reward cannot be predicted directly from the raw features, which is not a strong assumption. This form increases the reward function’s representational power and makes it possible to incorporate the current state into reward\nestimation; lessening the burden on the encoder components. Under the new reward formulation, a secondary term emerges, which learns the future expected auto-correlation matrix of the state features. This new secondary term, referred to as Λ, can be exploited as a possible avenue for directed exploration. Exploring the environment using Λ allows us to exploit and reuse learned environmental knowledge instead of relying on a purely random approach for exploration, such as -greedy.\nFollowing this, the contributions of this research are as follows:\n• A novel formulation of successor features that uses a non-linear reward function. This formulation increases the representational power of the reward function. 
• Under the new reward formulation, a second term appears that models the future expected auto-correlation matrix of the state features.
• We provide preliminary results showing that the second term can be used for guided exploration during transfer instead of relying on ε-greedy exploration.
After the introduction of relevant background material in Section 1, we introduce the successor feature framework with a non-linear reward function in Section 2. Section 3 provides experimental support and an analysis of the new term in the decomposition. The paper concludes with a final discussion and possible avenues for future work in Section 4." }, { "heading": "1 BACKGROUND", "text": "" }, { "heading": "1.1 REINFORCEMENT LEARNING", "text": "Consider the interaction between an agent and an environment modelled by a Markov decision process (MDP) (Puterman, 2014). An MDP is defined as a set of states $\mathcal{S}$, a set of actions $\mathcal{A}$, a reward function $R : \mathcal{S} \to \mathbb{R}$, a discount factor $\gamma \in [0, 1]$, and a transition function $T : \mathcal{S} \times \mathcal{A} \to [0, 1]$. The transition function gives the next-state distribution upon taking action $a$ in state $s$ and is often referred to as the dynamics of the MDP.
The objective of the agent in RL is to find a policy $\pi$, a mapping from states to actions, which maximizes the expected discounted sum of rewards within the environment. One solution to this problem is to rely on learning a value function, where the action-value function of a policy $\pi$ is defined as:
$Q^\pi(s, a) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(s_t) \mid S_t = s, A_t = a\right]$
where $\mathbb{E}_\pi[\dots]$ denotes the expected value when following the policy $\pi$. The policy is learned using an alternating process of policy evaluation, which computes the action-value of a particular policy, and policy improvement, which derives a new policy that is greedy with respect to $Q^\pi(s, a)$ (Puterman, 2014)." }, { "heading": "1.2 SUCCESSOR FEATURES", "text": "Successor Features (SF) offer a decomposition of the Q-value function and have been mentioned under various names and interpretations (Dayan, 1993; Kulkarni et al., 2016; Barreto et al., 2017; Machado et al., 2017). This decomposition follows from the assumption that the reward function can be approximately represented as a linear combination of learned features $\phi(s; \theta_\phi)$, extracted by a neural network with parameters $\theta_\phi$, and a reward weight vector $w$. As such, the expected one-step reward can be computed as $r(s, a) = \phi(s; \theta_\phi)^\top w$. Following from this, the Q-function can be rewritten as:
$Q(s, a) \approx \mathbb{E}_\pi\left[r_{t+1} + \gamma r_{t+2} + \dots \mid S_t = s, A_t = a\right] = \mathbb{E}_\pi\left[\phi(s_{t+1}; \theta_\phi)^\top w + \gamma\, \phi(s_{t+2}; \theta_\phi)^\top w + \dots \mid S_t = s, A_t = a\right]$
$Q(s, a) = \psi^\pi(s, a)^\top w$
where $\psi(s, a)$ are referred to as the successor features under policy $\pi$. The $i$-th component of $\psi(s, a)$ provides the expected discounted sum of $\phi^{(i)}_t$ when following policy $\pi$ starting from state $s$ and
Because the decomposition still has the same form as the Q-function, the successor features are computed using a Bellman equation update in which the reward function is replaced by φt:\nψπ(φt, at) = φt + γE [ ψπ(φt+1, at+1) ] such that approximate successor features can be learned using an RL method, such as Q-Learning (Szepesvári, 2009).\nFollowing from this, the approximation of the reward vector w becomes a supervised learning problem. Often, this weight is learned using ordinary least squares from the sampled environmental data. One benefit of having a decoupled representation is that only the relevant function must be relearned when either the dynamics or the reward changes. Therefore, if the task changes, but the environmental dynamics remain constant, only the reward vector parameters w must be relearned, which are minimal compared to the total number of parameters in the full model." }, { "heading": "2 MODEL, ARCHITECTURE, AND TRAINING", "text": "The Successor Feature framework has several limitations, primarily stemming from the assumptions around its derivation, such as constant environmental structure between tasks or that the reward can be linearly predicted from state features. Work towards solving the former has been developed by Zhang et al. (2017) whereby they learn a linear mapping between task state features. The latter assumption, whereby the reward is assumed to be a linear mapping of state features, is not guaranteed and, as we show, the Successor Feature framework fails in such cases. Therefore, the method presented in this section aims to provide a stronger guarantee of the framework’s performance in such cases by developing a more robust reward component.\nThis section discusses our change to the successor feature framework, which adjusts reward function, from a linear function, to a non-linear function. First, a discussion of the new decomposition is given with the full derivation provided in Appendix A. Then experimental support for this change will be presented and analyzed to examine what the new term in the decomposition learns." }, { "heading": "2.1 NON-LINEAR REWARD FUNCTION", "text": "The successor feature framework builds upon the assumption that the current reward rt can be represented by the linear combination of the current state representation φt ∈ Rz and a learned reward vector w ∈ Rz , such that rt = φ>t w. This form is limiting because there is no guarantee that the reward will be a linear combination of the state features or that the required state features can be learned by the encoder (Eysenbach et al., 2018; Hansen et al., 2019). In practice the optimal state features are often not learned; therefore, we build on the basis that the state features are sub-optimal, which in itself is not a strong assumption. To increase the flexibility of this reward model, let us consider the following form:\nrt = φ > t o + φ > t Aφt (1)\nwhere {φt,o} ∈ Rz , and A ∈ Rz×z . Both o and A are learnable parameters modelling the reward structure of the environment. Equation 1 shows that the formulation introduces a non-linear transformation with respect to φ. Comparing this with the original formulation, it is evident that this is equivalent to setting w = o + Aφ. The state-action value function Q(s, a), under this new reward structure, can be derived to yield:\nQπ(st, a) = ψ π(st, a) >o + βtr(AΛπ(st, a)) (2)\nwhere β ∈ {0, 1} controls the inclusion of Λ and tr is the trace operator. 
It can now be shown that $\psi$ and $\Lambda$ satisfy the Bellman equation (Bellman, 1966):
$\psi^\pi(s_t, a) = \mathbb{E}_\pi[\phi_{t+1} + \gamma\, \psi(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a] \quad (3)$
$\Lambda^\pi(s_t, a) = \mathbb{E}_\pi[\phi_{t+1}\phi_{t+1}^\top + \gamma\, \Lambda(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a] \quad (4)$
where, for $\psi$ and $\Lambda$, $\phi$ and $\phi\phi^\top$ respectively play the role of rewards. In addition to $\psi$, it is now necessary to model $\Lambda$, which outputs an $\mathbb{R}^{z \times z}$ matrix per action. The quantity $\phi_t\phi_t^\top$ can be interpreted as an auto-correlation matrix of the state features. We can see that this form allows the $\Lambda$ term to model some form of the future expected stochasticity of the environment. For example, the diagonal of $\Lambda$ will model a second-order moment capturing each feature’s change with respect to itself1. We provide analysis and further discussion of $\Lambda$ in Section 3.5." }, { "heading": "2.2 MODEL STRUCTURE AND TRAINING", "text": "The proposed model, shown in Figure 1a, uses an encoder to produce a state embedding $\phi_t$ consumed by the downstream modelling tasks. Figure 1b shows how the current reward $r_t$ is predicted using $w = o + A\phi_t$ and the current state representation $\phi_t$; this process is defined in Equation 1. Similar to previous work with successor features, the structure includes pathways for an encode-decode task and successor feature prediction $\psi$ (Machado et al., 2017; Kulkarni et al., 2016; Zhang et al., 2017). The decoder network ensures that the features learned by the encoder, which produces $\phi$, contain useful information for prediction. Furthermore, only the gradients from the state-dependent and reward prediction tasks modify the encoder parameters, and therefore $\phi$. An additional branch is added, by way of the non-linear reward function, to model the quantity $\Lambda(s, a)$. This branch’s output is a matrix, which differs from the vector-predicting branches $\psi$ and $\phi$.
The encode-decode task is trained by minimizing the mean squared difference between the input $s_t$ and the decoder’s reconstructed version $\hat{x}_t$ from $\phi$:
$L_d(s_t; \theta_\phi, \hat{\theta}_\phi) = [s_t - g(\phi_t; \hat{\theta}_\phi)]^2 \quad (5)$
where $\phi$ is the output of the encoder with parameters $\theta_\phi$ and $g(\cdot; \hat{\theta}_\phi)$ produces the output of the decoder with parameters $\hat{\theta}_\phi$. As mentioned previously, we train $\psi$ and $\Lambda$, parameterized with $\theta_\psi$ and $\theta_\Lambda$ respectively, using the Bellman equations to minimize the following losses:
$L_\psi(s_t, a_t; \theta_\psi) = \mathbb{E}[(\phi_t^- + \gamma\, \psi(s_{t+1}, a^*; \theta_\psi^-) - \psi(s_t, a_t; \theta_\psi))^2] \quad (6)$
$L_\Lambda(s_t, a_t; \theta_\Lambda) = \mathbb{E}[(\phi_t^- \phi_t^{-\top} + \gamma\, \Lambda(s_{t+1}, a^*; \theta_\Lambda^-) - \Lambda(s_t, a_t; \theta_\Lambda))^2] \quad (7)$
where $a^* = \operatorname{argmax}_{a} Q(s_{t+1}, a)$. To help stabilize learning, we use lagged versions of $\theta_\phi$, $\theta_\psi$, and $\theta_\Lambda$, as done by Mnih et al. (2015); the lagged version is signified with the $-$ symbol in the exponent. Unfortunately, as the dimensionality $z$ grows, the number of parameters needed by $\Lambda$ grows quadratically. However, by identifying the $\phi_t\phi_t^\top$ term in $\Lambda(s, a)$ as a symmetric matrix, it is possible to model only the upper triangular portion of the matrix2, requiring about half the number of parameters. To further reduce parameters, the $\psi$ and $\Lambda$ pathways each have two hidden layers before their outputs, as reflected in Figure 1a. In this way, parameters are shared between the pathways, which contrasts with other works that use multiple sets of layers per action $a \in \mathcal{A}$ (Kulkarni et al., 2016; Zhang et al., 2017).
1A quantity close to the variance, but not zero mean.
2It would still be necessary to manipulate this matrix so that it forms a full matrix.
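A minimal PyTorch-style sketch of the TD targets behind Equations 6 and 7, using lagged (target) copies of the networks; all module and argument names here are our own illustration rather than the authors' implementation:

```python
import torch

def psi_lambda_losses(phi_lagged, psi_net, lam_net, psi_lagged_net,
                      lam_lagged_net, s_t, a_t, s_next, a_star, gamma=0.99):
    """TD losses for psi (Eq. 6) and Lambda (Eq. 7); the lagged
    networks provide the bootstrapped targets."""
    with torch.no_grad():
        phi = phi_lagged(s_t)                          # phi_t^-
        outer = phi.unsqueeze(-1) * phi.unsqueeze(-2)  # phi phi^T, (B, z, z)
        psi_target = phi + gamma * psi_lagged_net(s_next, a_star)
        lam_target = outer + gamma * lam_lagged_net(s_next, a_star)
    loss_psi = ((psi_target - psi_net(s_t, a_t)) ** 2).mean()
    loss_lam = ((lam_target - lam_net(s_t, a_t)) ** 2).mean()
    return loss_psi, loss_lam
```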
To learn the reward parameters $A$ and $o$, which are the parameters of the approximated non-linear reward function, the following squared loss is used:
$L_r(s_t; o, A) = [r_t - (\phi_t^\top o + \beta\, \phi_t^\top A \phi_t)]^2 \quad (8)$
We found that the loss $L_r$ is not enough to train the parameters $A$ and $o$ alone, and that also regressing towards the Q-function target is more informative. Therefore, similar to Ma et al. (2020), we use the additional loss:
$L_Q(s_t, a_t; o, A, \theta_\psi, \theta_\Lambda) = [\hat{Q}_t - (\psi(s_t, a_t; \theta_\psi)^\top o + \beta\, \mathrm{tr}(A \Lambda(s_t, a_t; \theta_\Lambda)))]^2 \quad (9)$
where $\hat{Q}_t = r_t + \gamma Q(s_{t+1}, a^*)$ is the target term3. The additional $L_Q$ loss can be treated as an auxiliary loss that forces the agent to learn values relevant to the actual quantity used for decision making. The $L_Q$ loss is additively included and scaled with a hyperparameter $\lambda$, which, in this work, is typically set between 0.01 and 0.1. It does not adjust the feature parameters involved in the prediction of $\phi_t$.
However, similar to $\Lambda$, as the dimensionality $z$ increases, so does the number of parameters needed to model the matrix $A \in \mathbb{R}^{z \times z}$. Therefore, in the interest of reducing the number of parameters, we use a factorization that splits the matrix $A \in \mathbb{R}^{z \times z}$ into two parts with a smaller inner dimension $f$: $A = L R^\top$, where $\{L, R\} \in \mathbb{R}^{z \times f}$. By factoring the matrix in this way, we require $2 \times z \times f$ parameters instead of $z \times z$. If we use values of $f$ smaller than $z/2$, we reduce the number of parameters required by the matrix $A$. A similar factorization was suggested in the context of visual question answering (Yu et al., 2017; Fukui et al., 2016). The factorization of $A$ was primarily done to reduce the total number of parameters in our model. In Appendix E, we show that using the full matrix is still a tractable learning problem and differs only slightly in overall performance. Combining our losses, the composite loss function is the sum of the losses given above:
$L(\theta_\phi, \hat{\theta}_\phi, \theta_\psi, \theta_\Lambda, o, A) = L_d + L_\psi + \beta L_\Lambda + L_r + \lambda L_Q \quad (10)$
In practice, to optimize Equation 10 with respect to its parameters, $(\theta_\psi, \theta_\Lambda)$ and $(\theta_\phi, \hat{\theta}_\phi, o, A)$ are updated iteratively. Doing so increases the stability of the approximations learned by the model and ensures that the branches modelling $\psi$ and $\Lambda$ do not backpropagate gradients that affect $\theta_\phi$ (Machado et al., 2017; Zhang et al., 2017; Kulkarni et al., 2016). Additionally, by training in this way, the state representation $\phi$ can learn features that are both a good predictor of the reward $r_t$ and useful in discriminating between states (Kulkarni et al., 2016)." }, { "heading": "3 EXPERIMENTS", "text": "This section examines the properties of the proposed approach on Axes, a navigation task, and on Reacher, a robotic control task built using the MuJoCo engine (Todorov et al., 2012). The environments are shown in Figure 2; they each contain several tasks specified by goal location and are split between training and test distributions. The experiments examine what the model learns and give a preliminary examination of possible uses for the $\Lambda$ component in the proposed model.
Specifically, we show that the non-linear assumption increases the representational power of the reward function, provide evidence that the model can learn non-linear rewards, examine the learned $\Lambda$ function to understand whether it captures environmental stochasticity, and evaluate different guided exploration strategies using the $\Lambda$ term on new tasks. Additional environment details are provided in Appendix B.
3This term is written to be generally applicable to different training methodologies.
As we train with an A3C-type framework, we regress towards an estimate of the return from the environment trajectories." }, { "heading": "3.1 ENVIRONMENTS", "text": "Axes: In this environment, shown in Figure 2a, the agent, shown by the red square, must traverse the map to reach a goal location using four actions: up, down, left, and right. The agent receives a reward equal to the negative distance between itself and the target goal at each step. The agent’s state space contains the (x, y) locations of itself and of the current target goal within the environment, such that the state $s_t \in \mathbb{R}^4$. With this state space, the agent must learn a reward function that can approximate the distance between itself and the goal location, $d(a, b) = \sqrt{(b_x - a_x)^2 + (b_y - a_y)^2}$, a non-linear function. We expect that the linear variant will fail in this task, as it cannot model a non-linear reward function. This experiment shows that the non-linear reward function can compensate for a weak state representation.
Reacher: The second environment is a control task defined in the MuJoCo physics engine (Todorov et al., 2012), shown in Figure 2b. A modified version of the robotic model provided by Metaworld (Yu et al., 2019) was used. This environment was chosen to show that the proposed method can scale to difficult control tasks. In this environment, the agent must move a simulated robotic arm to a specific 3D point in space by activating four torque-controlled motors. Similarly to Axes, the environment has predefined tasks split between training and test distributions, with the eight goals shown in Figure 2b as green and red balls, respectively. The state space is represented as the (x, y, z) locations of the current goal and of the end effector, such that $s \in \mathbb{R}^6$. Reacher is similar to the Axes environment in that the agent must compute the distance between the two points. The agent receives a reward equal to the negative distance between the end effector and the current target goal at each step. We discretize the actions such that the agent has nine discrete actions that control the arm’s movements." }, { "heading": "3.2 EXPERIMENTAL SETUP", "text": "The train and test setup was similar to that used in previous studies. The agent is trained on a distribution of randomly sampled train tasks; then, during testing, we change to a distribution of unseen tasks. A single policy $\pi$ is trained over all tasks. During transfer to unseen tasks, the model re-learns only the reward parameters, with the remainder of the model frozen (Zhang et al., 2017; Kulkarni et al., 2016). The newly learned policy will vary from the original but is able to exploit previously learned environment knowledge for the new tasks.
Our method is compared against a baseline version of the successor feature framework, most similar to that of Kulkarni et al. (2016). This baseline is identical in all ways to the proposed method except for the exclusion of the terms containing $\Lambda$, specifically in Equations 2 and 10, which can be obtained by setting $\beta = 0$. A uniform random action baseline was considered in all environments to act as a floor.
The number of fully connected layers within the encoder of each model varied between environments. The Axes environment used one layer, while the Reacher environment used two layers in its encoder. Initially, on the Axes environment, the models all used the raw features with no encoder
We found this led to worse performance for the linear model as it now had no chance to learn a suitable encoding of the features for reward prediction. In Appendix E, we performed an ablation within the Axes environment that controls for the number of parameters in both variants of the model; we found even with equal parameters there is little impact on overall performance.\nWe report each model’s mean performance on all plots as the average over three runs with varied seeds. Each plot includes the standard deviation over all runs as a shaded area. Additional experiment and model details can be found in Appendix C." }, { "heading": "3.3 ENVIRONMENT PERFORMANCE", "text": "The first step was to examine the performance of the proposed method against various baselines. The primary point of comparison was between the proposed method and the original formulation of the successor feature framework, which can be recovered exactly by setting β = 0 in Equations 2 and 10 of the proposed model. The result of these experiments are shown in Figure 3 for both Axes, on the left, and Reacher, on the right. As shown, all versions of the proposed model using the non-linear reward outperformed the other baseline methods in either convergence speed or overall performance. Clearly, using a non-linear reward function greatly improved flexibility and performance, enabling the proposed method to learn a more accurate reward model for the task.\nThe failure of the linear reward model, SF β = 0 of Figure 4a and 4b, in both environments is expected. The model, with the linear reward function, is not able to appropriately model the environments reward structure as the reward is a non-linear function of the state; in this case, the agent’s coordinates and the current goal location. It was found that increasing the representational power of the encoder with additional fully connected layers helps but does not allow the model to match performance of the non-linear variant (see Appendix E). We also hypothesize that the poor performance of the linear model, leading it to eventually perform worse than random, is due to the model learning degenerate features." }, { "heading": "3.4 TASK TRANSFER", "text": "An important property of the successor feature framework is the ability to adapt rapidly to new tasks within the same environment. Adaption, or transfer, is accomplished by freezing the model’s state-dependent components, such as ψ, and learning only the reward parameters w. Because w is often a small vector equal to the embedding dimensionality of φ, it can be quickly learned. This section examines how well the proposed method, with the additional branch for Λ and the reward parameter A, can transfer to new tasks in both the Axes and Reacher environment.\nAfter training the models to convergence, we change the task distribution within the environment. After the task distribution change, denoted by the dashed vertical line in Figure 4, all model parameters were frozen except those pertaining to the reward functions, which were then trained using samples\nfrom this new task. During transfer, we scaled the learning rate by a fixed factor, randomly initialize the learned weights, and re-decay the value so the method has a chance to explore. Full details are provided in Appendix D.\nFigure 4 clearly shows that our proposed method converged quickly to the new test tasks and does so faster than training from scratch on both the Axes and Reacher environments. 
Faster learning indicates that the model is reusing previously learned task knowledge. We see that the linear variant is still unable to model the reward within the environment, leading to poor performance." }, { "heading": "3.5 MODELLING ENVIRONMENTAL STOCHASTICITY", "text": "This section examines the $\Lambda$ function to determine whether it can capture stochasticity in the environment. We use the Axes environment, as it is easy to examine the representations learned by the model. However, to do so, we modify the Axes environment to include some randomness. The modified version, referred to as half-random, is identical in all aspects to the base version except for a location-based conditional that affects the agent’s actions. In other words, if the agent is within the positive-x half of the map, $x > 0$, then actions are randomly perturbed with a fixed probability. Otherwise, they are fully deterministic. This randomness is shown in Figure 5a by the red shaded area.
After training to convergence on the half-random variant, we examine the $\Lambda$ function that the model has learned. Because the $\Lambda$ function models the auto-correlation matrix, the future expected correlation of each feature with itself is found by looking along the diagonal. Figure 5b shows the result of plotting a diagonal value, in this case for feature 1, of the $\Lambda$-matrix over the entire state space. We can see that one of the diagonal components of $\Lambda$ did indeed learn to approximately model the conditional randomness within the environment." }, { "heading": "3.6 GUIDED EXPLORATION WITH Λ", "text": "This section presents preliminary results on possible uses for the $\Lambda$-function; specifically, whether it is possible to use the $\Lambda$-function for guided exploration during transfer within the Axes and Reacher environments.
The successor features, given in Equation 2, can be interpreted as predicting the future expected path taken by the policy $\pi$ in an environment. Under this interpretation, $\psi$ can be seen as capturing the expected features of the states, and $\Lambda$ the expected variance between state features along these pathways. Under an ε-greedy policy, the captured variance would be induced by the random actions taken. Adding noise to the $\Lambda$ component would then perturb around the expected path, the one captured by $\psi$, to nearby states, as an ε-greedy policy does. Therefore, instead of using ε-greedy exploration, it is possible to add noise to $\Lambda$ during transfer, such that $\hat{\Lambda}(s, a) = \Lambda(s, a) + \epsilon\, \Lambda(s, a)$, where $\epsilon$ is sampled from some distribution. During learning, the variance of the sampling distribution, controlled by $\alpha$, can be annealed to some final value. The actions are then sampled from the model at time $t$ as:
$a_t = \operatorname{argmax}_{a^*} \left\{ \psi(s_t, a^*)^\top o + \mathrm{tr}(A \hat{\Lambda}(s_t, a^*)) \right\} \quad (11)$
Various permutations of sampling distributions and structures were examined. Experimentally, it was found that using a scalar value sampled from uniform noise, that is, $\epsilon \sim U(-\alpha, \alpha)$ with $\epsilon \in [-\alpha, \alpha]$, provides the best performance. From Figure 6, we see that using $\Lambda$ for directed exploration is a viable alternative. In Figure 6a, the directed exploration method converges faster than ε-greedy. We believe that using $\Lambda$ lets the model explore efficiently in directions along expected state pathways and reuse previously gained knowledge for new tasks." }, { "heading": "4 CONCLUSION & FUTURE WORK", "text": "In this paper, we have derived a novel formulation of successor features with a non-linear reward.
We have shown that the agent can model a non-linear reward structure, which is not possible under the old linear formulation. Further, we have shown the utility of the $\Lambda$ term that appears in the derivation of the new state-action value function. Experimentally, we have shown that the $\Lambda$ term is able to capture the stochastic nature of an environment and can be used for directed exploration.
In future work, we aim to explore the $\Lambda$ function more deeply; specifically, what $\Lambda$ learns and whether we can find a formulation that learns the future expected variance. Other possible avenues of future work include improvements to directed exploration with different annealing schedules, and whether our findings with the $\Lambda$ function can help improve the Q-function." }, { "heading": "A NON-LINEAR REWARD DERIVATION", "text": "Here we provide the derivation for the non-linear reward in the successor framework. First, we start by assuming that the reward $r_t$ has the following form:
$r_t = \phi_t^\top o + \phi_t^\top A \phi_t \quad (12)$
where $\{\phi_t, o\} \in \mathbb{R}^{z \times 1}$ and $A \in \mathbb{R}^{z \times z}$, and both $o$ and $A$ are learnable parameters. Following from the definition of the state-action value function $Q(s, a)$, the adjusted reward function can be substituted to yield:
$Q^\pi(s, a) = \mathbb{E}_\pi[r_{t+1} + \gamma r_{t+2} + \dots \mid S_t = s, A_t = a] \quad (13)$
$= \mathbb{E}_\pi[\phi_{t+1}^\top o + \phi_{t+1}^\top A \phi_{t+1} + \gamma \phi_{t+2}^\top o + \gamma \phi_{t+2}^\top A \phi_{t+2} + \dots \mid S_t = s, A_t = a] \quad (14)$
Dropping the conditional portion of the expectation for brevity, linearity of expectation can be used to split apart the terms containing $A$ and $o$. Then $o$ is pulled out of the first term:
$= \mathbb{E}_\pi[\phi_{t+1}^\top o + \gamma \phi_{t+2}^\top o + \dots] + \mathbb{E}_\pi[\phi_{t+1}^\top A \phi_{t+1} + \gamma \phi_{t+2}^\top A \phi_{t+2} + \dots] \quad (15)$
$= \mathbb{E}_\pi[\phi_{t+1} + \gamma \phi_{t+2} + \dots]^\top o + \mathbb{E}_\pi[\phi_{t+1}^\top A \phi_{t+1} + \gamma \phi_{t+2}^\top A \phi_{t+2} + \dots] \quad (16)$
By recognizing the first expectation term as the successor features $\psi(s, a)$, Equation 16 can be rewritten as
$= \psi^\pi(s, a)^\top o + \mathbb{E}_\pi[\phi_{t+1}^\top A \phi_{t+1} + \gamma \phi_{t+2}^\top A \phi_{t+2} + \dots] \quad (17)$
Because $\phi^\top A \phi$ results in a scalar, the trace function $\mathrm{tr}(\cdot)$ can be used inside the right-hand term:
$= \psi^\pi(s, a)^\top o + \mathbb{E}_\pi[\mathrm{tr}(\phi_{t+1}^\top A \phi_{t+1}) + \mathrm{tr}(\gamma \phi_{t+2}^\top A \phi_{t+2}) + \dots] \quad (18)$
By exploiting the fact that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, the terms inside the trace function can be swapped to yield:
$= \psi^\pi(s, a)^\top o + \mathbb{E}_\pi[\mathrm{tr}(A \phi_{t+1} \phi_{t+1}^\top) + \mathrm{tr}(\gamma A \phi_{t+2} \phi_{t+2}^\top) + \dots] \quad (19)$
Because both $\mathrm{tr}(\cdot)$ and $A$ are linear, they can be pulled out of the expectation, giving:
$= \psi^\pi(s, a)^\top o + \mathrm{tr}(\mathbb{E}_\pi[A \phi_{t+1} \phi_{t+1}^\top + \gamma A \phi_{t+2} \phi_{t+2}^\top + \dots]) \quad (20)$
$= \psi^\pi(s, a)^\top o + \mathrm{tr}(A\, \mathbb{E}_\pi[\phi_{t+1} \phi_{t+1}^\top + \gamma \phi_{t+2} \phi_{t+2}^\top + \dots]) \quad (21)$
Finally, the remaining expectation can be expressed as a function:
$Q^\pi(s, a) = \psi^\pi(s, a)^\top o + \beta\, \mathrm{tr}(A \Lambda^\pi(s, a)) \quad (22)$
$\beta \in \{0, 1\}$ is a hyperparameter that controls the inclusion of the non-linear component. We define $\psi^\pi$ and $\Lambda^\pi$ as:
$\psi^\pi(s, a) = \mathbb{E}_\pi[\phi_{t+1} + \gamma\, \psi(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a] \quad (23)$
$\Lambda^\pi(s, a) = \mathbb{E}_\pi[\phi_{t+1} \phi_{t+1}^\top + \gamma\, \Lambda(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a] \quad (24)$" }, { "heading": "B ENVIRONMENTS", "text": "" }, { "heading": "B.1 AXES", "text": "In the Axes environment, each action moves the agent by 0.01 units in the desired direction, with the traversable space defined by a 15 × 15 unit box centered at the origin (0, 0). Within this environment, eight separate goal locations exist, split between train and test distributions. An episode ends when either the agent reaches the goal or more than 225 steps have elapsed. The agent’s starting location is randomly sampled from a grid of 3 × 3 step units, centered at (0, 0).
B.2 REACHER
In the Reacher environment, an episode ends when 150 steps have elapsed or the agent is within 7 cm of the goal.
Because the models can be used only with discrete actions, it was necessary to transform the environmental actions.
Therefore, the four-dimensional continuous action space $\mathcal{A}$ was discretized using two values per dimension: the maximum positive and maximum negative torque for each actuator. An all-zero option was included that applies zero torque along all actuators, resulting in a total of nine discrete actions." }, { "heading": "C EXPERIMENTS", "text": "We use the factorization of the matrix $A$ with the two separate matrices $L$ and $R$, with an inner dimension equal to $f = z/2 - 1$. All models, unless specified otherwise, were trained using a synchronous version of A3C (Mnih et al., 2016). We trained all components on n-step trajectories, with $n = 5$, generated by 16 separate threads. The targets for $\psi$ and $\Lambda$ were estimated using an n-step return across latent states $\phi$. All models used an annealed ε-greedy method for exploration. Within both the Axes and Reacher environments, ε was annealed from 1.0 to a final value over the first 250k steps.
In the Axes and Reacher environments, the encoder and decoder contain one and two hidden layers, respectively, with an embedding size equal to double the raw state size. Both $\psi$ and $\Lambda$ increase the hidden dimension $z$ by a fixed factor before output. This factor $z_{\text{factor}}$ depends on the environment. All environments used a discount factor of $\gamma = 0.99$, $\lambda = 0.1$, and updated the parameters every 25k steps.
C.1 AXES
We used an embedding size of $z = 8$ in the Axes environment. The final ε value was 0.1, and a learning rate of $\alpha = 2.5\mathrm{e}{-4}$ was used.
C.2 REACHER
We used an embedding size of $z = 12$ in the Reacher environment. The final ε value was 0.05, and a learning rate of $\alpha = 5\mathrm{e}{-4}$ was used." }, { "heading": "D TRANSFER", "text": "During transfer, we reinitialize the reward-specific parameters $A$ and $o$ using an orthogonal initialization with a gain of 1. All other model parameters are held frozen and do not change. The learning rate is increased by a factor of 2× and 10× on the Axes and Reacher environments, respectively. We anneal the exploration parameter from 1 to 0.1 in the Axes environment, and to 0.05 in the Reacher environment, over the first 200k steps." }, { "heading": "E ADDITIONAL EXPERIMENTS", "text": "In this section, we discuss additional experiments performed within the Axes environment. The Axes environment was used as it has simple dynamics and a simple representation, such that the resulting performance is easy to interpret. The primary motivation was to understand whether there are any confounding factors in the performance of the linear and non-linear models.
E.1 STRONGER STATE ENCODER
In this experiment, we add additional fully connected layers to the encoder of the linear model. From Figure 7a, we see that as the strength of the encoder increases, so does the model’s overall performance. This is unsurprising, as with additional parameters the encoder is able to learn a better encoding of the state for reward prediction. Approaching the performance of the non-linear model requires three additional fully connected layers (four in total), each with $z$ hidden units. This is a significant increase in parameters compared to the non-linear variant, which has only a single fully connected layer in its encoder. Even with the additional layers, the linear version does not match the performance of the non-linear model.
E.2 A FACTORIZATION
In this experiment, we show that the factorization of the parameter matrix $A$ has little effect on the model’s performance. We compared using a full matrix of parameters, such that $A \in \mathbb{R}^{z \times z}$, against the factorization $A = L R^\top$ with $\{L, R\} \in \mathbb{R}^{z \times f}$.
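A small NumPy sketch of this low-rank factorization, with dimensions following Appendix C (f = z/2 − 1); the code is our own illustration, not the authors' implementation:

```python
import numpy as np

z = 8
f = z // 2 - 1  # inner dimension used in Appendix C

rng = np.random.default_rng(0)
L = rng.normal(size=(z, f))
R = rng.normal(size=(z, f))
A = L @ R.T  # factorized reward matrix, rank <= f

phi = rng.normal(size=z)
# The bilinear reward term phi^T A phi can be computed without
# materializing A: (phi^T L)(R^T phi).
quad = (phi @ L) @ (R.T @ phi)
assert np.isclose(quad, phi @ A @ phi)

print("full A params:", z * z, "factorized params:", 2 * z * f)
```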
From Figure 7b, we see that the final performance of both methods is identical and that they converge at roughly the same speed. The full-matrix variant converges slightly faster than the factorized version but has nearly double the number of parameters. Learning the full-matrix variant is a tractable learning problem, and learning matrices of such scale has been done in other work, such as in state-transition models (Oh et al., 2015; Farquhar et al., 2017; Tasfi & Capretz, 2018). In the case of our model, we chose the factorized version to reduce the total number of parameters.
E.3 EQUAL PARAMETERS
To ensure that the non-linear model’s performance is not simply due to a greater number of parameters, we examine a linear variant with an approximately equal number of parameters. The number of parameters of the linear variant is increased by adjusting the number of hidden units, controlled by the hyperparameter $z$. From Figure 7c, we see that setting the number of parameters roughly equal to that of the non-linear model helps performance slightly, but the linear model still cannot solve the task. We conclude that the new reward structure introduced in this paper, with its ability to model a non-linear reward, is the primary reason for the increase in performance." } ]
2,020
NON-LINEAR REWARDS FOR SUCCESSOR FEATURES
SP:1e11e9ad7288da902ed69a7735d1d89e81692b54
[ "The paper proposed to cluster the data using k different VAEs’. The method is different from the existing VAE-based deep clustering method (VaDE), which uses only one VAE but employs a Gaussian mixture prior to achieve the clustering goal. The difficulties of the proposed model lie at how to train the model efficiently. To this end, some approximations are made to the ELBO by using the MAP value to replace the expectation as well as dropping some KL term. The approximations are the key to the training, but not justified well. Experiments are conducted on several image and text datasets, and show superior performance comparing to existing deep clustering methods." ]
In this study, we propose a deep clustering algorithm that utilizes a variational autoencoder (VAE) framework with a multi-encoder-decoder neural architecture. This setup enforces a complementary structure that guides the learned latent representations towards a more meaningful space arrangement. It differs from previous VAE-based clustering algorithms by employing a new generative model that uses multiple encoder-decoders. We show that this modeling results in both better clustering capabilities and improved data generation. The proposed method is evaluated on standard datasets and is shown to significantly outperform state-of-the-art deep clustering methods.
[ { "affiliations": [], "name": "A MIXTURE" } ]
[ { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Proceedings of the International Conference on Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Sharon Fogel", "Hadar Averbuch-Elor", "Daniel Cohen-Or", "Jacob Goldberger" ], "title": "Clustering-driven deep embedding with pairwise constraints", "venue": "IEEE Computer Graphics and Applications,", "year": 2019 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "David Gotz", "Jimeng Sun", "Nan Cao", "Shahram Ebadollahi" ], "title": "Visual cluster analysis in support of clinical decision intelligence", "venue": "In AMIA Proceedings of the Annual Symposium on American Medical Informatics Association (AMIA),", "year": 2011 }, { "authors": [ "Mark S Handcock", "Adrian E Raftery", "Jeremy M Tantrum" ], "title": "Model-based clustering for social networks. Journal of the Royal Statistical Society: Series A (Statistics in Society)", "venue": null, "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (ICCV),", "year": 2016 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": "arXiv preprint arXiv:1611.05148,", "year": 2016 }, { "authors": [ "Jean-Michel Jolion", "Peter Meer", "Samira Bataouche" ], "title": "Robust clustering with applications in computer vision", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1991 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Harold W Kuhn" ], "title": "The hungarian method for the assignment problem", "venue": "Naval Research Logistics Quarterly,", "year": 1955 }, { "authors": [ "Xiaopeng Li", "Zhourong Chen", "Leonard KM Poon", "Nevin L Zhang" ], "title": "Learning latent superstructures in variational autoencoders for deep multidimensional clustering", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Erxue Min", "Xifeng Guo", "Qiang Liu", "Gen Zhang", "Jianjing Cui", "Jun Long" ], "title": "A survey of clustering with deep learning: From the 
perspective of network architecture", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Axel-Cyrille Ngonga Ngomo", "Frank Schumacher" ], "title": "Borderflow: A local graph clustering algorithm for natural language processing", "venue": "In International Conference on Intelligent Text Processing and Computational Linguistics,", "year": 2009 }, { "authors": [ "Yaniv Opochinsky", "Shlomo E Chazan", "Sharon Gannot", "Jacob Goldberger" ], "title": "K-autoencoders deep clustering", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Jost Tobias Springenberg" ], "title": "Unsupervised and semi-supervised learning with categorical generative adversarial networks", "venue": "arXiv preprint arXiv:", "year": 2015 }, { "authors": [ "Bo Yang", "Xiao Fu", "Nicholas D. Sidiropoulos", "Mingyi Hong" ], "title": "Towards K-means-friendly spaces: Simultaneous deep learning and clustering", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Linxiao Yang", "Ngai-Man Cheung", "Jiaying Li", "Jun Fang" ], "title": "Deep clustering by gaussian mixture variational autoencoders with graph embedding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Clustering is one of the most fundamental techniques used in unsupervised machine learning. It is the process of classifying data into several classes without using any label information. In the past decades, a plethora of clustering methods have been developed and successfully employed in various fields, including computer vision (Jolion et al., 1991), natural language processing (Ngomo & Schumacher, 2009), social networks (Handcock et al., 2007) and medical informatics (Gotz et al., 2011). The most well-known clustering approaches include the traditional k-means algorithm and the generative model, which assumes that the data points are generated from a Mixture-of-Gaussians (MoG), and the model parameters are learned via the Expectation-Maximization (EM) algorithm. However, using these methods over datasets that include high-dimensional data is problematic since, in these vector spaces, the inter-point distances become less informative. As a result, the respective methods have provided new opportunities for clustering (Min et al., 2018). These methods incorporate the ability to learn a (non-linear) mapping of the raw features in a low-dimensional vector space that hopefully allow a more feasible application of clustering methods. Deep learning methods are expected to automatically discover the most suitable non-linear representations for a specified task. However, a straightforward implementation of “deep” k-means algorithm by jointly learning the embedding space and applying clustering to the embedded data, leads to a trivial solution, where the data feature vectors are collapsed into a single point in the embedded space, and thus, the k centroids are collapsed into a single spurious entity. For this reason, the objective function of many deep clustering methods is composed of both a clustering term computed in the embedded space and a regularization term in the form of a reconstruction error to avoid data collapsing.\nOne broad family of successful deep clustering algorithms, which was shown to yield state-ofthe-art results, is the generative model-based methods. Most of these methods are based on the Variational Autoencoder framework (Kingma & Welling, 2014), e.g., Gaussian Mixture Variational Autoencoders (GMVAE) (Dilokthanakul et al., 2016) and Variational Deep Embedding (VaDE). Instead of using an arbitrary prior to the latent variable, these algorithms proposed using specific distributions that will allow clustering at the bottleneck, such as MoG distributions. This design results in a VAE based training objective function that is composed of a significant reconstruction term and a second parameter regularization term, as discussed above. However, this objective seems to miss the clustering target since the reconstruction term is not related to the clustering, and actual clustering is only associated with the regularization term optimization. This might result in inferior clustering performance, degenerated generative model, and stability issues during training.\nWe propose a solution to alleviate the issues introduced by previous deep clustering generative models. To that end, we propose the k-Deep Variational Auto Encoders (dubbed k-DVAE). Our\nk-DVAE improves upon the current state-of-the-art clustering methods in several facets: (1) A novel model that outperforms the current methods in terms of clustering accuracy. 
(2) A novel Variational Bayesian framework that balances the data reconstruction and the actual clustering, and that differs from the previous methods. (3) A network architecture that allows better generative modeling and thus more accurate data generation. Importantly, this architecture uses fewer parameters than previous models. We evaluated the k-DVAE algorithm on various standard document and image corpora and obtained improved results over state-of-the-art clustering methods on all the datasets we experimented with." }, { "heading": "2 RELATED WORK", "text": "Deep clustering has been studied extensively in the literature. The most common deep clustering methods aim to project the data into a non-linear, low-dimensional feature space, where the task of clustering appears to be feasible. Then, traditional clustering methods are further applied to perform the actual clustering. Previous works have employed autoencoders (Yang et al., 2016; Ghasedi Dizaji et al., 2017; Yang et al., 2017; Fogel et al., 2019; Opochinsky et al., 2020), Variational Autoencoders (VAEs) (Jiang et al., 2016; Dilokthanakul et al., 2016; Yang et al., 2019; Li et al., 2019) and Generative Adversarial Networks (GANs) (Springenberg, 2015; Chen et al., 2016). IMSAT (Hu et al., 2017) is another recent method; it augments the training data. Our method does not make any use of augmented data during training, and therefore we do not consider IMSAT to be an appropriate or fair baseline for comparison. Additionally, the GMVAE method has been shown to yield inferior performance compared to the rest of the VAE-based deep clustering methods, hence we do not present it in our evaluations.\nAmong the aforementioned work, VaDE (Jiang et al., 2016) and k-DAE (Opochinsky et al., 2020) are most relevant to our work. Both VaDE and our method utilize the Variational Bayes framework and use a probabilistic generative process to define the data generation model. Yet, the difference lies in both the generative process and the use of several autoencoders: our network consists of a set of k autoencoders, where each specializes in encoding and reconstructing a different cluster. The k-DAE architecture likewise consists of a set of k autoencoders, but does not employ generative modeling, which, as we show, is more powerful and yields better clustering performance.\nThe recent, state-of-the-art DGG method (Yang et al., 2019) builds on the foundations of VaDE and integrates graph embeddings that serve as a regularization over the VaDE objective. Under the revised DGG objective, each pair of samples connected in the learned graph is encouraged to have similar posterior distributions, as measured by the Jensen-Shannon (JS) divergence. The other baselines used in this study are described in Section 4.2.\n3 THE k-DVAE CLUSTERING ALGORITHM\nIn this section, we describe our k-Deep Variational AutoEncoders (dubbed k-DVAE). First, we formulate the generative model that our algorithm is based on. Next, we derive the optimization objective. Then we discuss the differences between our model and previous VAE-based algorithms such as VaDE (Jiang et al., 2016) and illustrate the advantages of our approach." }, { "heading": "3.1 GENERATIVE MODEL", "text": "In our generative modeling, we assume that the data are drawn from a mixture of VAEs, each with a standard Gaussian latent r.v., as follows:\n1. Draw a cluster y by sampling from p(y = i) = αi, i = 1, ..., k. 2. Sample a latent r.v.
z from the unit normal distribution, z ∼ N (0, I). 3. Sample an observed r.v. x:\n(a) If x is a real-valued vector: sample a data vector using the conditional distribution, x|(z, y = i) ∼ N (µθi(z),Σθi(z)). (b) If x is a binary vector: sample a data vector using the conditional distribution, x|(z, y = i) ∼ Ber(µθi(z)).\nθi is the stacked vector of parameters of the i-th neural network (NN). It parameterizes the decoder NN that corresponds to the i-th cluster, 1 ≤ i ≤ k, assuming that the total number of clusters is k. µθi(z),Σθi(z) are computed by a decoder NN with an input z and parameters θi. We denote the parameter set of all the decoders by θ = {θ1, ..., θk}.\nNote that the latent data representation z is drawn independently of the selected class y, and the class only affects the sampling of the observation x." }, { "heading": "3.2 LEARNING THE MODEL PARAMETERS BY OPTIMIZING A VARIATIONAL LOWER BOUND", "text": "Direct optimization of the likelihood function p(x;θ) = ∑_y ∫_z p(z)p(y)p(x|z, y;θ)dz is intractable. Instead, we can use variational approximation methods and learn the model parameters by maximizing the Evidence Lower BOund (ELBO). The ELBO(θ,λ) expression is given by:\nELBO(θ,λ) = ∑_y ∫_z q(y, z|x;λ) log p(x|y, z;θ)dz − DKL(q(y, z|x;λ)||p(y, z;θ)), (1)\nwhere DKL is the Kullback-Leibler (KL) divergence between two density functions, and q(y, z|x;λ) is a conditional density function parametrized by λ.\nWe use an approximate conditional density q(y, z|x) that mirrors the structure of the generative model. For each cluster we define an encoder that transforms the input x into the latent space of that cluster:\nq(y = i, z|x;λ) = q(y = i|x)q(z|x, y = i;λi),\nsuch that q(z|x, y = i;λi) = N (z;µλi(x),Σλi(x)), where µλi(x),Σλi(x) are computed by an encoder NN with input x and parameter-set λi, and we use the notation λ = {λ1, ..., λk}. The first term of the ELBO expression (1) can be written as:\n∑_y ∫_z q(y, z|x;λ) log p(x|y, z;θ)dz = ∑_i q(y = i|x) E_{q(z|x,y=i;λi)} logN (x;µθi(z),Σθi(z)). (2)\nWe next use Monte-Carlo sampling to approximate the expectation in Eq. (2):\nE_{q(z|x,y=i;λi)} logN (x;µθi(z),Σθi(z)) ≈ logN (x;µθi(z),Σθi(z)), (3)\nsuch that z|(x, y = i) is sampled from N (µλi(x),Σλi(x)). Applying the chain rule for KL divergence to the second term of the ELBO expression (1), we get:\nDKL(q(y, z|x;λ)||p(y, z;θ)) = DKL(q(y|x;λ)||p(y;θ)) + ∑_i q(y = i|x) DKL(N (µλi(x),Σλi(x))||N (0, I)). (4)\nWe next replace the soft clustering in Eq. (3) and Eq. (4) by a hard clustering:\n∑_{i=1}^{k} q(y = i|x)(logN (x;µθi(zi),Σθi(zi)) − DKL(N (µλi(x),Σλi(x))||N (0, I))) ≈ max_i (logN (x;µθi(zi),Σθi(zi)) − DKL(N (µλi(x),Σλi(x))||N (0, I))). (5)\nFinally, by neglecting the term DKL(q(y|x)||p(y;θ)) in (4) (or equivalently setting q(y|x) = p(y;θ)), we obtain the following objective for optimization:\nELBO(θ,λ) ≈ max_i {logN (x;µθi(zi),Σθi(zi)) − DKL(N (µλi(x),Σλi(x))||N (0, I))}\ns.t. zi ∼ N (µλi(x),Σλi(x)). (6)\nAlgorithm 1 ELBO score computation\nInput: Data sample x. Output: Estimated ELBO score.\nfor i = 1 to k do\n  Compute µλi(x) and Σλi(x) using the i-th encoder.\n  Draw zi ∼ N (µλi(x),Σλi(x)).\n  Compute µθi(zi) and Σθi(zi) using the i-th decoder.\nend for\nCompute the ELBO score using Eq. (6).\nAlgorithm 2 Hard clustering\nInput: Data sample x. Output: Estimated cluster ŷ(x) of x.\nfor i = 1 to k do\n  Compute z̄i ← µλi(x) using the i-th encoder.\n  Compute µθi(z̄i) and Σθi(z̄i) using the i-th decoder.\nend for\nCompute the cluster ŷ(x) using Eq. (8).\nWhen optimizing the ELBO expression, we sample the Gaussian r.v.
zi|(x, y = i) using the reparameterization trick. Note that the ELBO objective function (6) consists of a reconstruction term and a regularization term, and both are involved in the clustering decision. In the derivation of the objective function above we assumed that x is a real-valued vector; the derivation of the ELBO objective function for the discrete case is similar. The score computation procedure is depicted in Algorithm 1, and the overall architecture of the autoencoder used in the training is depicted in Fig. 1 (a code sketch of Algorithms 1 and 2 is given at the end of Section 3.4)." }, { "heading": "3.3 HARD CLUSTERING OF DATA POINTS", "text": "After the model parameters have been learned, we can extract the data clustering. We chose a deterministic version of the clustering procedure (6) that avoids sampling of z and was empirically shown to yield more stable results in our simulations. We used the expectation vector z̄i = µλi(x) instead of a sampled zi. The hard clustering is thus defined as:\nŷ(x) = arg max_i (logN (x;µθi(z̄i),Σθi(z̄i)) − log [p(z̄i|x;λi) / p(z̄i;θi)]) (7)\nAccording to our generative model, p(z̄i;θi) = N (z̄i; 0, I), and, up to an additive constant that does not depend on i,\nlog p(z̄i|x;λi) = logN (z̄i; z̄i,Σλi(x)) = ∑_{s=1}^{d} logN (z̄is; z̄is,Σλi(x)s) = −∑_{s=1}^{d} log σ̄is,\nwhere d is the dimensionality of the latent r.v. and Σλi(x) = Var(z|x, y = i) = diag(σ̄2i1, ..., σ̄2id). Since constants that do not depend on i do not affect the arg max, this finally implies:\nŷ(x) = arg max_i (logN (x;µθi(z̄i),Σθi(z̄i)) − (1/2)‖z̄i‖2 + ∑_{s=1}^{d} log σ̄is) (8)\nThe hard clustering procedure is depicted in Algorithm 2." }, { "heading": "3.4 COMPARISON TO THE VADE METHOD", "text": "Our method and the VaDE algorithm (Jiang et al., 2016) are both based on generative models learned by variational autoencoders. We will now briefly describe VaDE and focus on the differences from our model. The VaDE generative process is based on a MoG model combined with a non-linear function (decoder) and is given by:\n1. Draw a cluster y by sampling from p(y = i) = αi, i = 1, ..., k.\n2. Sample a latent r.v. z using the conditional distribution, z|y = i ∼ N (µi,Σi), where µi,Σi are the trainable parameters of the i-th mixture component. 3. If x is real valued, sample it using the conditional distribution, x|z ∼ N (µθ(z),Σθ(z)).\nIf x is binary valued, sample it using the conditional distribution, x|z ∼ Ber(µθ(z)). µθ(z),Σθ(z) are computed by a decoder NN with an input z and parameters θ.\nNote that unlike our method, this modeling uses the same decoder (parametrized by θ) to construct the observed data for all the different clusters. Hence the decoder is likely to be very complex. When comparing the performance of the two methods in the next section, we show that our model needs far fewer parameters than VaDE, and that its reconstruction quality is much better.\nThe VaDE ELBO(θ,λ) term can be approximated as follows:\nELBO(θ,λ) ≈ logN (x;µθ(z),Σθ(z)) − ∑_{i=1}^{k} pθ(y = i|z)(DKL(N (µλ(x),Σλ(x))||N (µi,Σi)) + log [pθ(y = i|z) / αi]), (9)\nwhere z is sampled from N (µλ(x),Σλ(x)), and we denote the KL term in (9) by C. After the VaDE parameters are learned, the soft clustering of x is pθ(y|z), where z is sampled from N (µλ(x),Σλ(x)). For the full derivation, we refer the reader to Jiang et al. (2016).\nNote that the term C in Eq. (9) refers to the actual MoG-based soft clustering performed by VaDE during the learning phase. The clustering is thus performed here only within the ELBO regularization term. In our method, in contrast, both the reconstruction and regularization parts of the ELBO term are involved in the clustering decision.
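To make the scoring procedures above concrete, the following is a minimal PyTorch sketch of the per-cluster computation behind Algorithms 1 and 2: Eq. (6) with a sampled latent for training, and Eq. (8) with z̄i = µλi(x) for hard clustering. Diagonal covariances are assumed throughout, and all layer sizes and names here are illustrative assumptions; this is a sketch, not the authors' implementation.

import math
import torch
import torch.nn as nn

class ClusterVAE(nn.Module):
    # One encoder-decoder pair; k-DVAE keeps one such module per cluster.
    def __init__(self, d_in, d_lat, d_hid=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, 2 * d_lat))
        self.dec = nn.Sequential(nn.Linear(d_lat, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, 2 * d_in))

def gauss_logpdf(x, mu, log_var):
    # log-density of a diagonal Gaussian, summed over dimensions
    return (-0.5 * (log_var + (x - mu) ** 2 / log_var.exp()
                    + math.log(2 * math.pi))).sum(-1)

def cluster_scores(vaes, x, deterministic=False):
    # Per-cluster scores: Eq. (6) when sampling (Algorithm 1),
    # Eq. (8) when deterministic (Algorithm 2, z fixed to the posterior mean).
    scores = []
    for vae in vaes:
        mu_z, lv_z = vae.enc(x).chunk(2, dim=-1)
        if deterministic:
            z = mu_z
            # Eq. (8): log p(z) - log q(z|x) at the mean; constants that are
            # shared across clusters are dropped
            reg = -0.5 * (z ** 2).sum(-1) + 0.5 * lv_z.sum(-1)
        else:
            # reparameterization trick
            z = mu_z + (0.5 * lv_z).exp() * torch.randn_like(mu_z)
            # negated closed-form KL( N(mu_z, var_z) || N(0, I) )
            reg = -0.5 * (lv_z.exp() + mu_z ** 2 - 1.0 - lv_z).sum(-1)
        mu_x, lv_x = vae.dec(z).chunk(2, dim=-1)
        scores.append(gauss_logpdf(x, mu_x, lv_x) + reg)
    return torch.stack(scores, dim=-1)  # shape: (batch, k)

k, d_in, d_lat = 10, 784, 10  # illustrative sizes
vaes = nn.ModuleList([ClusterVAE(d_in, d_lat) for _ in range(k)])
x = torch.randn(32, d_in)
loss = -cluster_scores(vaes, x).max(dim=-1).values.mean()       # Eq. (6)
y_hat = cluster_scores(vaes, x, deterministic=True).argmax(-1)  # Eq. (8)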
Another variant of our algorithm is a non-generative approach that does not have a regularization term and only minimizes the reconstruction error (Opochinsky et al., 2020). We show in the next section that this results in a significant degradation of the clustering performance. Hence, both the reconstruction term and the regularization term of the ELBO should be involved in the clustering process." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "In this section, we present the datasets, hyperparameters, and experiments conducted to evaluate our approach's clustering results and compare it to other clustering methods." }, { "heading": "4.1 DATASETS", "text": "We used the following datasets in our experiments:\nMNIST: The MNIST dataset consists of 70,000 handwritten digit images (ten classes) of size 28 × 28 pixels. Preprocessing includes centering the pixel values and flattening each image to a 784-dimensional vector.\nSTL-10: The STL-10 dataset consists of RGB colored images of size 96 × 96 pixels. This dataset contains a total number of 10 classes. Since clustering directly from raw pixels of high-resolution images is rather difficult, preprocessing includes feature extraction by passing the images through a pre-trained ResNet-50 (He et al., 2016) and then applying an average pooling operation to reduce the dimensionality to 2048.\nREUTERS: The REUTERS dataset consists of 10,000 English news stories that belong to a total number of 4 categories. Preprocessing includes computing 2000-dimensional TF-IDF feature vectors for the most frequent words in the articles.\nHHAR: The Heterogeneity Human Activity Recognition (HHAR) dataset consists of 10,200 sample records, where each sample relates to one of 6 different categories. Each sample in this dataset is a 561-dimensional vector.\nNote that we set k to be the actual number of classes of the given datasets during our simulations. The overall dataset statistics are summarized in Table 1." }, { "heading": "4.2 EVALUATED MODELS", "text": "We compared our method to the following state-of-the-art deep clustering algorithms:\nAutoencoder followed by Gaussian Mixture Model (AE+GMM): This method trains a single AE using the reconstruction objective, and then applies GMM-based clustering on the embedding space.\nVariational Deep Embedding (VaDE): Introduces a VAE-based generative model that assumes the latent variables follow a mixture of Gaussians, where the means and variances of the Gaussian components are trainable (Jiang et al., 2016).\nLatent Tree Variational Autoencoder (LTVAE): A VAE-based model that assumes a tree structure of the latent variables (Li et al., 2019).\nDeep clustering via a Gaussian mixture VAE with Graph embedding (DGG): A recent VAE-based model that regularizes a Gaussian mixture VAE with graph embeddings (Yang et al., 2019).\nk-Deep-AutoEncoder (k-DAE): This algorithm uses k AEs for deep clustering, where k is assumed to be the number of clusters (Opochinsky et al., 2020). This method serves as the ablation study for our method, since it uses the same reconstruction objective but without the KL regularization term.\nk-Deep-Variational AutoEncoder (k-DVAE): Our clustering method.\nThe encoder-decoder structure used for the first four methods is the same (for a fair comparison) and is composed as follows.
Each encoder network uses dense layers of sizes D−500−500−2000−10, and each decoder network uses dense layers of sizes 10−2000−500−500−D. All these methods use additional mid-layers (to perform clustering). This setting and the remaining hyperparameters were taken from Jiang et al. (2016) and Yang et al. (2019). For both our method and the k-DAE method, the autoencoders used dense layers of sizes D−500−100−10 for the encoder, and 10−100−500−D for the decoder. Note that although we needed to allocate one encoder-decoder network to each cluster, the number of parameters was still drastically lower than that of the compared methods. We tried increasing the number of parameters for each method, but it did not result in any performance gains. Each encoder network outputs mean and variance vectors that form the multivariate normal distribution. The output of the decoder is a single mean vector if the input x is discrete; otherwise, it also outputs a variance vector to form the normal distribution.\nIn our implementation of k-DVAE, similar to the DGG method (Yang et al., 2019), we first pretrain the VaDE network as initialization. Then, we set the initial clusters by applying k-means clustering over the VaDE embedded space. In our case, this architecture was used only in this initialization step." }, { "heading": "4.3 CLUSTERING RESULTS", "text": "Clustering performance of all the compared methods was evaluated with respect to the unsupervised clustering accuracy (ACC) measure, given by\nACC ≜ max_{m∈Sk} (1/N) ∑_{i=1}^{N} 1{yi = m(ŷ(xi))},\nwhere N is the total number of data samples, yi is the ground-truth label that corresponds to the sample xi, ŷ(xi) is the cluster assignment obtained by the model, and m ranges over the set Sk of all possible one-to-one mappings between cluster assignments and labels. This measures the proportion of data points for which the obtained clusters can be correctly mapped to ground-truth classes, where the matching is based on the Hungarian algorithm (Kuhn, 1955); a short implementation sketch is given below. It lies in the range of 0 to 1, where one is a perfect clustering result and zero is the worst.\nIn Table 2 we depict the quantitative clustering results over the tested benchmarks compared to the baseline clustering methods. We report the average ACC over ten training sessions with different random parameter initializations. The table shows that our method outperformed the other methods in terms of accuracy. In addition, using the non-variational k-DAE variant yields inferior results compared to our method, which emphasizes the superiority of the variational generative framework in this setup." }, { "heading": "4.4 QUALITATIVE ANALYSIS", "text": "A key modeling difference between our k-DVAE and recent state-of-the-art models is that our model allocates a different decoder to each cluster. We saw that this yields improved clustering results. We show below that we also gain improved generation capability. In classification tasks, it is known that discriminative methods are better than generative ones since the classes are known, and we only need to find the discriminative features. However, in clustering tasks, where we need to learn the clusters, there is a tight relationship between a model's generation capabilities and its clustering performance.
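As referenced in Section 4.3 above, the ACC measure reduces to a maximum-weight one-to-one matching between predicted clusters and ground-truth labels, which the Hungarian algorithm solves. A minimal sketch follows; the use of scipy and the integer-label encoding are our assumptions for illustration, not details taken from the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Unsupervised clustering accuracy (ACC): find the one-to-one mapping m
    # between predicted clusters and ground-truth labels that maximizes the
    # number of matched samples, then report the matched fraction.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1  # labels assumed in 0..k-1
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1  # co-occurrence counts of (predicted, true) labels
    rows, cols = linear_sum_assignment(-counts)  # Hungarian step (maximize)
    return counts[rows, cols].sum() / y_true.size

For example, clustering_accuracy([0, 0, 1, 1, 2], [1, 1, 0, 0, 0]) returns 0.8: the best mapping matches predicted cluster 1 to class 0 and predicted cluster 0 to class 1, leaving only the single class-2 sample unmatched.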
To gain insight into our model's data generation capabilities, we present examples of images generated by the model's generator network.\nTo generate an example from the i-th cluster, we first sample a random vector z from the unit normal distribution and then feed it to the i-th decoder network, parametrized by θi. The VaDE/DGG algorithm, in contrast, uses a single decoder for all the clusters. Fig. 2 illustrates the generated samples for digits 0 to 9 of MNIST by our method compared to DGG.1 Note that unlike the results shown in Jiang et al. (2016), we performed the digit generation process without restricting it to samples with high posterior values. We note in passing that in Jiang et al. (2016), the authors presented generation results only for good cases where the posterior probability of the correct clustering was at least 0.999. While both k-DVAE and DGG were able to generate smooth and diverse digits, the images generated by DGG are prone to errors. In contrast, each decoder network of the k-DVAE successfully reconstructed its corresponding digit by only using random normal noise as an input.\n1VaDE has a similar generative model to DGG; thus we choose to depict the results of DGG, which is state-of-the-art." }, { "heading": "5 CONCLUSION", "text": "In this work, we proposed the k-Deep Variational AutoEncoder (k-DVAE), a neural generative model for deep clustering. This framework employs k encoder-decoder models designed to learn insightful low-dimensional representations for better clustering. The model is optimized by maximizing the evidence lower bound (ELBO) of the data log-likelihood. Using a distinct set of k parametrized models combined with the variational probabilistic framework results in a much richer representation of each cluster than previous methods. Extensive experimental results on four different datasets demonstrate our method's effectiveness over different state-of-the-art baselines, which require more parameters for training than our proposed architecture. Our qualitative analysis showcases the high quality of the generative model induced by our k-DVAE. Future research can extend our work by utilizing a graph-embedding similarity objective or adding a discriminator network to further regularize the posterior." } ]
2020
null
SP:b33ac0129381deaa5375b1f6b06b70d58f16a5a9
[ "The authors analyzed how attention and embeddings of Transformers trained on protein sequence correlate with protein properties such as pairwise contacts, binding sites, and post-translational modifications. The paper extends existing papers such as Rives 2020 ‘Biological structure and function emerge…’ by showing that layers learn more and more complex protein features with increasing layer depth and by proposing new visualization techniques. The paper is mostly clearly written while the methodological contributions are incremental. The evaluation needs to be strengthened." ]
Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. In this work, we demonstrate a set of methods for analyzing protein Transformer models through the lens of attention. We show that attention: (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. We find this behavior to be consistent across three Transformer architectures (BERT, ALBERT, XLNet) and two distinct protein datasets. We also present a three-dimensional visualization of the interaction between attention and protein structure. Code for visualization and analysis is available at https://github.com/salesforce/provis.
[ { "affiliations": [], "name": "Jesse Vig" }, { "affiliations": [], "name": "Ali Madani" }, { "affiliations": [], "name": "Lav R. Varshney" }, { "affiliations": [], "name": "Caiming Xiong" }, { "affiliations": [], "name": "Richard Socher" }, { "affiliations": [], "name": "Nazneen Fatema Rajani" } ]
[ { "authors": [ "Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg" ], "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "venue": "[cs.CL].,", "year": 2016 }, { "authors": [ "Ethan C Alley", "Grigory Khimulya", "Surojit Biswas", "Mohammed AlQuraishi", "George M Church" ], "title": "Unified rational protein engineering with sequence-based deep representation learning", "venue": "Nature Methods,", "year": 2019 }, { "authors": [ "Mohammed AlQuraishi" ], "title": "ProteinNet: a standardized data set for machine learning of protein structure", "venue": "BMC Bioinformatics,", "year": 2019 }, { "authors": [ "Ehsaneddin Asgari", "Mohammad RK Mofrad" ], "title": "Continuous distributed representation of biological sequences for deep proteomics and genomics", "venue": "PLOS One,", "year": 2015 }, { "authors": [ "Tristan Bepler", "Bonnie Berger" ], "title": "Learning protein sequence embeddings using information from structure", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Helen M Berman", "John Westbrook", "Zukang Feng", "Gary Gilliland", "Talapady N Bhat", "Helge Weissig", "Ilya N Shindyalov", "Philip E Bourne" ], "title": "The protein data bank", "venue": "Nucleic Acids Research,", "year": 2000 }, { "authors": [ "Surojit Biswas", "Grigory Khimulya", "Ethan C. Alley", "Kevin M. Esvelt", "George M. Church" ], "title": "Low-n protein engineering with data-efficient deep learning", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "Ashraf Brik", "Chi-Huey Wong" ], "title": "HIV-1 protease: Mechanism and drug discovery", "venue": "Organic & Biomolecular Chemistry,", "year": 2003 }, { "authors": [ "Gino Brunner", "Yang Liu", "Damian Pascual", "Oliver Richter", "Massimiliano Ciaramita", "Roger Wattenhofer" ], "title": "On identifiability in Transformers", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D. Manning" ], "title": "What does BERT look at? An analysis of BERT’s attention", "venue": "BlackBoxNLP@ACL,", "year": 2019 }, { "authors": [ "Alexis Conneau", "German Kruszewski", "Guillaume Lample", "Loïc Barrault", "Marco Baroni" ], "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Sara El-Gebali", "Jaina Mistry", "Alex Bateman", "Sean R. Eddy", "Aurélien Luciani", "Simon C. Potter", "Matloob Qureshi", "Lorna J. Richardson", "Gustavo A. Salazar", "Alfredo Smart", "Erik L.L. Sonnhammer", "Layla Hirsh", "Lisanna Paladin", "Damiano Piovesan", "Silvio C.E. Tosatto", "Robert D. Finn. The Pfam protein families database in" ], "title": "Nucleic Acids Research, 47(D1):D427–D432, January 2019a", "venue": "doi: 10.1093/nar/gky995.", "year": 2019 }, { "authors": [ "Sara El-Gebali", "Jaina Mistry", "Alex Bateman", "Sean R Eddy", "Aurélien Luciani", "Simon C Potter", "Matloob Qureshi", "Lorna J Richardson", "Gustavo A Salazar", "Alfredo Smart", "Erik L L Sonnhammer", "Layla Hirsh", "Lisanna Paladin", "Damiano Piovesan", "Silvio C E Tosatto", "Robert D Finn. 
The Pfam protein families database in" ], "title": "Nucleic Acids Research, 47(D1):D427–D432, 2019b", "venue": "ISSN 0305-1048. doi: 10.1093/nar/gky995.", "year": 2019 }, { "authors": [ "Ahmed Elnaggar", "Michael Heinzinger", "Christian Dallago", "Ghalia Rihawi", "Yu Wang", "Llion Jones", "Tom Gibbs", "Tamas Feher", "Christoph Angerer", "Martin Steinegger", "Debsindhu Bhowmik", "Burkhard Rost" ], "title": "ProtTrans: Towards cracking the language of life’s code through self-supervised deep learning and high performance computing", "venue": "arXiv preprint arXiv:2007.06225,", "year": 2020 }, { "authors": [ "Kawin Ethayarajh" ], "title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Allyson Ettinger" ], "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Naomi K Fox", "Steven E Brenner", "John-Marc Chandonia" ], "title": "SCOPe: Structural classification of proteins—extended, integrating scop and astral data and classification of new structures", "venue": "Nucleic Acids Research,", "year": 2013 }, { "authors": [ "Yoav Goldberg" ], "title": "Assessing BERT’s syntactic abilities", "venue": "arXiv preprint arXiv:1901.05287,", "year": 2019 }, { "authors": [ "Christopher Grimsley", "Elijah Mayfield", "Julia R.S" ], "title": "Bursten. Why attention is not explanation: Surgical intervention and causal reasoning about neural models", "venue": "In Proceedings of The 12th Language Resources and Evaluation Conference,", "year": 2020 }, { "authors": [ "S Henikoff", "J G Henikoff" ], "title": "Amino acid substitution matrices from protein blocks", "venue": "Proceedings of the National Academy of Sciences,", "year": 1992 }, { "authors": [ "John Hewitt", "Christopher D Manning" ], "title": "A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Benjamin Hoover", "Hendrik Strobelt", "Sebastian Gehrmann" ], "title": "exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations,", "year": 2020 }, { "authors": [ "Phu Mon Htut", "Jason Phang", "Shikha Bordia", "Samuel R Bowman" ], "title": "Do attention heads in BERT track syntactic dependencies", "venue": null, "year": 1911 }, { "authors": [ "Sarthak Jain", "Byron C. Wallace" ], "title": "Attention is not Explanation", "venue": "In Proceedings of the", "year": 2019 }, { "authors": [ "Ganesh Jawahar", "Benoît Sagot", "Djamé Seddah" ], "title": "What does BERT learn about the structure of language? 
In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019", "venue": "URL https://hal.inria.fr/hal-02131630", "year": 2019 }, { "authors": [ "Akira Kinjo", "Haruki Nakamura" ], "title": "Comprehensive structural classification of ligand-binding motifs in proteins", "venue": null, "year": 2009 }, { "authors": [ "Michael Schantz Klausen", "Martin Closter Jespersen", "Henrik Nielsen", "Kamilla Kjaergaard Jensen", "Vanessa Isabell Jurtz", "Casper Kaae Soenderby", "Morten Otto Alexander Sommer", "Ole Winther", "Morten Nielsen", "Bent Petersen" ], "title": "NetSurfP-2.0: Improved prediction of protein structural features by integrated deep learning. Proteins: Structure, Function, and Bioinformatics, 2019", "venue": null, "year": 2019 }, { "authors": [ "Olga Kovaleva", "Alexey Romanov", "Anna Rogers", "Anna Rumshisky" ], "title": "Revealing the dark secrets of BERT", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Keita Kurita", "Nidhi Vyas", "Ayush Pareek", "Alan W Black", "Yulia Tsvetkov" ], "title": "Measuring bias in contextualized word representations", "venue": "In Proceedings of the First Workshop on Gender Bias in Natural Language Processing,", "year": 2019 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Juyong Lee", "Janez Konc", "Dusanka Janezic", "Bernard Brooks" ], "title": "Global organization of a binding site network gives insight into evolution and structure-function relationships of proteins", "venue": "Sci Rep,", "year": 2017 }, { "authors": [ "Yongjie Lin", "Yi Chern Tan", "Robert Frank" ], "title": "Open sesame: Getting inside BERT’s linguistic knowledge", "venue": "In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2019 }, { "authors": [ "Nelson F. Liu", "Matt Gardner", "Yonatan Belinkov", "Matthew E. Peters", "Noah A. Smith" ], "title": "Linguistic knowledge and transferability of contextual representations", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ali Madani", "Bryan McCann", "Nikhil Naik", "Nitish Shirish Keskar", "Namrata Anand", "Raphael R. Eguchi", "Po-Ssu Huang", "Richard Socher" ], "title": "Progen: Language modeling for protein generation", "venue": null, "year": 2004 }, { "authors": [ "Timothee Mickus", "Mathieu Constant", "Denis Paperno", "Kees Van Deemter" ], "title": "What do you mean, BERT? 
Assessing BERT as a Distributional Semantics Model", "venue": "Proceedings of the Society for Computation in Linguistics,", "year": 2020 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Pooya Moradi", "Nishant Kambhatla", "Anoop Sarkar" ], "title": "Interrogating the explanatory power of attention in neural machine translation", "venue": "In Proceedings of the 3rd Workshop on Neural Generation and Translation,", "year": 2019 }, { "authors": [ "John Moult", "Krzysztof Fidelis", "Andriy Kryshtafovych", "Torsten Schwede", "Anna Tramontano" ], "title": "Critical assessment of methods of protein structure prediction (CASP)-Round XII", "venue": "Proteins: Structure, Function, and Bioinformatics,", "year": 2018 }, { "authors": [ "Hai Nguyen", "David A Case", "Alexander S Rose" ], "title": "NGLview–interactive molecular graphics for Jupyter notebooks", "venue": "Bioinformatics, 34(7):1241–1242,", "year": 2017 }, { "authors": [ "Timothy Niven", "Hung-Yu Kao" ], "title": "Probing neural network comprehension of natural language arguments", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Josh Payne", "Mario Srouji", "Dian Ang Yap", "Vineet Kosaraju" ], "title": "Bert learns (and teaches) chemistry", "venue": "arXiv preprint arXiv:2007.16012,", "year": 2020 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Luke Zettlemoyer", "Wen-tau Yih" ], "title": "Dissecting contextual word embeddings: Architecture and representation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Danish Pruthi", "Mansi Gupta", "Bhuwan Dhingra", "Graham Neubig", "Zachary C. Lipton" ], "title": "Learning to deceive with attention-based explanations", "venue": "In Annual Conference of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Alessandro Raganato", "Jörg Tiedemann" ], "title": "An analysis of encoder representations in Transformerbased machine translation", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Roshan Rao", "Nicholas Bhattacharya", "Neil Thomas", "Yan Duan", "Xi Chen", "John Canny", "Pieter Abbeel", "Yun S Song" ], "title": "Evaluating protein transfer learning with TAPE", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Emily Reif", "Ann Yuan", "Martin Wattenberg", "Fernanda B Viegas", "Andy Coenen", "Adam Pearce", "Been Kim" ], "title": "Visualizing and measuring the geometry of BERT", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam J Riesselman", "Jung-Eun Shin", "Aaron W Kollasch", "Conor McMahon", "Elana Simon", "Chris Sander", "Aashish Manglik", "Andrew C Kruse", "Debora S Marks" ], "title": "Accelerating protein design using autoregressive generative models", "venue": null, "year": 2019 }, { "authors": [ "Alexander Rives", "Siddharth Goyal", "Joshua Meier", "Demi Guo", "Myle Ott", "C Lawrence Zitnick", "Jerry Ma", "Rob Fergus" ], "title": "Biological structure and function emerge from scaling unsupervised learning to 250 million protein", "venue": "sequences. 
bioRxiv,", "year": 2019 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in BERTology: What we know about how BERT works", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Nathan J Rollins", "Kelly P Brock", "Frank J Poelwijk", "Michael A Stiffler", "Nicholas P Gauthier", "Chris Sander", "Debora S Marks" ], "title": "Inferring protein 3D structure from deep mutation scans", "venue": "Nature Genetics,", "year": 2019 }, { "authors": [ "Alexander S. Rose", "Peter W. Hildebrand" ], "title": "NGL Viewer: a web application for molecular visualization", "venue": "Nucleic Acids Research, 43(W1):W576–W579,", "year": 2015 }, { "authors": [ "Alexander S Rose", "Anthony R Bradley", "Yana Valasatava", "Jose M Duarte", "Andreas Prlić", "Peter W Rose" ], "title": "NGL viewer: web-based molecular graphics for large complexes", "venue": "doi: 10.1093/bioinformatics/bty419. URL https: //doi.org/10.1093/bioinformatics/bty419", "year": 2018 }, { "authors": [ "Charles Rubin", "Ora Rosen" ], "title": "Protein phosphorylation", "venue": "Annual Review of Biochemistry,", "year": 1975 }, { "authors": [ "Philippe Schwaller", "Benjamin Hoover", "Jean-Louis Reymond", "Hendrik Strobelt", "Teodoro Laino" ], "title": "Unsupervised attention-guided atom-mapping", "venue": "ChemRxiv,", "year": 2020 }, { "authors": [ "Sofia Serrano", "Noah A. Smith" ], "title": "Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2931–2951, Florence, Italy, July 2019", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P19-1282. URL https: //www.aclweb.org/anthology/P19-1282", "year": 2019 }, { "authors": [ "Martin Steinegger", "Johannes Söding" ], "title": "Clustering huge protein sequence sets in linear time", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Baris E. Suzek", "Yuqi Wang", "Hongzhan Huang", "Peter B. McGarvey", "Cathy H. Wu", "the UniProt Consortium" ], "title": "UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches", "venue": "Bioinformatics, 31(6):926–932,", "year": 2014 }, { "authors": [ "Yi Chern Tan", "L. Elisa Celis" ], "title": "Assessing social and intersectional biases in contextualized word representations", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ian Tenney", "Dipanjan Das", "Ellie Pavlick" ], "title": "BERT rediscovers the classical NLP pipeline", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Shikhar Vashishth", "Shyam Upadhyay", "Gaurav Singh Tomar", "Manaal Faruqui" ], "title": "Attention interpretability across NLP tasks", "venue": "arXiv preprint arXiv:1909.11218,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sara Veldhoen", "Dieuwke Hupkes", "Willem H. 
Zuidema" ], "title": "Diagnostic classifiers revealing how neural networks process hierarchical structure", "venue": "In CoCo@NIPS,", "year": 2016 }, { "authors": [ "Jesse Vig" ], "title": "A multiscale visualization of attention in the Transformer model", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations,", "year": 2019 }, { "authors": [ "Jesse Vig", "Yonatan Belinkov" ], "title": "Analyzing the structure of attention in a Transformer language model", "venue": "In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2019 }, { "authors": [ "Jesse Vig", "Sebastian Gehrmann", "Yonatan Belinkov", "Sharon Qian", "Daniel Nevo", "Yaron Singer", "Stuart Shieber" ], "title": "Investigating gender bias in language models using causal mediation analysis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Gregor Wiedemann", "Steffen Remus", "Avi Chawla", "Chris Biemann" ], "title": "Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings", "venue": null, "year": 1909 }, { "authors": [ "Sarah Wiegreffe", "Yuval Pinter" ], "title": "Attention is not not explanation", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "XLNet: Generalized autoregressive pretraining for language understanding", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ruiqi Zhong", "Steven Shao", "Kathleen McKeown" ], "title": "Fine-grained sentiment analysis with faithful attention", "venue": "arXiv preprint arXiv:1908.06870,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The study of proteins, the fundamental macromolecules governing biology and life itself, has led to remarkable advances in understanding human health and the development of disease therapies. The decreasing cost of sequencing technology has enabled vast databases of naturally occurring proteins (El-Gebali et al., 2019a), which are rich in information for developing powerful machine learning models of protein sequences. For example, sequence models leveraging principles of co-evolution, whether modeling pairwise or higher-order interactions, have enabled prediction of structure or function (Rollins et al., 2019).\nProteins, as a sequence of amino acids, can be viewed precisely as a language and therefore modeled using neural architectures developed for natural language. In particular, the Transformer (Vaswani et al., 2017), which has revolutionized unsupervised learning for text, shows promise for similar impact on protein sequence modeling. However, the strong performance of the Transformer comes at the cost of interpretability, and this lack of transparency can hide underlying problems such as model bias and spurious correlations (Niven & Kao, 2019; Tan & Celis, 2019; Kurita et al., 2019). In response, much NLP research now focuses on interpreting the Transformer, e.g., the subspecialty of “BERTology” (Rogers et al., 2020), which specifically studies the BERT model (Devlin et al., 2019).\nIn this work, we adapt and extend this line of interpretability research to protein sequences. We analyze Transformer protein models through the lens of attention, and present a set of interpretability methods that capture the unique functional and structural characteristics of proteins. We also compare the knowledge encoded in attention weights to that captured by hidden-state representations. Finally, we present a visualization of attention contextualized within three-dimensional protein structure.\nOur analysis reveals that attention captures high-level structural properties of proteins, connecting amino acids that are spatially close in three-dimensional structure, but apart in the underlying sequence (Figure 1a). We also find that attention targets binding sites, a key functional component of proteins (Figure 1b). Further, we show how attention is consistent with a classic measure of similarity between amino acids—the substitution matrix. Finally, we demonstrate that attention captures progressively higher-level representations of structure and function with increasing layer depth.\n(a) Attention in head 12-4, which targets amino acid pairs that are close in physical space (see inset subsequence 117D-157I) but lie apart in the sequence. Example is a de novo designed TIMbarrel (5BVL) with characteristic symmetry.\n(b) Attention in head 7-1, which targets binding sites, a key functional component of proteins. Example is HIV-1 protease (7HVP). The primary location receiving attention is 27G, a binding site for protease inhibitor small-molecule drugs.\nFigure 1: Examples of how specialized attention heads in a Transformer recover protein structure and function, based solely on language model pre-training. Orange lines depict attention between amino acids (line width proportional to attention weight; values below 0.1 hidden). Heads were selected based on correlation with ground-truth annotations of contact maps and binding sites. 
Visualizations based on the NGL Viewer (Rose et al., 2018; Rose & Hildebrand, 2015; Nguyen et al., 2017).\nIn contrast to NLP, which aims to automate a capability that humans already have—understanding natural language—protein modeling also seeks to shed light on biological processes that are not fully understood. Thus we also discuss how interpretability can aid scientific discovery." }, { "heading": "2 BACKGROUND: PROTEINS", "text": "In this section we provide background on the biological concepts discussed in later sections.\nAmino acids. Just as language is composed of words from a shared lexicon, every protein sequence is formed from a vocabulary of amino acids, of which 20 are commonly observed. Amino acids may be denoted by their full name (e.g., Proline), a 3-letter abbreviation (Pro), or a single-letter code (P).\nSubstitution matrix. While word synonyms are encoded in a thesaurus, proteins that are similar in structure or function are captured in a substitution matrix, which scores pairs of amino acids on how readily they may be substituted for one another while maintaining protein viability. One common substitution matrix is BLOSUM (Henikoff & Henikoff, 1992), which is derived from co-occurrence statistics of amino acids in aligned protein sequences.\nProtein structure. Though a protein may be abstracted as a sequence of amino acids, it represents a physical entity with a well-defined three-dimensional structure (Figure 1). Secondary structure describes the local segments of proteins; two commonly observed types are the alpha helix and beta sheet. Tertiary structure encompasses the large-scale formations that determine the overall shape and function of the protein. One way to characterize tertiary structure is by a contact map, which describes the pairs of amino acids that are in contact (within 8 angstroms of one another) in the folded protein structure but lie apart (by at least 6 positions) in the underlying sequence (Rao et al., 2019).\nBinding sites. Proteins may also be characterized by their functional properties. Binding sites are protein regions that bind with other molecules (proteins, natural ligands, and small-molecule drugs) to carry out a specific function. For example, the HIV-1 protease is an enzyme responsible for a critical process in replication of HIV (Brik & Wong, 2003). It has a binding site, shown in Figure 1b, that is a target for drug development to ensure inhibition.\nPost-translational modifications. After a protein is translated from RNA, it may undergo additional modifications, e.g. phosphorylation, which play a key role in protein structure and function." }, { "heading": "3 METHODOLOGY", "text": "Model. We demonstrate our interpretability methods on five Transformer models that were pretrained through language modeling of amino acid sequences. We primarily focus on the BERT-Base model from TAPE (Rao et al., 2019), which was pretrained on Pfam, a dataset of 31M protein sequences (El-Gebali et al., 2019b). We refer to this model as TapeBert. We also analyze 4 pre-trained Transformer models from ProtTrans (Elnaggar et al., 2020): ProtBert and ProtBert-BFD, which are 30-layer, 16-head BERT models; ProtAlbert, a 12-layer, 64-head ALBERT (Lan et al., 2020) model; and ProtXLNet, a 30-layer, 16-head XLNet (Yang et al., 2019) model. ProtBert-BFD was pretrained on BFD (Steinegger & Söding, 2018), a dataset of 2.1B protein sequences, while the other ProtTrans models were pretrained on UniRef100 (Suzek et al., 2014), which includes 216M protein sequences.
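As a practical aside, attention weights of the kind analyzed below can be extracted from such pre-trained protein language models with standard tooling. The sketch below assumes the ProtBert checkpoint is published on the HuggingFace hub under the name Rostlab/prot_bert and that residues are passed space-separated; both details are assumptions about the released checkpoints, not claims from this paper.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)
model.eval()

seq = "M K T A Y I A K Q R"  # toy amino acid sequence, one letter per residue
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, n_heads, seq_len, seq_len); each row of attention sums to 1.
attn = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, len, len)
print(attn.shape)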
A summary of these 5 models is presented in Appendix A.1.\nHere we present an overview of BERT, with additional details on all models in Appendix A.2. BERT inputs a sequence of amino acids x = (x1, . . . , xn) and applies a series of encoders. Each encoder layer ℓ outputs a sequence of continuous embeddings (h_1^{(ℓ)}, . . . , h_n^{(ℓ)}) using a multi-headed attention mechanism. Each attention head in a layer produces a set of attention weights α for an input, where αi,j > 0 is the attention from token i to token j, such that ∑_j αi,j = 1. Intuitively, attention weights define the influence of every token on the next layer's representation for the current token. We denote a particular head by <layer>-<head_index>, e.g. head 3-7 for the 3rd layer's 7th head.\nAttention analysis. We analyze how attention aligns with various protein properties. For properties of token pairs, e.g. contact maps, we define an indicator function f(i, j) that returns 1 if the property is present in token pair (i, j) (e.g., if amino acids i and j are in contact), and 0 otherwise. We then compute the proportion of high-attention token pairs (αi,j > θ) where the property is present, aggregated over a dataset X:\npα(f) = [∑_{x∈X} ∑_{i=1}^{|x|} ∑_{j=1}^{|x|} f(i, j) · 1{αi,j > θ}] / [∑_{x∈X} ∑_{i=1}^{|x|} ∑_{j=1}^{|x|} 1{αi,j > θ}], (1)\nwhere θ is a threshold to select for high-confidence attention weights. We also present an alternative, continuous version of this metric in Appendix B.1.\nFor properties of individual tokens, e.g. binding sites, we define f(i, j) to return 1 if the property is present in token j (e.g. if j is a binding site). In this case, pα(f) equals the proportion of attention that is directed to the property (e.g. the proportion of attention focused on binding sites).\nWhen applying these metrics, we include two types of checks to ensure that the results are not due to chance. First, we test that the proportion of attention that aligns with particular properties is significantly higher than the background frequency of these properties, taking into account the Bonferroni correction for multiple hypotheses corresponding to multiple attention heads. Second, we compare the results to a null model, which is an instance of the model with randomly shuffled attention weights. We describe these methods in detail in Appendix B.2.\nProbing tasks. We also perform probing tasks on the model, which test the knowledge contained in model representations by using them as inputs to a classifier that predicts a property of interest (Veldhoen et al., 2016; Conneau et al., 2018; Adi et al., 2016). The performance of the probing classifier serves as a measure of the knowledge of the property that is encoded in the representation. We run both embedding probes, which assess the knowledge encoded in the output embeddings of each layer, and attention probes (Reif et al., 2019; Clark et al., 2019), which measure the knowledge contained in the attention weights for pairwise features. Details are provided in Appendix B.3.\nDatasets. For our analyses of amino acids and contact maps, we use a curated dataset from TAPE based on ProteinNet (AlQuraishi, 2019; Fox et al., 2013; Berman et al., 2000; Moult et al., 2018), which contains amino acid sequences annotated with spatial coordinates (used for the contact map analysis). For the analysis of secondary structure and binding sites, we use the Secondary Structure dataset (Rao et al., 2019; Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019) from TAPE.
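For concreteness, the indicator version of pα(f) in Eq. (1) can be computed directly from per-sequence attention matrices; a minimal numpy sketch with illustrative names follows.

import numpy as np

def p_alpha(attn_list, f_list, theta=0.3):
    # Eq. (1): fraction of high-confidence attention (alpha_ij > theta) that
    # falls on token pairs where the property f holds. attn_list holds one
    # (len, len) attention matrix per sequence for a fixed head; f_list holds
    # the matching boolean indicator matrices f(i, j).
    hits, total = 0, 0
    for attn, f in zip(attn_list, f_list):
        high = attn > theta
        hits += np.sum(high & f)
        total += np.sum(high)
    return hits / total if total else float("nan")

For a token-level property such as binding sites, the per-token indicator b is broadcast over rows, e.g. f = np.broadcast_to(b[None, :], attn.shape), so that pα(f) becomes the proportion of high-confidence attention directed to the property.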
We employed a taxonomy of secondary structure with three categories: Helix, Strand, and Turn/Bend, with the last two belonging to the higher-level beta sheet category (Sec. 2). We used this taxonomy to study how the model understands structurally distinct regions of beta sheets. We obtained token-level binding site and protein modification labels from the Protein Data Bank (Berman et al., 2000). For analyzing attention, we used a random subset of 5000 sequences from the training split of the respective datasets (note that none of the aforementioned annotations were used in model training). For the diagnostic classifier, we used the respective training splits for training and the validation splits for evaluation. See Appendix B.4 for additional details.\nExperimental details. We exclude attention to the [SEP] delimiter token, as it has been shown to be a "no-op" attention token (Clark et al., 2019), as well as attention to the [CLS] token, which is not explicitly used in language modeling. We only include results for attention heads where at least 100 high-confidence attention arcs are available for analysis. We set the attention threshold θ to 0.3 to select for high-confidence attention while retaining sufficient data for analysis. We truncate all protein sequences to a length of 512 to reduce memory requirements.1\n1 94% of sequences had length less than 512. Experiments were performed on a single 16GB Tesla V-100 GPU.\nWe note that all of the above analyses are purely associative and do not attempt to establish a causal link between attention and model behavior (Vig et al., 2020; Grimsley et al., 2020), nor to explain model predictions (Jain & Wallace, 2019; Wiegreffe & Pinter, 2019)." }, { "heading": "4 WHAT DOES ATTENTION UNDERSTAND ABOUT PROTEINS?", "text": "" }, { "heading": "4.1 PROTEIN STRUCTURE", "text": "Here we explore the relationship between attention and tertiary structure, as characterized by contact maps (see Section 2). Secondary structure results are included in Appendix C.1.\nAttention aligns strongly with contact maps in the deepest layers. Figure 2 shows how attention aligns with contact maps across the heads of the five models evaluated2, based on the metric defined in Equation 1. The most aligned heads are found in the deepest layers and focus up to 44.7% (TapeBert), 55.7% (ProtAlbert), 58.5% (ProtBert), 63.2% (ProtBert-BFD), and 44.5% (ProtXLNet) of attention on contacts, whereas the background frequency of contacts among all amino acid pairs in the dataset is 1.3%. Figure 1a shows an example of the induced attention from the top head in TapeBert. We note that the model with the single most aligned head—ProtBert-BFD—is the largest model (same size as ProtBert) at 420M parameters (Appendix A.1) and it was also the only model pre-trained on the largest dataset, BFD. It's possible that both factors helped the model learn more structurally-aligned attention patterns. Statistical significance tests and null models are reported in Appendix C.2.\n2 Heads with fewer than 100 high-confidence attention weights across the dataset are grayed out.\nConsidering the models were trained on language modeling tasks without any spatial information, the presence of these structurally-aware attention heads is intriguing. One possible reason for this emergent behavior is that amino acids in contact are more likely to biochemically interact with one another, creating statistical dependencies between them.
By focusing attention on the contacts of a masked position, the language models may acquire valuable context for token prediction.\nWhile there seems to be a strong correlation between the attention head output and classically-defined contacts, there are also differences. The models may have learned differing contextualized or nuanced formulations that describe amino acid interactions. These learned interactions could then be used for further discovery and investigation or repurposed for prediction tasks, similar to how principles of coevolution enabled a powerful representation for structure prediction." }, { "heading": "4.2 BINDING SITES AND POST-TRANSLATIONAL MODIFICATIONS", "text": "We also analyze how attention interacts with binding sites and post-translational modifications (PTMs), which both play a key role in protein function.\nAttention targets binding sites throughout most layers of the models. Figure 3 shows the proportion of attention focused on binding sites (Eq. 1) across the heads of the 5 models studied. Attention to binding sites is most pronounced in the ProtAlbert model (Figure 3b), which has 22 heads that focus over 50% of attention on binding sites, whereas the background frequency of binding sites in the dataset is 4.8%. The three BERT models (Figures 3a, 3c, and 3d) also attend strongly to binding sites, with attention heads focusing up to 48.2%, 50.7%, and 45.6% of attention on binding sites, respectively. Figure 1b visualizes the attention in one strongly-aligned head from the TapeBert model. Statistical significance tests and a comparison to a null model are provided in Appendix C.3.\nProtXLNet (Figure 3e) also targets binding sites, but not as strongly as the other models: the most aligned head focuses 15.1% of attention on binding sites, and the average head directs just 6.2% of attention to binding sites, compared to 13.2%, 19.8%, 16.0%, and 15.1% for the first four models in Figure 3. It’s unclear whether this disparity is due to differences in architectures or pre-training objectives; for example, ProtXLNet uses a bidirectional auto-regressive pretraining method (see Appendix A.2), whereas the other 4 models all use masked language modeling objectives.\nFigure 4 (panels: Helix, Turn/Bend, Strand, Binding Site, Contact; x-axis: layers 1-12; y-axis: percentage of attention): Each plot shows the percentage of attention focused on the given property, averaged over all heads within each layer. The plots, sorted by center of gravity (red dashed line), show that heads in deeper layers focus relatively more attention on binding sites and contacts, whereas attention toward specific secondary structures is more even across layers.\nFigure 5 (panels: Helix, Turn/Bend, Strand, Binding Site, Contact; x-axis: layers 1-12): Performance of probing classifiers by layer, sorted by task order in Figure 4. The embedding probes (orange) quantify the knowledge of the given property that is encoded in each layer’s output embeddings. The attention probe (blue) shows the amount of information encoded in attention weights for the (pairwise) contact feature. Additional details are provided in Appendix B.3.\nWhy does attention target binding sites? In contrast to contact maps, which reveal relationships within proteins, binding sites describe how a protein interacts with other molecules.
These external interactions ultimately define the high-level function of the protein, and thus binding sites remain conserved even when the sequence as a whole evolves (Kinjo & Nakamura, 2009). Further, structural motifs in binding sites are mainly restricted to specific families or superfamilies of proteins (Kinjo & Nakamura, 2009), and binding sites can reveal evolutionary relationships among proteins (Lee et al., 2017). Thus binding sites may provide the model with a high-level characterization of the protein that is robust to individual sequence variation. By attending to these regions, the model can leverage this higher-level context when predicting masked tokens throughout the sequence.\nAttention targets PTMs in a small number of heads. A small number of heads in each model concentrate their attention very strongly on amino acids associated with post-translational modifications (PTMs). For example, Head 11-6 in TapeBert focused 64% of attention on PTM positions, though these occur at only 0.8% of sequence positions in the dataset (this head also targets binding sites, Fig. 3a, but at a percentage of 49%). Similar to our discussion on binding sites, PTMs are critical to protein function (Rubin & Rosen, 1975) and thereby are likely to exhibit behavior that is conserved across the sequence space. See Appendix C.4 for full results." }, { "heading": "4.3 CROSS-LAYER ANALYSIS", "text": "We analyze how attention captures properties of varying complexity across different layers of TapeBert, and compare this to a probing analysis of embeddings and attention weights (see Section 3).\nAttention targets higher-level properties in deeper layers. As shown in Figure 4, deeper layers focus relatively more attention on binding sites and contacts (high-level concept), whereas secondary structure (low- to mid-level concept) is targeted more evenly across layers. The probing analysis of attention (Figure 5, blue) similarly shows that knowledge of contact maps (a pairwise feature) is encoded in attention weights primarily in the last 1-2 layers. These results are consistent with prior work in NLP that suggests deeper layers in text-based Transformers attend to more complex properties (Vig & Belinkov, 2019) and encode higher-level representations (Raganato & Tiedemann, 2018; Peters et al., 2018; Tenney et al., 2019; Jawahar et al., 2019).\nThe embedding probes (Figure 5, orange) also show that the model first builds representations of local secondary structure in lower layers before fully encoding binding sites and contact maps in deeper layers. However, this analysis also reveals stark differences in how knowledge of contact maps is accrued in embeddings, which accumulate this knowledge gradually over many layers, compared to attention weights, which acquire this knowledge only in the final layers in this case. This example points out limitations of common layerwise probing approaches that only consider embeddings, which, intuitively, represent what the model knows but not necessarily how it operationalizes that knowledge." }, { "heading": "4.4 AMINO ACIDS AND THE SUBSTITUTION MATRIX", "text": "In addition to high-level structural and functional properties, we also performed a fine-grained analysis of the interaction between attention and particular amino acids.\nAttention heads specialize in particular amino acids. We computed the proportion of TapeBert’s attention to each of the 20 standard amino acids, as shown in Figure 6 for two example amino acids.
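A minimal sketch of how the per-head proportion shown in Figure 6 could be computed for one sequence is given below, using the attention-weighted variant of the metric (Appendix B.1) for simplicity. The array layout, names, and toy values are our own assumptions, not the authors’ implementation.

```python
import numpy as np

def attention_to_amino_acid(attn, sequence, target="P"):
    """Share of one head's attention directed to one amino acid type
    for a single sequence: f(i, j) = 1 iff residue j is the target type.

    attn: [n, n] attention weights for one head
    sequence: string of one-letter amino acid codes
    """
    is_target = np.array([c == target for c in sequence], dtype=float)
    num = float((attn * is_target[None, :]).sum())  # attention into target residues
    den = float(attn.sum())                         # all attention (= n when rows sum to 1)
    return num / den

# Toy example: a 5-residue sequence with one proline and a maximally
# diffuse head; the result equals the background frequency of P (0.2).
seq = "MKPLV"
attn = np.full((5, 5), 0.2)
print(attention_to_amino_acid(attn, seq, "P"))  # 0.2
```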
For 16 of the amino acids, there exists an attention head that focuses over 25% of attention on that amino acid, significantly greater than the background frequencies of the corresponding amino acids, which range from 1.3% to 9.4%. Similar behavior was observed for ProtBert, ProtBert-BFD, ProtAlbert, and ProtXLNet models, with 17, 15, 16, and 18 amino acids, respectively, receiving greater than 25% of the attention from at least one attention head. Detailed results for TapeBert including statistical significance tests and comparison to a null model are presented in Appendix C.5.\nAttention is consistent with substitution relationships. A natural follow-up question from the above analysis is whether each head has “memorized” specific amino acids to target, or whether it has actually learned meaningful properties that correlate with particular amino acids. To test the latter hypothesis, we analyze whether amino acids with similar structural and functional properties are attended to similarly across heads. Specifically, for all pairs of distinct amino acids, we compute the Pearson correlation between their distributions of attention across heads, as shown in Figure 7 (left) for TapeBert. For example, the entry for Pro (P) and Phe (F) is the correlation between the two heatmaps in Figure 6. We compare these scores to the BLOSUM62 substitution scores (Sec. 2) in Figure 7 (right), and find a Pearson correlation of 0.73, suggesting that attention is moderately consistent with substitution relationships. Similar correlations are observed for the ProtTrans models: 0.68 (ProtBert), 0.75 (ProtBert-BFD), 0.60 (ProtAlbert), and 0.71 (ProtXLNet). As a baseline, the randomized versions of these models (Appendix B.2) yielded correlations of -0.02 (TapeBert), 0.02 (ProtBert), -0.03 (ProtBert-BFD), -0.05 (ProtAlbert), and 0.21 (ProtXLNet)." }, { "heading": "5 RELATED WORK", "text": "" }, { "heading": "5.1 PROTEIN LANGUAGE MODELS", "text": "Deep neural networks for protein language modeling have received broad interest. Early work applied the Skip-gram model (Mikolov et al., 2013) to construct continuous embeddings from protein sequences (Asgari & Mofrad, 2015). Sequence-only language models have since been trained through autoregressive or autoencoding self-supervision objectives for discriminative and generative tasks, for example, using LSTMs or Transformer-based architectures (Alley et al., 2019; Bepler & Berger, 2019; Rao et al., 2019; Rives et al., 2019). TAPE created a benchmark of five tasks to assess protein sequence models, and ProtTrans also released several large-scale pretrained protein Transformer models (Elnaggar et al., 2020). Riesselman et al. (2019); Madani et al. (2020) trained autoregressive generative models to predict the functional effect of mutations and generate natural-like proteins.\nFrom an interpretability perspective, Rives et al. (2019) showed that the output embeddings from a pretrained Transformer can recapitulate structural and functional properties of proteins through learned linear transformations. Various works have analyzed output embeddings of protein models through dimensionality reduction techniques such as PCA or t-SNE (Elnaggar et al., 2020; Biswas et al., 2020). In our work, we take an interpretability-first perspective to focus on the internal model representations, specifically attention and intermediate hidden states, across multiple protein language models.
We also explore novel biological properties including binding sites and post-translational modifications." }, { "heading": "5.2 INTERPRETING MODELS IN NLP", "text": "The rise of deep neural networks in ML has also led to much work on interpreting these so-called black-box models. This section reviews the NLP interpretability literature on the Transformer model, which is directly comparable to our work on interpreting Transformer models of protein sequences.\nInterpreting Transformers. The Transformer is a neural architecture that uses attention to accelerate learning (Vaswani et al., 2017). In NLP, transformers are the backbone of state-of-the-art pre-trained language models such as BERT (Devlin et al., 2019). BERTology focuses on interpreting what the BERT model learns about language using a suite of probes and interventions (Rogers et al., 2020). So-called diagnostic classifiers are used to interpret the outputs from BERT’s layers (Veldhoen et al., 2016). At a high level, mechanisms for interpreting BERT can be placed into three main categories: interpreting the learned embeddings (Ethayarajh, 2019; Wiedemann et al., 2019; Mickus et al., 2020; Adi et al., 2016; Conneau et al., 2018), BERT’s learned knowledge of syntax (Lin et al., 2019; Liu et al., 2019; Tenney et al., 2019; Htut et al., 2019; Hewitt & Manning, 2019; Goldberg, 2019), and BERT’s learned knowledge of semantics (Tenney et al., 2019; Ettinger, 2020).\nInterpreting attention specifically. Interpreting attention on textual sequences is a well-established area of research (Wiegreffe & Pinter, 2019; Zhong et al., 2019; Brunner et al., 2020; Hewitt & Manning, 2019). Past work has shown that attention correlates with syntactic and semantic relationships in natural language in some cases (Clark et al., 2019; Vig & Belinkov, 2019; Htut et al., 2019). Depending on the task and model architecture, attention may have less or more explanatory power for model predictions (Jain & Wallace, 2019; Serrano & Smith, 2019; Pruthi et al., 2020; Moradi et al., 2019; Vashishth et al., 2019). Visualization techniques have been used to convey the structure and properties of attention in Transformers (Vaswani et al., 2017; Kovaleva et al., 2019; Hoover et al., 2020; Vig, 2019). Recent work has begun to analyze attention in Transformer models outside of the domain of natural language (Schwaller et al., 2020; Payne et al., 2020).\nOur work extends these methods to protein sequence models by considering particular biophysical properties and relationships. We also present a joint cross-layer probing analysis of attention weights and layer embeddings. While past work in NLP has analyzed attention and embeddings across layers, we believe we are the first to do so in any domain using a single, unified metric, which enables us to
While this paper focuses on reconciling attention with known properties of proteins, one might also leverage attention to uncover novel relationships or more nuanced forms of existing measures such as contact maps, as discussed in Section 4.1. In this way, language models have the potential to serve as tools for scientific discovery. But in order for learned representations to be accessible to domain experts, they must be presented in an appropriate context to facilitate discovery. Visualizing attention in the context of protein structure (Figure 1) is one attempt to do so. We believe there is the potential to develop such contextual visualizations of learned representations in a range of scientific domains." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Xi Victoria Lin, Stephan Zheng, Melvin Gruesbeck, and the anonymous reviewers for their valuable feedback." }, { "heading": "A MODEL OVERVIEW", "text": "" }, { "heading": "A.1 PRE-TRAINED MODELS", "text": "Table 1 provides an overview of the five pre-trained Transformer models studied in this work. The models originate from the TAPE and ProtTrans repositories, spanning three model architectures: BERT, ALBERT, and XLNet." }, { "heading": "A.2 BERT TRANSFORMER ARCHITECTURE", "text": "Stacked Encoder: BERT uses a stacked-encoder architecture, which inputs a sequence of tokens x = (x1, ..., xn) and applies position and token embeddings followed by a series of encoder layers. Each layer applies multi-head self-attention (see below) in combination with a feedforward network, layer normalization, and residual connections. The output of each layer ` is a sequence of contextualized embeddings (h(`)1 , . . . ,h (`) n ).\nSelf-Attention: Given an input x = (x1, . . . , xn), the self-attention mechanism assigns to each token pair i, j an attention weight αi,j > 0 where ∑ j αi,j = 1. Attention in BERT is bidirectional. In the multi-layer, multi-head setting, α is specific to a layer and head. The BERT-Base model has 12 layers and 12 heads. Each attention head learns a distinct set of weights, resulting in 12 x 12 = 144 distinct attention mechanisms in this case.\nThe attention weights αi,j are computed from the scaled dot-product of the query vector of i and the key vector of j, followed by a softmax operation. The attention weights are then used to produce a weighted sum of value vectors:\nAttention(Q,K, V ) = softmax ( QKT√ dk ) V (2)\nusing query matrix Q, key matrix K, and value matrix V , where dk is the dimension of K. In a multi-head setting, the queries, keys, and values are linearly projected h times, and the attention operation is performed in parallel for each representation, with the results concatenated." }, { "heading": "A.3 OTHER TRANSFORMER VARIANTS", "text": "ALBERT: The architecture of ALBERT differs from BERT in two ways: (1) It shares parameters across layers, unlike BERT which learns distinct parameters for every layer and (2) It uses factorized embeddings, which allows the input token embeddings to be of a different (smaller) size than the hidden states. The original version of ALBERT designed for text also employed a sentence-order prediction pretraining task, but this was not used on the models studied in this paper.\nXLNet: Instead of the masked-language modeling pretraining objective use for BERT, XLNet uses a bidirectional auto-regressive pretraining method that considers all possible orderings of the input factorization. 
The architecture also adds a segment recurrence mechanism to process long sequences, as well as a relative rather than absolute encoding scheme." }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS", "text": "" }, { "heading": "B.1 ALTERNATIVE ATTENTION AGREEMENT METRIC", "text": "Here we present an alternative formulation to Eq. 1 based on an attention-weighted average. We define an indicator function f(i, j) for property f that returns 1 if the property is present in token pair (i, j) (i.e., if amino acids i and j are in contact), and zero otherwise. We then compute the proportion of attention that matches with f over a dataset X as follows:\np_\alpha(f) = \frac{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} f(i, j)\,\alpha_{i,j}(x)}{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} \alpha_{i,j}(x)} \quad (3)\nwhere α_{i,j}(x) denotes the attention from i to j for input sequence x." }, { "heading": "B.2 STATISTICAL SIGNIFICANCE TESTING AND NULL MODELS", "text": "We perform statistical significance tests to determine whether any results based on the metric defined in Equation 1 are due to chance. Given a property f, as defined in Section 3, we perform a two-proportion z-test comparing (1) the proportion of high-confidence attention arcs (α_{i,j} > θ) for which f(i, j) = 1, and (2) the proportion of all possible pairs i, j for which f(i, j) = 1. Note that the first proportion is exactly the metric p_α(f) defined in Equation 1 (e.g. the proportion of attention aligned with contact maps). The second proportion is simply the background frequency of the property (e.g. the background frequency of contacts). Since we extract the maximum scores over all of the heads in the model, we treat this as a case of multiple hypothesis testing and apply the Bonferroni correction, with the number of hypotheses m equal to the number of attention heads.\nAs an additional check that the results did not occur by chance, we also report results on baseline (null) models. We initially considered using two forms of null models: (1) a model with randomly initialized weights, and (2) a model trained on randomly shuffled sequences. However, in both cases, none of the sequences in the dataset yielded attention weights greater than the attention threshold θ. This suggests that the mere existence of the high-confidence attention weights used in the analysis could not have occurred by chance, but it does not shed light on the particular analyses performed. Therefore, we implemented an alternative randomization scheme in which we randomly shuffle attention weights from the original models as a post-processing step. Specifically, we permute the sequence of attention weights from each token for every attention head. To illustrate, let’s say that the original model produced attention weights of (0.3, 0.2, 0.1, 0.4, 0.0) from position i in protein sequence x from head h, where |x| = 5. In the null model, the attention weights from position i in sequence x in head h would be a random permutation of those weights, e.g., (0.2, 0.0, 0.4, 0.3, 0.1). Note that these are still valid attention weights as they would sum to 1 (since the original weights would sum to 1 by definition). We report results using this form of baseline model." }, { "heading": "B.3 PROBING METHODOLOGY", "text": "Embedding probe. We probe the embedding vectors output from each layer using a linear probing classifier. For token-level probing tasks (binding sites, secondary structure) we feed each token’s output vector directly to the classifier.
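(A brief aside, returning to the null model of B.2 above: the attention-shuffling scheme can be sketched in a few lines. This is our own illustration, not the authors’ released code; names and the random seed are assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_null_attention(attn):
    """Null model from B.2: independently permute the attention weights
    emitted from each token, for one head on one sequence.

    attn: [n, n] array where attn[i] are the weights from token i
    (each row sums to 1, so each permuted row remains a valid distribution).
    """
    null = np.empty_like(attn)
    for i in range(attn.shape[0]):
        null[i] = rng.permutation(attn[i])
    return null

# The worked example from the text: weights from one position.
row = np.array([0.3, 0.2, 0.1, 0.4, 0.0])
shuffled = rng.permutation(row)
print(shuffled, shuffled.sum())  # a permutation of row; still sums to 1.0
```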
For token-pair probing tasks (contact map) we construct a pairwise feature vector by concatenating the elementwise differences and products of the two tokens’ output vectors, following the TAPE implementation (https://github.com/songlab-cal/tape).\nWe use task-specific evaluation metrics for the probing classifier: for secondary structure prediction, we measure F1 score; for contact prediction, we measure precision@L/5, where L is the length of the protein sequence, following standard practice (Moult et al., 2018); for binding site prediction, we measure precision@L/20, since approximately one in twenty amino acids in each sequence is a binding site (4.8% in the dataset).\nAttention probe. Just as the attention weight α_{i,j} is defined for a pair of amino acids (i, j), so is the contact property f(i, j), which returns true if amino acids i and j are in contact. Treating the attention weight as a feature of a token-pair (i, j), we can train a probing classifier that predicts the contact property based on this feature, thereby quantifying the attention mechanism’s knowledge of that property. In our multi-head setting, we treat the attention weights across all heads in a given layer as a feature vector, and use a probing classifier to assess the knowledge of a given property in the attention weights across the entire layer. As with the embedding probe, we measure performance of the probing classifier using precision@L/5, where L is the length of the protein sequence, following standard practice for contact prediction." }, { "heading": "B.4 DATASETS", "text": "We used two protein sequence datasets from the TAPE repository for the analysis: the ProteinNet dataset (AlQuraishi, 2019; Fox et al., 2013; Berman et al., 2000; Moult et al., 2018) and the Secondary Structure dataset (Rao et al., 2019; Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019). The former was used for analysis of amino acids and contact maps, and the latter was used for analysis of secondary structure. We additionally created a third dataset for binding site and post-translational modification (PTM) analysis from the Secondary Structure dataset, which was augmented with binding site and PTM annotations obtained from the Protein Data Bank’s Web API (http://www.rcsb.org/pdb/software/rest.do). We excluded any sequences for which annotations were not available. The resulting dataset sizes are shown in Table 2. For the analysis of attention, a random subset of 5000 sequences from the training split of each dataset was used, as the analysis was purely evaluative. For training and evaluating the diagnostic classifier, the full training and validation splits were used." }, { "heading": "C ADDITIONAL RESULTS OF ATTENTION ANALYSIS", "text": "" }, { "heading": "C.1 SECONDARY STRUCTURE", "text": "" }, { "heading": "C.2 CONTACT MAPS: STATISTICAL SIGNIFICANCE TESTS AND NULL MODELS", "text": "" }, { "heading": "C.3 BINDING SITES: STATISTICAL SIGNIFICANCE TESTS AND NULL MODEL", "text": "" }, { "heading": "C.4 POST-TRANSLATIONAL MODIFICATIONS (PTMS)", "text": "" }, { "heading": "C.5 AMINO ACIDS", "text": "" } ]
2021
BERTOLOGY MEETS BIOLOGY: INTERPRETING ATTENTION IN PROTEIN LANGUAGE MODELS
SP:128ae7bc4a53360e9492783e7430da8c778f3d66
[ "The paper proposes a novel way to automatically tune 3D ConvNet hyper-parameters (learning rate, input clip length, sampling way). This is achieved by decomposing the optimization path into several states and the state transition is triggered when the knee-point on the performance-epoch curve is met. Extensive experiments are conducted on popular video benchmarks and show that the optimization planning is effective to improve the accuracy and requires less time time compared to the hand-tuned procedure." ]
3D Convolutional Neural Networks (3D ConvNets) have been regarded as a powerful class of models for video recognition. Nevertheless, it is not trivial to learn a 3D ConvNets optimally due to the high complexity and the many options of the training scheme. The most common hand-tuning process starts by learning 3D ConvNets on short video clips and is then followed by learning long-term temporal dependency on lengthy clips, while gradually decaying the learning rate from high to low as training progresses. The fact that such a process comes along with several heuristic settings motivates the study of seeking an optimal “path” to automate the entire training. In this paper, we decompose the path into a series of training “states” and specify the hyper-parameters, e.g., learning rate and the length of input clips, in each state. The estimation of the knee point on the performance-epoch curve triggers the transition from one state to another. We perform dynamic programming over all the candidate states to plan the optimal permutation of states, i.e., the optimization path. Furthermore, we devise a new 3D ConvNets with a unique design of a dual-head classifier to improve the spatial and temporal discrimination. Extensive experiments conducted on seven public video recognition benchmarks demonstrate the advantages of our proposal. With the optimization planning, our 3D ConvNets achieves superior results when compared to the state-of-the-art video recognition approaches. More remarkably, we obtain the top-1 accuracy of 82.5% and 84.3% on the large-scale Kinetics-400 and Kinetics-600 datasets, respectively.
[]
[ { "authors": [ "Mary Ann Branch", "Thomas F. Coleman", "Yuying Li" ], "title": "A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems", "venue": "SIAM Journal on Scientific Computing,", "year": 1999 }, { "authors": [ "Fabian Caba Heilbron", "Victor Escorcia", "Bernard Ghanem", "Juan Carlos Niebles" ], "title": "Activitynet: A large-scale video benchmark for human activity understanding", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Yue Cao", "Jiarui Xu", "Stephen Lin", "Fangyun Wei", "Han Hu" ], "title": "Gcnet: Non-local networks meet squeeze-excitation networks and beyond", "venue": "In ICCV Workshop,", "year": 2019 }, { "authors": [ "João Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": null, "year": 2017 }, { "authors": [ "João Carreira", "Eric Noland", "Andras Banki-Horvath", "Chloe Hillier", "Andrew Zisserman" ], "title": "A short note about kinetics-600", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "Jerry Chee", "Panos Toulis" ], "title": "Convergence diagnostics for stochastic gradient descent with constant learning rate", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Quanfu Fan", "Chun-Fu Chen", "Hilde Kuehne", "Marco Pistoia", "David Cox" ], "title": "More is less: Learning efficient video representations by big-little network and depthwise temporal aggregation", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Christoph Feichtenhofer" ], "title": "X3d: Expanding architectures for efficient video recognition", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Andrew Zisserman" ], "title": "Convolutional two-stream network fusion for video action recognition", "venue": null, "year": 2016 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Rong Ge", "Sham M. Kakade", "Rahul Kidambi", "Praneeth Netrapalli" ], "title": "The step decay schedule: A near optimal, geometrically decaying learning rate", "venue": "procedure. ArXiv,", "year": 2019 }, { "authors": [ "Raghav Goyal", "Samira Ebrahimi Kahou", "Vincent Michalski", "Joanna Materzynska", "Susanne Westphal", "Heuna Kim", "Valentin Haenel", "Ingo Fründ", "Peter Yianilos", "Moritz Mueller-Freitag", "Florian Hoppe", "Christian Thurau", "Ingo Bax", "Roland Memisevic" ], "title": "The “something something” video database for learning and evaluating visual common sense", "venue": null, "year": 2017 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": null, "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Shuiwang Ji", "Wei Xu", "Ming Yang", "Kai Yu" ], "title": "3d convolutional neural networks for human action recognition", "venue": "IEEE Trans. 
on PAMI,", "year": 2013 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In ACM MM,", "year": 2014 }, { "authors": [ "Boyuan Jiang", "Mengmeng Wang", "Weihao Gan", "Wei Wu", "Junjie Yan" ], "title": "Stm: Spatiotemporal and motion encoding for action recognition", "venue": null, "year": 2019 }, { "authors": [ "Andrej Karpathy", "George Toderici", "Sanketh Shetty", "Thomas Leung", "Rahul Sukthankar", "Li FeiFei" ], "title": "Large-scale video classification with convolutional neural networks", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "H. Kuehne", "H. Jhuang", "E. Garrote", "T. Poggio", "T. Serre" ], "title": "HMDB: a large video database for human motion recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Hunter Lang", "Lin Xiao", "Pengchuan Zhang" ], "title": "Using statistics to automate stochastic optimization", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Tsm: Temporal shift module for efficient video understanding", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Chenxu Luo", "Alan L. Yuille" ], "title": "Grouped spatial-temporal aggregation for efficient action recognition", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Brais Martı́nez", "Davide Modolo", "Yuanjun Xiong", "Joseph Tighe" ], "title": "Action recognition with spatialtemporal discriminative filter banks", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Andrei Liviu Nicolicioiu", "Iulia Duta", "Marius Leordeanu" ], "title": "Recurrent space-time graph neural networks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Zhaofan Qiu", "Ting Yao", "Tao Mei" ], "title": "Learning spatio-temporal representation with pseudo-3d residual networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Zhaofan Qiu", "Ting Yao", "Chong-Wah Ngo", "Xinmei Tian", "Tao Mei" ], "title": "Learning spatio-temporal representation with local and global diffusion", "venue": null, "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "UCF101: A dataset of 101 human action classes from videos in the wild", "venue": null, "year": 2012 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": null, "year": 2018 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video classification with channelseparated convolutional networks", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Gül Varol", "Ivan Laptev", "Cordelia Schmid" ], "title": "Long-term temporal convolutions for action recognition", "venue": "IEEE Trans. 
on PAMI,", "year": 2018 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks: Towards good practices for deep action recognition", "venue": null, "year": 2016 }, { "authors": [ "Limin Wang", "Weiqi Li", "Wen Li", "Luc Van Gool" ], "title": "Appearance-and-relation networks for video classification", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks for action recognition in videos", "venue": "IEEE Trans. on PAMI,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Videos as space-time region graphs", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Chao-Yuan Wu", "Ross Girshick", "Kaiming He", "Christoph Feichtenhofer", "Philipp Krahenbuhl" ], "title": "A multigrid method for efficiently training video models", "venue": null, "year": 2020 }, { "authors": [ "Wenhao Wu", "Dongliang He", "Xiao Tan", "Shifeng Chen", "Shilei Wen" ], "title": "Multi-agent reinforcement learning based frame sampling for effective untrimmed video recognition", "venue": null, "year": 2019 }, { "authors": [ "Saining Xie", "Chen Sun", "Jonathan Huang", "Zhuowen Tu", "Kevin Murphy" ], "title": "Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification", "venue": null, "year": 2018 }, { "authors": [ "Sho Yaida" ], "title": "Fluctuation-dissipation relations for stochastic gradient descent", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "C Zach", "T Pock", "H Bischof" ], "title": "A duality based approach for realtime tv-l1 optical flow", "venue": "Pattern Recognition,", "year": 2007 }, { "authors": [ "Chen Zhu", "Xiao Tan", "Feng Zhou", "Xiao Liu", "Kaiyu Yue", "Errui Ding", "Yi Ma" ], "title": "Fine-grained video categorization with redundancy reduction attention", "venue": null, "year": 2018 }, { "authors": [ "Xinqi Zhu", "Chang Xu", "Langwen Hui", "Cewu Lu", "Dacheng Tao" ], "title": "Approximated bilinear modules for temporal modeling", "venue": "In ICCV,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The recent advances in 3D Convolutional Neural Networks (3D ConvNets) have successfully pushed the limits and improved the state-of-the-art of video recognition. For instance, an ensemble of LGD3D networks (Qiu et al., 2019) achieves 17.88% in terms of average error in trimmed video classification task of ActivityNet Challenge 2019, which is dramatically lower than the error (29.3%) attained by the former I3D networks (Carreira & Zisserman, 2017). The result basically indicates the advantage and great potential of 3D ConvNets for improving the performance of video recognition. Despite these impressive progresses, learning effective 3D ConvNets for video recognition remains challenging, due to large variations and complexities of video content. Existing works on 3D ConvNets (Tran et al., 2015; Carreira & Zisserman, 2017; Tran et al., 2018; Wang et al., 2018c; Feichtenhofer et al., 2019; Qiu et al., 2017; 2019) predominately focus on the designs of network architectures but seldom explore how to train a 3D ConvNets in a principled way.\nThe difficulty in training 3D ConvNets originates from the high flexibility of the training scheme. Compared to the training of 2D ConvNets (Ge et al., 2019; Lang et al., 2019; Yaida, 2019), the involvement of temporal dimension in 3D ConvNets brings two new problems of how many frames should be sampled from the video and how to sample these frames. First, the length of video clip is a tradeoff to control the balance between training efficiency and long-range temporal modeling for learning 3D ConvNets. On one hand, training with short clips (16 frames) (Tran et al., 2015; Qiu et al., 2017) generally leads to fast convergence with large mini-batch, and also alleviates the overfitting problem through data augmentation brought by sampling short clips. On the other hand, recent works (Varol et al., 2018; Wang et al., 2018c; Qiu et al., 2019) have proven better ability in capturing long-range dependency when training with long clips (over 100 frames) at the expense of training time. The second issue is the sampling strategy. Uniform sampling (Fan et al., 2019; Jiang et al., 2019; Martı́nez et al., 2019) offers the network a fast-forward overview of the entire video,\nwhile consecutive sampling (Tran et al., 2015; Qiu et al., 2017; 2019; Varol et al., 2018; Wang et al., 2018c) can better capture the spatio-temporal relation across frames. Given these complex choices of training scheme, learning a powerful 3D ConvNets often requires significant engineering efforts of human experts to determine the optimal strategy on each dataset. That motivates us to automate the design of training strategy for 3D ConvNets.\nIn the paper, we propose optimization planning mechanism which seeks the optimal training strategy of 3D ConvNets adaptively. To this end, our optimization planning studies three problems: 1) choose between consecutive or uniform sampling; 2) when to increase the length of input clip; 3) when to decrease the learning rate. Specifically, we decompose the training process into several training states. Each state is assigned with the fixed hyper-parameters, including sampling strategy, length of input clip and learning rate. The transition between states represents the change of hyper-parameters during training. Therefore, the training process can be decided by the permutation of different states and the number of epochs for each state. 
Here, we build a candidate transition graph to define the valid transitions between states. The search for the best optimization strategy is then equivalent to seeking the optimal path from the initial state to the final state on the graph, which can be solved by a dynamic programming algorithm. In order to determine the best epoch for each state in such a process, we propose a knee point estimation method based on fitting the performance-epoch curve. In general, our optimization planning is viewed as a training scheme controller and is readily applicable to training other neural networks in stages with multiple hyper-parameters.\nTo the best of our knowledge, our work is the first to address the issue of optimization planning for 3D ConvNets training. The issue also leads to an elegant view of how the order and epochs for different hyper-parameters should be planned adaptively. We uniquely formulate the problem as seeking an optimal training path and devise a new 3D ConvNets with a dual-head classifier. Extensive experiments on seven datasets demonstrate the effectiveness of our proposal, and with optimization planning, our 3D ConvNets achieves superior results to several state-of-the-art techniques." }, { "heading": "2 RELATED WORK", "text": "The early works using Convolutional Neural Networks for video recognition are mostly extended from 2D ConvNets for image classification (Karpathy et al., 2014; Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016; Wang et al., 2016). These approaches often treat a video as a sequence of frames or optical flow images, and the pixel-level temporal evolution across consecutive frames is seldom explored. To alleviate this issue, the 3D ConvNets in Ji et al. (2013) is devised to directly learn spatio-temporal representation from a short video clip via 3D convolution. Tran et al. design a widely-adopted 3D ConvNets in Tran et al. (2015), namely C3D, consisting of 3D convolutions and 3D poolings optimized on the large-scale Sports1M (Karpathy et al., 2014) dataset. Despite having encouraging performances, the training of 3D ConvNets is computationally expensive and the model size grows massively. Later in Qiu et al. (2017); Tran et al. (2018); Xie et al. (2018), the decomposed 3D convolution is proposed to simulate one 3D convolution with one 2D spatial convolution plus one 1D temporal convolution. Recently, more advanced techniques have been presented for 3D ConvNets, including inflating 2D convolutions (Carreira & Zisserman, 2017), non-local pooling (Wang et al., 2018c) and local-and-global diffusion (Qiu et al., 2019).\nOur work expands the research horizons of 3D ConvNets and focuses on improving 3D ConvNets training by adaptively planning the optimization process. The related works for 2D ConvNets training (Chee & Toulis, 2018; Lang et al., 2019; Yaida, 2019) automate the training strategy by only changing the learning rate adaptively. Our problem is much more challenging, especially when the temporal dimension is additionally considered and involved in the training scheme of 3D ConvNets. For enhancing 3D ConvNets training, the recent works (Wang et al., 2018c; Qiu et al., 2019) first train 3D ConvNets with short input clips and then fine-tune the network with lengthy clips, which balances training efficiency and long-range temporal modeling. The multigrid method (Wu et al., 2020) further cyclically changes the spatial resolution and temporal duration of input clips for a more efficient optimization of 3D ConvNets.
The research in this paper contributes by studying not only training 3D ConvNets with multiple lengths of input clips, but also adaptively scheduling the change of input clip length through optimization planning." }, { "heading": "3 OPTIMIZATION PLANNING", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "The goal of optimization planning is to automate the learning strategy of 3D ConvNets. Formally, the optimization process of 3D ConvNets can be represented as an optimization path P = 〈S_0, S_1, ..., S_N〉, which consists of one initial state S_0 and N intermediate states. Each intermediate state is assigned fixed hyper-parameters, and the training is performed with these N different settings one by one. The training epoch on each setting is decided by T = {t_1, t_2, ..., t_N}, in which t_i denotes the number of epochs when moving from S_{i−1} to S_i. The hyper-parameters include the sampling strategy ∈ {cs, us}, the length of the input clip ∈ {l_1, l_2, ..., l_{N_l}} and the learning rate ∈ {r_1, r_2, ..., r_{N_r}}, where cs and us denote consecutive sampling and uniform sampling, respectively. In this case, there are 2 × N_l × N_r valid types of training states. The objective function of optimization planning is to seek the optimal strategy {P, T} by maximizing the performance of the final state S_N:\n\underset{\mathcal{P},\, \mathcal{T}}{\text{maximize}} \;\; \mathcal{V}(S_N), \quad (1)\nwhere V(·) is the target performance, i.e., mean accuracy on the validation set in our case." }, { "heading": "3.2 OPTIMIZATION PATH", "text": "To plan the optimal permutation of training states, we first choose a final state S_N, which usually has a low learning rate and a lengthy input clip. Then, the problem of seeking an optimal optimization path to S_N is naturally decomposed into the subproblems of finding the optimization path to an intermediate state S_i and the state transition from S_i to S_N. As such, the problem can be solved by dynamic programming. Specifically, the solution of the optimization path P(S_N) can be given in a recursive form:\n\mathcal{P}(S_N) = \langle \mathcal{P}(S_{i^*}), S_N \rangle, \quad i^* = \arg\max_{i} \{\mathcal{V}(S_i \to S_N)\}. \quad (2)\nWhen executing the transfer from the state S_i to the state S_N, we fine-tune the 3D ConvNets at the state S_i by using the hyper-parameters at the state S_N. We then evaluate the fine-tuned model on the validation set to measure the priority of this transition, i.e., V(S_i → S_N). We choose the state S_{i^*}, which achieves the highest priority of transition to the state S_N, as the preceding state of S_N. In other words, the optimal path for S_N derives from the best-performing preceding state S_{i^*}. Here, we propose to pre-define all the valid transitions in a directed acyclic graph and determine the best optimization path of each state one by one in topological order. Figure 1(a) shows one example of the pre-defined transition graph. In the example, we set the number of candidate input clip lengths N_l = 3 and the number of candidate learning rates N_r = 3. Hence, there are 2 × 3 × 3 = 18 candidate states. Then, the possible transitions, i.e., the connections between states, are determined by the following principles:\n(1) The transitions between states with different sampling strategies are forbidden. We choose S_9 and S_18 as the final states for consecutive sampling and uniform sampling, respectively. (2) The training only starts from a high learning rate and short input clips.
(3) An intermediate state can only be transferred to a new state in which either the learning rate is decreased or the length of the input clip is increased.\nPlease note that some very specific learning rate strategies, e.g., schedules with restart or warmup, show that increasing the learning rate properly may benefit training. Nevertheless, there is still no clear principle of when to increase the learning rate, and thus it is very difficult to automate these schedules. In the works on adaptively changing the learning rate for 2D ConvNets training (Ge et al., 2019; Lang et al., 2019; Yaida, 2019), such cyclic schedules are also not taken into account. As a result, we only consider the schedule of decreasing the learning rate in the transition graph.\nThese principles can simplify the transition graph and reduce the time cost when solving Equ.(2). We take this graph as the basic transition graph. Furthermore, we also build an extended transition graph by enabling simultaneously increasing the input clip length and decreasing the learning rate, as shown in Figure 1(b). In such a graph, the training strategies are more flexible." }, { "heading": "3.3 STATE TRANSITION", "text": "One state transition from S_i to S_j is defined as a training step that starts to optimize the model at S_i by using the hyper-parameters at S_j. Then the question is when this training step completes. Here, we derive the spirit from SASA (Lang et al., 2019), which trains the network with constant hyper-parameters until it reaches a stationary condition. SASA adaptively evaluates the convergence of stochastic gradient descent by Yaida’s condition (Yaida, 2019) during training. However, in practice, the thoroughly optimized network does not always perform well on the validation set due to the overfitting problem. Therefore, we take both convergence and overfitting into account, and propose to estimate the knee point on the performance-epoch curve evaluated on the validation set, which performs more steadily across various datasets. Specifically, we measure the accuracy y_t by evaluating the intermediate model after the t-th training epoch on the validation set. To estimate the knee point given a limited number of observations y_t, we fit the curve by a continuous function f_α(t) as\ny_t = f_\alpha(t) + z_t, \quad z_t \sim \mathcal{N}(0, \sigma^2), \quad (3)\nwhere z_t is a stochastic factor following a normal distribution, and α denotes the parameters of the function f. Here, we choose f_α(t) as a unimodal function to ensure that there is only one maximum value. The curve fitting can be formulated as the optimization of the parameters α by minimizing the distance between the observed performance and the estimated performance:\n\underset{\alpha}{\text{minimize}} \;\; \sum_{t'=0}^{t} \| y_{t'} - f_\alpha(t') \|^2, \quad \text{s.t. } f_\alpha(t) \text{ is unimodal.} \quad (4)\nWe exploit the Trust Region Reflective algorithm (Branch et al., 1999) to solve this problem, and the algorithm is robust to arbitrary forms of the function f_α(t). To adaptively stop the iteration, we estimate the knee point epoch t* by solving Equ.(4) after each training epoch. If the current epoch t is larger than t* + T, we will stop the iteration and choose t* as the best epoch number. Here, T is a delay parameter which allows the model to have a T-epoch attempt even if t > t*. We simply fix the delay parameter T to 10 in all the experiments.\nNext, the essential issue is the form of the fitting function f_α(t).
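Before turning to the concrete forms of f_α(t), the adaptive stopping rule itself can be sketched with SciPy. The fitting function is passed in abstractly here; the exponential-plus-quadratic instance, the initial values, and the bounds are our own illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_knee(epochs, accs, f, x0, bounds, horizon=1000):
    """Fit a unimodal f(t; params) to the observed accuracies with the
    Trust Region Reflective solver and return the knee-point epoch t*."""
    res = least_squares(lambda p: f(epochs, p) - accs, x0=x0,
                        bounds=bounds, method="trf")
    grid = np.arange(1, horizon + 1)
    return int(grid[np.argmax(f(grid, res.x))])

# One plausible instance (our assumption): exponential g plus quadratic h,
# f(t) = a * (1 - exp(-b * t)) - c * t**2 with a, b, c >= 0.
f_exp = lambda t, p: p[0] * (1.0 - np.exp(-p[1] * t)) - p[2] * t**2
x0 = [1.0, 0.1, 1e-6]
bounds = ([0, 0, 0], [np.inf, np.inf, np.inf])

# Inside the training loop, after validating epoch t (T = 10 in the paper):
#   t_star = fit_knee(np.array(epochs), np.array(accs), f_exp, x0, bounds)
#   if t > t_star + T: stop; t_star is taken as the best epoch number.
```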
We separate the function into two parts f_α(t) = g_α(t) + h_α(t), where g_α(t) is an increasing bounded function to simulate the convergence of the model, and h_α(t) is a concave function to model the influence of overfitting. Table 1 shows four examples of the fitting function f_α(t). In the four functions, we fix h_α(t) as a quadratic function and choose a power, multi-power, exponential and multi-exponential function as g_α(t), respectively. Please note that, for each function, some constraints are given to guarantee the properties of g_α(t) and h_α(t). We empirically validate the functions by pre-collecting 162 performance-epoch curves (Figure 2(a)) from the training processes of different networks on different datasets and employing the four functions to fit the curves by solving Equ.(4). Table 1 compares the average Root Mean Square Error (RMSE) and R-square when using each function. Figure 2(b) and Figure 2(c) further depict a fitting example in the context of model training from scratch and model fine-tuning, respectively. The general observation is that all four functions can nicely fit the performance-epoch curve and do not make a major difference in the final performance. Thus, we simply choose the best-performing exponential function in the rest of the paper." }, { "heading": "4 3D CONVNETS ARCHITECTURE", "text": "In this section, we present the proposed Dual-head Global-contextual Pseudo-3D (DG-P3D) network. An overview of the architecture is shown in Figure 3. In particular, the network originates from the residual network (He et al., 2016) and is further extended in a 3D manner with three designs, i.e., pseudo-3D convolution, global context and a dual-head classifier.\nPseudo-3D convolution. To achieve a good tradeoff between accuracy and computational cost, pseudo-3D convolution is proposed in Qiu et al. (2017), which decomposes 3D learning into 2D convolutions in the spatial space plus 1D operations in the temporal dimension. A similar idea of decomposing 3D convolution is also presented in R(2+1)D (Tran et al., 2018) and S3D (Xie et al., 2018). To simplify the decomposition, in this paper, we only choose the P3D-A block with the highest performance in Qiu et al. (2017), which cascades the spatial convolution and temporal convolution in turn.\nGlobal context. The recent works on non-local networks (Wang et al., 2018c; Cao et al., 2019; Qiu et al., 2019) highlight the drawback of performing convolutions, in which each operation processes only a local window of neighboring pixels and lacks a holistic view of the field. To address this limitation, we choose a simple way to encapsulate global context, which learns the global residual from the globally-pooled representation and then broadcasts it to each position in the feature map.\nDual-head classifier. 3D ConvNets are expected to have both spatial and temporal discrimination. For example, the SlowFast network (Feichtenhofer et al., 2019) contains separate pathways for visual appearance and temporal dynamics, respectively. Here, we uniquely propose a simpler way that builds a dual-head classifier at the top of the network instead of the two-path structure in the SlowFast network. In between, the temporal head with a large temporal dimension focuses on modeling the temporal evolution, and the spatial head with a large spatial resolution emphasizes the spatial discrimination. The predictions from the two heads are linearly fused. As such, our design incurs less computation and is easier to implement."
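As an illustration of the dual-head design, a minimal PyTorch sketch is given below. This is our own guess at one plausible realization; the paper does not specify kernel sizes, channel counts, or fusion weights, so all of these are assumptions.

```python
import torch
import torch.nn as nn

class DualHeadClassifier(nn.Module):
    """Dual-head classifier on the final feature map x: [N, C, T, H, W].
    The temporal head models temporal evolution on a [N, C, T] signal;
    the spatial head models appearance on a [N, C, H, W] map.
    Predictions from the two heads are linearly fused.
    """
    def __init__(self, channels=2048, num_classes=400):
        super().__init__()
        self.temporal_conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fc_t = nn.Linear(channels, num_classes)
        self.fc_s = nn.Linear(channels, num_classes)

    def forward(self, x):
        # Temporal head: average out space, keep a large temporal dimension.
        xt = x.mean(dim=(3, 4))                                  # [N, C, T]
        logits_t = self.fc_t(self.temporal_conv(xt).relu().mean(dim=2))
        # Spatial head: average out time, keep a large spatial resolution.
        xs = x.mean(dim=2)                                       # [N, C, H, W]
        logits_s = self.fc_s(self.spatial_conv(xs).relu().mean(dim=(2, 3)))
        return 0.5 * (logits_t + logits_s)                       # linear fusion

head = DualHeadClassifier()
print(head(torch.randn(2, 2048, 8, 7, 7)).shape)  # torch.Size([2, 400])
```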
}, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DATASETS", "text": "The experiments are conducted on HMDB51, UCF101, ActivityNet, SS-V1/V2, Kinetics-400 and Kinetics-600 datasets. Table 2 details the information and settings on these datasets. The HMDB51\n(Kuehne et al., 2011), UCF101 (Soomro et al., 2012), Kinetics-400 (Carreira & Zisserman, 2017) and Kinetics-600 (Carreira et al., 2018) are the most popular video benchmarks for action recognition on trimmed video clips. The Something-Something V1 (SS-V1) dataset is firstly constructed in Goyal et al. (2017) to learn fine-grained human-object interactions, and then extended to Something-Something V2 (SS-V2) recently. The ActivityNet (Caba Heilbron et al., 2015) dataset is an untrimmed video benchmark for activity recognition. The latest released version of the dataset (v1.3) is exploited. In our experiments, we only use the video-level label of ActivityNet and disable the temporal annotations. Note that the labels for test sets are not publicly available, and thus the performances of ActivityNet, SS-V1, SS-V2, Kinetics-400 and Kinetics-600 are all reported on the validation set. For optimization planning, the original training set of each dataset is split into two parts for learning the network weights and validating the performance, respectively. We construct this internal validation set with the same size as the original validation/test set. Note that the original validation/test set is never exploited in the optimization planning." }, { "heading": "5.2 IMPLEMENTATION DETAILS", "text": "For optimization planning, we set the number of choices for both input clip length Nl and learning rate Nr as 3, and utilize the extended transition graph introduced in Section 3.2. The candidate values of input clip length {l1, l2, l3} and learning rate {r1, r2, r3} for each dataset are summarized in Table 2. Specifically, on SS-V1, SS-V2, Kinetics-400 and Kinetics-600 datasets, the base learning rate is set as 0.04 and the dropout ratio is fixed as 0.5. For HMDB51, UCF101 and ActivityNet, we set lower base learning rate and higher dropout ratio due to limited training samples. The maximum clip length is 64 for HMDB51 and UCF101, while increased to 128 for ActivityNet, Kinetics-400 and Kinetics-600. Considering that the video clips in SS-V1 and SS-V2 are usually shorter than 64 frames, we only use two settings, i.e., 16-frame and 32-frame, for the input clip.\nThe network training in this paper is implemented on Caffe (Jia et al., 2014) framework and the mini-batch stochastic gradient descent is employed to tune the network. The resolution of the input clip is fixed as 224× 224, which is randomly cropped from the video clip resized with the short size in [256, 340]. The clip is randomly flipped along horizontal direction for data augmentation except for SS-V1 and SS-V2 in view of the direction-related categories. Following the settings in Wang et al. (2018c); Qiu et al. (2019), for the network training with long clips (64-frame and 128-frame), we freeze the parameters of all Batch Normalization layers except for the first one since the batch size is too small for batch normalization.\nThere are two inference strategies for the evaluations. The first one roughly predicts the video label on a 224 × 224 single center crop from the centric one clip resized with the short size 256. This strategy is only used when planning the optimization for the purpose of efficiency. 
Once the optimization path is fixed, we train 3D ConvNets with the path and evaluate the learnt 3D ConvNets by using the second strategy, i.e., the three-crop strategy as in Feichtenhofer et al. (2019), which crops three 256 × 256 regions from each video clip. The video-level prediction score is achieved by averaging all scores from 10 uniformly sampled clips." }, { "heading": "5.3 EVALUATION OF OPTIMIZATION PLANNING", "text": "We first verify the effectiveness of our proposed optimization planning for 3D ConvNets and compare against hand-tuned strategies. To find the most powerful hand-tuned strategy, we capitalize on the popular practices in the literature, and grid-search the training settings through four dimensions, i.e., input length, learning rate decay, sampling strategy and training epochs. Specifically, for the input clip length, we follow the common training scheme that first learns the network with short clips and then fine-tunes the network on lengthy clips, and experiment with three strategies l_1 → l_3, l_2 → l_3 and l_1 → l_2 → l_3. For each input clip length, we train the network with the same number of epochs. For learning rate decay, we choose the two most widely utilized strategies, i.e., 3-step learning rate decay (Wang et al., 2018c; Qiu et al., 2019) and cosine decay (Feichtenhofer et al., 2019). The optimal training epoch for each strategy is determined by grid-searching from [128, 192, 256, 320, 384] epochs.\nTable 3 shows the comparisons between optimization planning and hand-tuned strategies with three architectures P3D, G-P3D and DG-P3D on the Kinetics-400 dataset. All the networks are derived from the ResNet-50 pre-trained on the ImageNet dataset. The P3D network extends the original ResNet-50 to a 3D network by utilizing pseudo-3D convolutions. G-P3D and DG-P3D further employ the global context and the global context plus dual-head classifier, respectively. Overall, the proposed optimization planning consistently leads to a performance boost against the best hand-tuning strategy on the three networks, by 1.4%, 0.9% and 0.9%, respectively. The results basically indicate the advantage of dynamically determining the training strategy. Although the number of epochs for each hand-tuned strategy is tuned by grid search, the strategy of optimization planning is more flexible and exhibits higher performance. Moreover, with the same optimization planning strategy, the DG-P3D network achieves a 1.2% improvement over G-P3D, which validates the proposed dual-head classifier.\nTaking our DG-P3D as the 3D ConvNets, Table 4 details the comparisons between optimization planning and the hand-tuned strategy across six different datasets. The accuracy of the hand-tuned strategy is reported on the best training scheme found by grid search on each dataset. Such a best hand-tuned strategy can be considered as a well-tuned DG-P3D model without optimization planning. The time cost of optimization planning comprises the training time of exploring all the possible transitions, and that of the hand-tuned strategy is measured by grid-searching the candidate training strategies. Compared to the hand-tuned strategy, optimization planning shows consistent improvements across different datasets, and requires much less time than the exhaustive grid search due to the adaptive determination of the training scheme. Figure 4 further depicts the optimal optimization paths on different datasets. An interesting observation is that SS-V1/2 tend to select uniform sampling while Kinetics-400 prefers consecutive sampling.
We speculate that this may be the result of the different emphases of the two sampling strategies. In general, the distinctive property of uniform sampling is that it captures the completeness of a video with only a small number of sampled frames. In contrast, consecutive sampling emphasizes the continuity in a video but may only focus on a part of the video content. The SS-V1/V2 datasets consist of fine-grained interactions, and the differentiation between these interactions relies more on the completeness of an action. For example, it is almost impossible to distinguish the videos of the category “Pushing something so that it falls off the table” from those of “Pushing something so that it almost falls off but doesn’t,” if only based on part of the video content. In other words, uniform sampling offers the completeness of a video and benefits the recognition on SS-V1/V2. In contrast, the videos in Kinetics-400 usually contain static scenes or slow motion. Hence, the completeness may not be essential in this case, but consecutive sampling encodes the continuous changes across frames and thus captures the spatio-temporal relation better." }, { "heading": "5.4 MORE EXPERIMENTAL ANALYSIS ON OPTIMIZATION PLANNING", "text": "Next, we analyze the impact of our optimization planning from two more perspectives: 1) the performance difference when using different optimization paths, and 2) the transfer of optimization paths across different 3D ConvNets. With regard to the former aspect, we experiment with some variant paths on UCF101, which are built by either inserting an additional state or skipping an intermediate state in our adopted optimization path. For fair comparisons, the numbers of epochs in these variant paths are re-determined by the algorithm in Section 3.3. The results indicate that inserting and skipping one state result in an accuracy decrease of 0.2%∼1.0% and 0.3%∼1.5%, respectively. For the latter, we conduct experiments by utilizing the optimal path found with DG-P3D on Kinetics-400 as the path for I3D (Carreira & Zisserman, 2017). Training I3D with this optimization path achieves an accuracy of 73.8% on Kinetics-400 with RGB input and leads to a 1.7% performance improvement over the original I3D model. The results again demonstrate the effectiveness of our optimization planning and basically validate the generalizability of the learnt strategy across different networks." }, { "heading": "5.5 COMPARISONS WITH STATE-OF-THE-ART", "text": "We compare with several state-of-the-art techniques on the HMDB51, UCF101 and ActivityNet datasets. The performance comparisons are summarized in Table 5. The backbone of DG-P3D is either ResNet-50 or ResNet-101 pre-trained on ImageNet. Please note that most recent works employ Kinetics-400 pre-training to improve the accuracy. Here, we also choose the two-step strategy that first trains DG-P3D on Kinetics-400 (K400) and then fine-tunes the network on the target dataset. Both steps are trained with optimization planning. Overall, DG-P3D achieves the highest performance on all three datasets, i.e., 78.8% on HMDB51, 97.8% on UCF101 and 86.8% on ActivityNet. In particular, DG-P3D outperforms the other 3D ConvNets of I3D, R(2+1)D, S3D-G and LGD-3D by 4.3%, 4.3%, 2.9% and 3.1% on HMDB51, respectively. The results again verify the merit of the learnt 3D ConvNets. For ActivityNet, most baselines utilize the temporal annotations to locate the foreground segments in the untrimmed videos.
In our experiments, we only use the video-level annotations, and our DG-P3D still surpasses the best competitor MARL by 1.1%.
Then, we turn to evaluate DG-P3D with optimization planning on four large-scale datasets, i.e., SS-V1, SS-V2, Kinetics-400 and Kinetics-600. The top-1 and top-5 accuracies are reported on the four datasets. For the Kinetics datasets, we additionally consider the flow modality for fair comparison with the baselines. The two-direction optical flow images are extracted by the TV-L1 algorithm (Zach et al., 2007) in this paper. To reduce the time cost, the best optimization path found on the RGB modality of Kinetics-400 is utilized as the path for the flow modality on Kinetics-400, and for both the RGB and flow modalities on Kinetics-600. The results are shown in Table 6 and Table 7. Specifically, DG-P3D achieves the best performance with a top-1 accuracy of 51.8% on SS-V1 and 64.5% on SS-V2. DG-P3D is superior to STM, which reports the best known results, by 1.1% and 0.3%, respectively. On Kinetics-400, with only RGB input, DG-P3D achieves 80.4% top-1 accuracy, which improves over the recent 3D ConvNets irCSN (Tran et al., 2019), X3D-XL (Feichtenhofer, 2020), LGD-3D (Qiu et al., 2019) and SlowFast (Feichtenhofer et al., 2019) by 1.4%, 1.3%, 1.0% and 0.6%, respectively. This accuracy is even higher than that of the two-stream I3D (Carreira & Zisserman, 2017), R(2+1)D (Tran et al., 2018) and S3D-G (Xie et al., 2018). When fusing the predictions from both modalities, the accuracy of DG-P3D is further improved to 82.5%. Similar performance trends are also observed on Kinetics-600. The two-stream DG-P3D achieves 84.3% top-1 accuracy, which leads the best competitor, two-stream LGD-3D, by 1.2%." }, { "heading": "6 CONCLUSION", "text": "We have presented optimization planning, which aims to automate the training scheme of 3D ConvNets. Particularly, a training process is decided by a sequence of training states, namely the optimization path, plus the number of training epochs for each state. We specify the hyper-parameters in each state, and the permutation of states determines the changes of hyper-parameters. Technically, we propose a dynamic programming method to seek the best optimization path in the pre-defined candidate transition graph, and each state transition is stopped adaptively by estimating the knee point on the performance-epoch curve. Furthermore, we devise a new 3D ConvNet, i.e., DG-P3D, with a unique design of the dual-head classifier. The results on seven video benchmarks, which are different in terms of data scale, target categories and video duration, validate our proposal. Notably, DG-P3D with optimization planning obtains superior performance on all seven datasets." }, { "heading": "A APPENDIX", "text": "The appendix contains: 1) the collection of the performance-epoch curves; 2) the comparisons of 3D ConvNet architectures; 3) visualization of more optimization paths found by optimization planning.
A.1 CURVE COLLECTION
To evaluate the functions for knee point estimation, we pre-collect 162 performance-epoch curves from the training processes of different networks on different datasets. Particularly, the P3D, G-P3D and DG-P3D networks are trained using hand-tuned strategies on six different datasets, i.e., HMDB51, UCF101, ActivityNet, SS-V1, SS-V2 and Kinetics-400.
To obtain the curves with different settings, for each dataset, the network is first trained with 8-frame clips using a 3-step learning rate strategy ([0.01, 0.001, 0.0001]), and then fine-tuned on 16-frame and 32-frame clips. Each step is trained for 50 epochs. Therefore, we collect 3 × 6 × 9 = 162 curves in total.
A.2 3D CONVNET ARCHITECTURES
Table 8 compares the computational cost of different network architectures. The number of floating-point operations (FLOPs) for one crop is given for each network. Overall, the P3D network used in this paper requires computation similar to LGD-3D. The global context and the dual-head classifier lead to 3.5% and 7.3% additional computation, respectively. Ultimately, the one-crop prediction of DG-P3D costs 218 GFLOPs, which is still lower than that of the SlowFast network. The results basically indicate that our DG-P3D is potentially more economical and effective.
A.3 VISUALIZATION OF MORE STRATEGIES
Figure 5 depicts the best optimization paths learnt by optimization planning on HMDB51, UCF101, ActivityNet, SS-V1, SS-V2 and Kinetics-400, respectively. We additionally show the best optimization paths on HMDB51 and UCF101 in Figure 5(g) and Figure 5(h), when taking Kinetics-400 for network pre-training. As expected, the training strategy predicted by optimization planning changes in response to different network initializations." } ]
2020
null
SP:9e4232a23a81fe31b824208547760a9906a05a4a
[ "This paper proposes strategies for learning the structure of multiple sets of data observed over a common set of variables which may exhibit distribution shift. The authors address this problem by augmenting the dataset with an indicator variable which indicates membership to dataset. After augmenting the dataset standard algorithms for structure learning are applied, with the additional restriction that the indicator variable may only be an ancestor. The authors provide theory that shows the procedure consistently estimates the local structures. The authors then show how the additional information obtained from the structure learned with the context variable can be used to disambiguate directions. Experimental results show the efficacy of the proposed approach. " ]
Causal discovery has witnessed significant progress over the past decades. Most algorithms in causal discovery consider a single domain with a fixed distribution. However, it is commonplace to encounter heterogeneous data (data from different domains with distribution shifts). Applying existing methods on such heterogeneous data may lead to spurious edges or incorrect directions in the learned graph. In this paper, we develop a novel score-based approach for causal discovery from heterogeneous data. Specifically, we propose a Multiple-Domain Score Search (MDSS) algorithm, which is guaranteed to find the correct graph skeleton asymptotically. Furthermore, benefiting from distribution shifts, MDSS enables the detection of more causal directions than previous algorithms designed for single domain data. The proposed MDSS can be readily incorporated into off-the-shelf search strategies, such as the greedy search and the policy-gradient-based search. Theoretical analyses and extensive experiments on both synthetic and real data demonstrate the efficacy of our method.
[]
[ { "authors": [ "Ryan Prescott Adams", "David JC MacKay" ], "title": "Bayesian online changepoint detection", "venue": "arXiv preprint arXiv:0710.3742,", "year": 2007 }, { "authors": [ "Marcus Bendtsen" ], "title": "Regime aware learning", "venue": "In Conference on Probabilistic Graphical Models,", "year": 2016 }, { "authors": [ "Vince D Calhoun", "Robyn Miller", "Godfrey Pearlson", "Tulay Adalı" ], "title": "The chronnectome: time-varying connectivity networks as the next frontier in fmri data", "venue": "discovery. Neuron,", "year": 2014 }, { "authors": [ "David Maxwell Chickering" ], "title": "Optimal structure identification with greedy search", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "AmirEmad Ghassami", "Negar Kiyavash", "Biwei Huang", "Kun Zhang" ], "title": "Multi-domain causal structure learning in linear systems", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Arthur Gretton", "Kenji Fukumizu", "Choon H Teo", "Le Song", "Bernhard Schölkopf", "Alex J Smola" ], "title": "A kernel statistical test of independence", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Kevin D Hoover" ], "title": "The logic of causal inference: Econometrics and the conditional analysis of causation", "venue": "Economics & Philosophy,", "year": 1990 }, { "authors": [ "Patrik O Hoyer", "Dominik Janzing", "Joris M Mooij", "Jonas Peters", "Bernhard Schölkopf" ], "title": "Nonlinear causal discovery with additive noise models", "venue": "In NIPS,", "year": 2009 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Bernhard Schölkopf" ], "title": "Identification of time-dependent causal model: A gaussian process treatment", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Yizhu Lin", "Bernhard Schölkopf", "Clark Glymour" ], "title": "Generalized score functions for causal discovery", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Jiji Zhang", "Joseph Ramsey", "Ruben Sanchez-Romero", "Clark Glymour", "Bernhard Schölkopf" ], "title": "Causal discovery from heterogeneous/nonstationary data", "venue": null, "year": 1903 }, { "authors": [ "Erich Kummerfeld", "David Danks" ], "title": "Tracking time-varying graphical structure", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Sébastien Lachapelle", "Philippe Brouillard", "Tristan Deleu", "Simon Lacoste-Julien" ], "title": "Gradient-based neural dag learning", "venue": "arXiv preprint arXiv:1906.02226,", "year": 2019 }, { "authors": [ "Young Ho Lee" ], "title": "Meta-analysis of genetic association studies", "venue": "Annals of laboratory medicine,", "year": 2015 }, { "authors": [ "Guillemette Marot", "Jean-Louis Foulley", "Claus-Dieter Mayer", "Florence Jaffrézic" ], "title": "Moderated effect size and p-value combinations for microarray", "venue": "meta-analyses. 
Bioinformatics,", "year": 2009 }, { "authors": [ "Judea Pearl" ], "title": "Causality: Models, Reasoning, and Inference", "venue": null, "year": 2000 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Russell A Poldrack", "Timothy O Laumann", "Oluwasanmi Koyejo", "Brenda Gregory", "Ashleigh Hover", "Mei-Yen Chen", "Krzysztof J Gorgolewski", "Jeffrey Luci", "Sung Jun Joo", "Ryan L Boyd" ], "title": "Long-term neural and physiological phenotyping of a single human", "venue": "Nature communications,", "year": 2015 }, { "authors": [ "Joseph Ramsey", "Madelyn Glymour", "Ruben Sanchez-Romero", "Clark Glymour" ], "title": "A million variables and more: the fast greedy equivalence search algorithm for learning high-dimensional graphical causal models, with an application to functional magnetic resonance", "venue": "images. International journal of data science and analytics,", "year": 2017 }, { "authors": [ "Shohei Shimizu", "Patrik O Hoyer", "Aapo Hyvärinen", "Antti Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Le Song", "Mladen Kolar", "Eric P Xing" ], "title": "Time-varying dynamic bayesian networks. In Advances in neural information processing", "venue": null, "year": 2009 }, { "authors": [ "Le Song", "Kenji Fukumizu", "Arthur Gretton" ], "title": "Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models", "venue": "IEEE Signal Processing Magazine,", "year": 2013 }, { "authors": [ "Makram Talih", "Nicolas Hengartner" ], "title": "Structural learning with time-varying components: tracking the cross-section of financial time series", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2005 }, { "authors": [ "Bo Thiesson", "Christopher Meek", "David Maxwell Chickering", "David Heckerman" ], "title": "Learning mixtures of bayesian networks", "venue": null, "year": 1998 }, { "authors": [ "J Tian", "J Pearl" ], "title": "Causal discovery from changes: a bayesian approach, ucla cognitive systems laboratory", "venue": "Technical report, Technical Report,", "year": 2001 }, { "authors": [ "Ioannis Tsamardinos", "Laura E Brown", "Constantin F Aliferis" ], "title": "The max-min hill-climbing bayesian network structure learning algorithm", "venue": "Machine learning,", "year": 2006 }, { "authors": [ "Eric P Xing", "Wenjie Fu", "Le Song" ], "title": "A state-space mixed membership blockmodel for dynamic network tomography", "venue": "The Annals of Applied Statistics,", "year": 2010 }, { "authors": [ "Yue Yu", "Jie Chen", "Tian Gao", "Mo Yu" ], "title": "Dag-gnn: Dag structure learning with graph neural networks", "venue": "arXiv preprint arXiv:1904.10098,", "year": 2019 }, { "authors": [ "Kun Zhang", "Aapo Hyvärinen" ], "title": "On the identifiability of the post-nonlinear causal model", "venue": "In UAI,", "year": 2009 }, { "authors": [ "Xun Zheng", "Bryon Aragam", "Pradeep K Ravikumar", "Eric P Xing" ], "title": "Dags with no tears: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xun Zheng", "Chen Dan", "Bryon Aragam", "Pradeep 
Ravikumar", "Eric P Xing" ], "title": "Learning sparse nonparametric dags", "venue": "arXiv preprint arXiv:1909.13189,", "year": 2019 }, { "authors": [ "Shengyu Zhu", "Zhitang Chen" ], "title": "Causal discovery with reinforcement learning", "venue": "arXiv preprint arXiv:1906.04477,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Discovering causal relations among variables is a fundamental problem in various fields such as economics, biology, drug testing, and commercial decision making. Because conducting randomized controlled trials is usually expensive or even infeasible, discovering causal relations from observational data, i.e. causal discovery (Pearl, 2000; Spirtes et al., 2000), has received much attention over the past few decades. Early causal discovery algorithms can be roughly categorized into two types: constraint-based ones (e.g. PC (Spirtes et al., 2000)) and score-based ones (e.g. GES (Chickering, 2002)). In general, these methods cannot uniquely identify the causal graph but are guaranteed to output a Markov equivalence class. Since the seminal work by Shimizu et al. (2006), several methods have been developed, achieving identifiability of the whole causal structure by making use of constrained Functional Causal Models (FCMs), including the linear non-Gaussian model (Shimizu et al., 2006), the nonlinear additive noise model (Hoyer et al., 2009), and the post-nonlinear model (Zhang & Hyvärinen, 2009). Recently, Zheng et al. (2018) proposed a score-based method that formulates the causal discovery problem as continuous optimization with a structural constraint that ensures acyclicity. Based on the continuous structural constraint, several researchers further proposed to model the causal relations by neural networks (NNs) (Lachapelle et al., 2019; Yu et al., 2019; Zheng et al., 2019). Another recent work Zhu & Chen (2019) used reinforcement learning (RL) for causal discovery, where the RL agent searches over the graph space and outputs a graph that fits the data best.\nThe above approaches are designed for data from a single domain with a fixed causal model, with the limitation that many of the edge directions cannot be determined without strong functional constraints. In addition, the sample size of data from one domain is usually not large enough to guarantee small statistical estimation errors. One way to improve statistical reliability is to combine datasets from multiple domains, such as P-value meta-analyses (Lee, 2015; Marot et al., 2009). The idea of combining multiple-domain data is commonly seen in learning with mixture of Bayesion networks (Thiesson et al., 1998). While mixture of Bayesion networks are usually used for density estimation, the purpose of causal analysis from multiple-domain data is completely different, it aims at discovering the underlying causal graphs for all domains. Regarding causal analysis from multiple-domain data, a challenge is the data heterogeneity problem: the data distribution may vary across domains. For example, in fMRI hippocampus signal analysis, the connection strength among different brain regions may change across different subjects (domains). Due to the distribution shift, directly pooling the data from multiple domains may lead to spurious edges. To tackle the issue, different ways have been investigated, including using sliding windows (Calhoun et al., 2014), online change point detection (Adams & MacKay, 2007), online undirected graph learning (Talih\n& Hengartner, 2005), locally stationary structure tracker (Kummerfeld & Danks, 2013), and regime aware learning (Bendtsen, 2016). However, these methods may suffer from high estimation variance due to sample scarcity, large type II errors, and a large number of statistical tests. Huang et al. 
(2015) recovers causal relations with changing modules by making use of certain types of smoothness of the change, while it does not explicitly locate the changing causal modules. Other similar methods, including Xing et al. (2010); Song et al. (2009), can be reduced to online parameter learning because the causal directions are given.
By utilizing the invariance property (Hoover, 1990; Tian & Pearl, 2001; Peters et al., 2016) and the more general independent change mechanism (Pearl, 2000), Ghassami et al. (2018) recently developed two methods, identical boundaries (IB) and minimal changes (MC), for causal discovery from multi-domain data. However, the proposed methods 1) assume causal sufficiency (i.e., all common causes of variables are measured), which usually does not hold in real circumstances, 2) are designed for linear systems only, and 3) are not capable of identifying causal directions from more than ten domains. Huang et al. (2019) proposed a more general approach called CD-NOD for both linear and nonlinear heterogeneous data, by extending the PC algorithm to tackle the heterogeneity issue. However, inheriting the drawbacks of constraint-based methods, CD-NOD involves a multiple testing problem and is time-consuming due to the large number of independence tests.
To overcome the limitations of existing works, we propose a Multiple-Domain Score Search (MDSS) method for causal discovery from heterogeneous data, which enjoys the following properties. (1) To avoid spurious edges when combining multi-domain data, MDSS searches over the space of augmented graphs, which includes an additional domain index as a surrogate variable to characterize the distribution shift. (2) The changing causal modules can be immediately identified from the recovered augmented graph. (3) Benefiting from causal invariance and the independent change mechanism, MDSS uses a novel Multiple-Domain Score (MDS) to help identify more causal directions beyond those in the Markov equivalence class from distribution-shifted data. (4) MDSS can be readily incorporated into off-the-shelf search strategies and is time-efficient and applicable to both linear and nonlinear data. (5) Theoretically, we show that MDSS is guaranteed to find the correct graph skeleton asymptotically, and further identifies more causal directions than other traditional score-based and constraint-based algorithms. Empirical studies on both synthetic and real data demonstrate the efficacy of our method." }, { "heading": "2 METHODOLOGY", "text": "In this section, we start with a brief introduction to causal discovery and distribution shifts (Section 2.1), and then, in Sections 2.2 and 2.3, we introduce our proposed Multiple-Domain Score Search (MDSS). In Section 2.2, MDSS starts with a predefined graph search algorithm to learn the skeleton of the causal graph, with the linear Bayesian information criterion (BIC) score or the nonlinear generalized score (GS; Huang et al., 2018) on the augmented causal system. Then, in Section 2.3, MDSS further identifies causal directions with the Multiple-Domain Score (MDS), based on the skeleton identified in Section 2.2. Both theoretically and empirically, we show that MDSS can identify more directions compared to algorithms that are designed for i.i.d. or stationary data." }, { "heading": "2.1 BACKGROUND IN CAUSAL DISCOVERY AND DISTRIBUTION SHIFTS", "text": "The basic causal discovery problem can be formulated as follows: Suppose there are d observable random variables, i.e. V = (V1, ..., Vd).
Each random variable satisfies the following generating process: Vi = fi(PAi, εi), where fi is a function modeling the causal relation between Vi and its parents PAi, and εi is a noise variable with non-zero variance. All the noise variables are independent of each other. The task of causal discovery is to recover the causal adjacency matrix B given the observed data matrix X ∈ R^{T×d}, where Bij = 1 indicates that Vi is a parent of Vj, and T is the sample size.
We denote the underlying causal graph over V as G0. For each Vi, we call P(Vi|PAi) its causal module. For a single domain, the joint probability can be factorized as P(V) = ∏_{i=1}^{d} P(Vi|PAi). Suppose there are n domains with distribution shifts (i.e. P(V) changes across domains), which implies that some causal modules change across domains. The changes may be caused by the variation of functional models, causal strength, or noise variance. Furthermore, we have the following assumptions.
Assumption 1. The changes of causal modules can be represented as functions of the domain index C, denoted by g(C).
Assumption 2. There is no confounder in each single dataset, but we allow the changes of different causal modules to be dependent.
Remark: If the changes in several causal modules are dependent, they can be regarded as being driven by special \"confounders\" that simultaneously affect these causal modules. As a consequence of such confounders, previous causal discovery algorithms designed for i.i.d. or stationary data may output erroneous edges. See Section 3.1 for an illustration. Thus, causal discovery from multiple-domain data with distribution shifts (i.e., heterogeneous data) can be much more difficult than that from single-domain data." }, { "heading": "2.2 SKELETON ESTIMATION ON AUGMENTED GRAPHS", "text": "With Assumptions 1 and 2, it is natural to consider g(C) as an extra variable in order to remove any potential influence caused by these special confounders. We assume that there are L such confounders (g1(C), ..., gL(C)). The causal relation between each observable variable Vi and its parents PAi can be formalized as
Vi = fi(PAi, gi(C), θi(C), εi),   (1)
where gi(C) ⊆ {g_l(C)}_{l=1}^{L} is the set of confounders that influence Vi, and θi(C) are the effective parameters in Vi’s causal module, which are also assumed to be functions of C and mutually independent across all variables.
Let G0 be the underlying causal graph over V. We denote the graph resulting from adding arrows gi(C) → Vi and θi(C) → Vi to G0 for each Vi in V as Gaug, over V ∪ {g_l(C)}_{l=1}^{L} ∪ {θ_i(C)}_{i=1}^{d}. We call Gaug an augmented graph (see Figure 1(d) as an example), which satisfies the following assumption.
Assumption 3. The joint distribution over V ∪ {g_l(C)}_{l=1}^{L} ∪ {θ_i(C)}_{i=1}^{d} is Markov and faithful to Gaug.
To remove the potential influence from confounders and recover causal relations from multiple domains, one way is to perform causal discovery algorithms on the augmented graph. While {g_l(C)}_{l=1}^{L} and {θ_i(C)}_{i=1}^{d} are not directly observed, we take C as a surrogate variable (Huang et al., 2019) for them, because C is always available as the domain index. Given Assumptions 1, 2 and 3, one can apply any score-based method over V ∪ {C} to recover the causal relations among the variables V as if {g_l(C)}_{l=1}^{L} ∪ {θ_i(C)}_{i=1}^{d} were known. For simplicity, we denote the graph over V ∪ {C} as the augmented graph as well. Since C is the domain index, P(C) follows a discrete uniform distribution.
Correspondingly, the generating process of non-stationary data can be considered as follows: First, we generate random values from P(C), and then we generate data points over V according to the SEM in Equation 1. Finally, the generated data points are sorted in ascending order according to the values of C (i.e., data points having the same value of C are regarded as belonging to the same domain). In other words, we observe the distribution P(V|C), where P(V|C) may change across different values of C, resulting in non-stationary data. Note that if we do not include C in the system explicitly, samples of V are not i.i.d. However, after explicitly including the domain index C in the system, P(V, C) is fixed, and thus the pooled data are i.i.d. samples from the distribution P(V, C).
Before stating our main result, we first give the definitions of a globally consistent scoring criterion and a locally consistent scoring criterion, which will be used in the paper.
Definition 1 (Globally Consistent Scoring Criterion). Let D be a dataset consisting of T i.i.d. samples from some distribution P(·). Let H and G be any DAGs. A scoring criterion S is globally consistent if the following two properties hold as T → ∞:
1. If H contains P and G does not contain P, then S(H, D) > S(G, D).¹
2. If H and G both contain P, and G contains fewer parameters than H, then S(H, D) < S(G, D).
Definition 2 (Locally Consistent Scoring Criterion). Let D be a dataset consisting of T i.i.d. samples from some distribution P(·). Let G be any DAG, and let G′ be the DAG that results from adding the edge Vi → Vj to G. A scoring criterion S(G, D) is locally consistent if the following two properties hold as T → ∞:
1. If Vj ⊥̸ Vi | PA_j^G, then S(G′, D) > S(G, D).
2. If Vj ⊥ Vi | PA_j^G, then S(G′, D) < S(G, D).
¹Here, a larger score means the corresponding graph is closer to the equivalence class of the true DAG, while the MDS defined in Section 2.3 should be regarded as a type of \"loss function\" which needs to be minimized.
It has been shown that the BIC score and the GS score are both globally and locally consistent (Chickering, 2002; Huang et al., 2018).
The procedure for skeleton estimation on augmented graphs is described in Algorithm 1. The predefined graph search algorithms will be discussed in Section 2.4. Apart from the recovered skeleton over V, the changing modules can be detected as well, in Step 4 of Algorithm 1. It is important to note that we allow causal relations to be either linear or nonlinear. If they are nonlinear, we apply GS as the score function. When they are linear, although we could also use GS, we use linear BIC instead, because it is less likely to overfit linear data and is computationally more efficient.
Algorithm 1 Skeleton Search on Augmented Graph Input: n datasets, each with T observations, d variables and the index C. Output: skeleton S of Gaug’s subgraph G1 over V, and the set of variables VC ⊆ V that are connected with C.
1: Pool all datasets with an extra surrogate variable C to form a data matrix X ∈ R^{nT×(d+1)}. 2: Use the predefined graph search algorithm with BIC or GS plus acyclicity constraints to recover the augmented graph. Eliminate any direction Vi → C in the graph, using the prior that no variable Vi affects the domain index. This step yields the recovered augmented graph Gaug. 3: Discard the index variable in Gaug to obtain the induced subgraph G1. Discard the directions in G1 and output the skeleton S of G1.
4: Detect changing causal modules by inspecting Gaug recovered in Step 2, and output VC.
The validity of searching on the augmented graph is guaranteed by Theorem 1.
Theorem 1. Let D be the pooling of all datasets, and DC be the augmented dataset with the domain index as an extra random variable. Let G0 be the underlying causal graph for the distribution of D over V, and GC be the underlying causal graph for the distribution of DC over V ∪ C. If we denote by G′C the graph resulting from the following modifications of GC: 1. adding any edges, 2. deleting any edges, or 3. reversing any edges such that the conditional dependence relations of GC change, then we have S(GC, DC) > S(G′C, DC), where S is any globally consistent scoring criterion.
The proof of the theorem is given in Appendix A.1. Intuitively, this theorem means we will obtain an augmented graph that is in the same Markov equivalence class as the true augmented graph if we maximize the score." }, { "heading": "2.3 CAUSAL DIRECTION DETERMINATION BY MULTIPLE-DOMAIN SCORE", "text": "For each variable Vk ∈ VC, we prove that it is possible to determine the directions of the edges that connect to Vk. We denote by Vl any variable that connects to Vk. There are two possible cases:
1) Vl ∉ VC. In this case, C − Vk − Vl forms an unshielded triple. It is intuitive to incorporate the prior that C → Vk (i.e., the change of domain leads to the distribution shift of Vk). There are two possible patterns in this case: C → Vk → Vl and C → Vk ← Vl, which we denote as P and P′ respectively. For P, the causal mechanism P(effect|cause) is invariant when P(cause) changes. For P′, we have the invariance of P(cause) when the causal mechanism P(effect|cause) changes, which is complementary to the invariance of causal mechanisms. The direction between Vk and Vl can be determined as long as a globally consistent score is used. To be specific, suppose P is the true causal pattern underlying the generative distribution; then the score of P will be larger than that of P′ if the score function used is globally consistent and decomposable, because, compared with P, P′ eliminates a conditional independence (C ⊥ Vl | Vk) that actually holds in the generative distribution. This causal direction determination is not achievable for algorithms designed for stationary data from a single domain (because the domain index cannot be used as an additional variable in this case). To utilize this prior, we simply eliminate any direction Vi → C, as described in Step 2 of Algorithm 1. Figure 1(a) is a graphical illustration of this case.
2) Vl ∈ VC. In this case, both Vk and Vl are connected to C, which is much more difficult than case 1). We propose a novel multiple-domain score (MDS) that utilizes the property of independent changes of causal modules to determine causal directions based on the causal skeleton derived from Algorithm 1. To illustrate the idea, we take the two-variable case as an example. Here we assume the true causal direction is V1 → V2. Figure 1(b) depicts the case where θ1 and θ2 are independent. (We drop the notation of the domain index C for simplicity.) In other words, P(V1; θ1) and P(V2|V1; θ2) change independently. If the recovered direction is reversed (see Figure 1(c)), we factorize the joint distribution as
P(V1, V2; θ′1, θ′2) = P(V2; θ′2) P(V1|V2; θ′1),   (2)
where θ′1 and θ′2 are assumed to be sufficient for P(V2) and P(V1|V2), respectively. Since V1 ← V2 is not the true direction, θ′1 and θ′2 are not independent, and they are determined jointly by θ1 and θ2.
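To preview how this asymmetry can be exploited, the sketch below estimates the module parameters per domain by linear regression and compares the absolute covariance of the estimated parameters for both candidate directions (a minimal two-variable sketch on synthetic data; using intercepts as the changing parameters and the specific variances are illustrative assumptions, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n_domains, T, w = 30, 500, 1.0

def fit_module(y, X):
    # Returns the intercept of a least-squares fit of y on X (X may be None),
    # used here as a simple stand-in for the changing module parameter theta.
    if X is None:
        return y.mean()
    A = np.column_stack([np.ones_like(y), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

theta_fwd, theta_bwd = [], []
for _ in range(n_domains):
    b1, b2 = rng.normal(0, 2), rng.normal(0, 1)   # independent module changes
    V1 = b1 + rng.normal(0, 1, T)
    V2 = w * V1 + b2 + rng.normal(0, 1, T)        # true direction: V1 -> V2
    theta_fwd.append((fit_module(V1, None), fit_module(V2, V1)))
    theta_bwd.append((fit_module(V2, None), fit_module(V1, V2)))

dep_fwd = abs(np.cov(np.array(theta_fwd).T)[0, 1])
dep_bwd = abs(np.cov(np.array(theta_bwd).T)[0, 1])
print("dependence V1->V2:", dep_fwd)   # close to zero
print("dependence V2->V1:", dep_bwd)   # noticeably larger
print("chosen direction:", "V1->V2" if dep_fwd < dep_bwd else "V2->V1")
```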
Based on this point, the causal direction can be determined by comparing the dependence between θ1 and θ2 with the dependence between θ′1 and θ′2, and choosing the direction with the smaller dependence.
For linear systems, the dependence can be described with the covariance. θ1 and θ2 can be easily obtained by regressing V1 and V2 on their parents, respectively. We first perform the regression for each domain and then calculate the covariance between θ1(C) and θ2(C). When there are more than two variables connected to C, we denote this set of variables as VC, with cardinality m. For each variable V_C^k ∈ VC and its parents PA_C^k ⊆ VC, we calculate the sum of the dependences between the parameters of V_C^k’s causal module and the parameters of the causal module of each variable in PA_C^k. To incorporate the minimization of such dependence into a score-based method, we propose the MDS for linear systems:
MDS_linear = (1/n) ∑_{i=1}^{n} (d ln(T_i) − 2 ln(L_i)) + λ1 I(G ∉ DAGs) + λ2 h(A) + (λ3/m) ∑_{k=1}^{m} |cov(θ_{V_C^k}, θ_{PA_C^k})|,   (3)
where n, d, T_i and L_i represent the number of domains, the number of variables (here we assume this quantity is the same for all domains), the sample size and the maximized log-likelihood for domain i, respectively. λ1, λ2 and λ3 are regularization coefficients. λ3 is fixed to 0.001 in our experiments, while λ1 and λ2 are adjusted dynamically during training (see Zhu & Chen (2019) for how λ1 and λ2 are adjusted). A is the weighted adjacency matrix recovered by the algorithm. m is the number of nonstationary variables. cov(·, ·) is the covariance operator. The first term in Equation 3 is the average of the BIC over the n domains; the second and third terms are acyclicity constraints proposed in Zhu & Chen (2019) to narrow down the search space to DAGs. See Appendix A.2 for more details about the acyclicity constraints.
For nonlinear systems, θ cannot be calculated explicitly. In the two-variable case, the dependence between θ1 and θ2 can be characterized by the dependence between P(V1) and P(V2|V1), under the assumption that θ1 and θ2 are sufficient for the corresponding distribution modules. To calculate the dependence between P(V1) and P(V2|V1), Huang et al. (2019) propose to first use kernel embeddings of the distributions P(V1) and P(V2|V1), and then measure their dependence with the extended Hilbert-Schmidt Independence Criterion (HSIC; Gretton et al., 2008) in a Reproducing Kernel Hilbert Space (RKHS). When there are more than two variables connected to C, for each variable V_C^k ∈ VC and its parents PA_C^k ⊆ VC, we calculate the dependence between P(PA_C^k) and P(V_C^k | PA_C^k). We propose the corresponding MDS for nonlinear systems by integrating such dependence with GS:
MDS_nonlinear = (1/n) ∑_{i=1}^{n} GS_i + λ1 I(G ∉ DAGs) + λ2 h(A) + (λ3/m) ∑_{k=1}^{m} HSIC(μ_{V_C^k | PA_C^k}, μ_{PA_C^k}),   (4)
where GS_i is the generalized score for domain i, and μ_{V_C^k | PA_C^k} and μ_{PA_C^k} are the kernel embeddings of the distributions P(V_C^k | PA_C^k) and P(PA_C^k), respectively. HSIC(·, ·) is the HSIC operator that measures the dependence between two random variables. See Appendices A.3, A.4 and A.5 for brief descriptions of GS, kernel embeddings of distributions and HSIC, respectively.
Degeneration issue. If we apply search strategies over the entire space of graphs over V to optimize the MDS, the MDS penalty (corresponding to the fourth term in Equations 3 and 4) tends to eliminate all edges between each V_C^k and its parents, because the dependence between θ_{V_C^k} and the empty set (i.e., no parents for V_C^k) is 0. We call this a degeneration issue.
To tackle this issue, we optimize the MDS score based on the skeleton of the graph G1 from Algorithm 1. To be specific, we fix the skeleton S of G1 and apply search strategies over the space defined by S to optimize the MDS score. In other words, the MDS is optimized by only altering the direction of each edge in S. With this solution to the degeneration issue, we claim that the proposed MDS can recover more correct directions compared with G1. This is supported by Theorems 2 and 3; see Appendices A.6 and A.7 for the proofs.
Let D be the pooling of n datasets with distribution shifts and G0 be the DAG underlying the distribution of D. Let S be the skeleton of G0 and G1 (recall that G1 is in the same equivalence class as G0, which means they have the same skeleton). Let E1 be the set of edges that exist in both G0 and G1 but have different directions in G0 and G1. Let E2 ⊆ E1 be the set of edges whose left node (or variable) and right node are both nonstationary. Let n1 and n2 be the cardinalities of E1 and E2.
Theorem 2. For linear systems, let G* = argmin_G MDS_linear(G, D), let E*2 ⊆ E2 be the set of edges whose directions are correctly determined by G*, and let n* denote the cardinality of E*2. Given that E2 is not empty and that G1 and G* have the same skeleton S, G* is in the same equivalence class as G0 and n* = n2.
Theorem 3. For nonlinear systems, let G* = argmin_G MDS_nonlinear(G, D), let E*2 ⊆ E2 be the set of edges whose directions are correctly determined by G*, and let n* denote the cardinality of E*2. Given that E2 is not empty and that G1 and G* have the same skeleton S, G* is in the same equivalence class as G0 and 0 ≤ n* ≤ n2.
Theorems 2 and 3 mainly state that, under proper assumptions, the directions of some edges whose left and right nodes are both nonstationary can be correctly determined by the proposed method.
Confounding case. When the confounder g(C) exists (e.g. Figure 1(d)), the above approach still works if the influence from the confounder is not very strong, for the following reason: for the correct direction, the dependence between θ1 and θ2 would come from the confounder alone, while for the wrong direction, the dependence would come from the confounder as well as from the incorrect direction." }, { "heading": "2.4 GRAPH SEARCH STRATEGIES", "text": "The proposed MDSS can be readily incorporated into off-the-shelf search strategies. In this paper, we adopt the policy-gradient-based search strategy (Zhu & Chen, 2019) to search for the optimal causal structure. Compared with other search strategies, such as greedy equivalence search (GES; Chickering, 2002), max-min hill climbing (Tsamardinos et al., 2006), and direct search by regarding the weighted graph adjacency matrix as parameters (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019), the policy-gradient-based search by a reinforcement learning (RL) agent with a stochastic policy can automatically determine where to search given the uncertainty information of the learned policy, which gets updated promptly by the stream of reward signals (Zhu & Chen, 2019). The graph search strategy using RL has been shown empirically to outperform the other search strategies mentioned above.
The idea of causal discovery with RL can be summarized as follows. The algorithm uses an encoder-decoder neural network model to generate directed graphs from the observed data, which are then used to compute rewards consisting of the predefined score function as well as some regularization terms for acyclicity.
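As a concrete illustration of the acyclicity regularization just mentioned, the sketch below evaluates the penalty h(A) = tr(e^{A∘A}) − d (detailed in Appendix A.2) on an acyclic and a cyclic adjacency matrix; this is a minimal sketch using SciPy's matrix exponential, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import expm

def h(A):
    # Acyclicity penalty from Zheng et al. (2018): zero iff the weighted
    # adjacency matrix A corresponds to a DAG (A * A is the Hadamard product).
    return np.trace(expm(A * A)) - A.shape[0]

dag = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])   # 1 -> 2 -> 3, acyclic
cyclic = dag.copy()
cyclic[2, 0] = 1.                # adds 3 -> 1, creating a cycle

print(h(dag))     # ~0.0
print(h(cyclic))  # > 0, penalizing the cycle
```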
The encoder-decoder model can be regarded as an \"actor\" that learns to generate \"actions\" (i.e., graph adjacency matrices) in the actor-critic framework, which is commonly used in RL. The reward function can be regarded as the \"environment\" that evaluates how good the \"action\" is (i.e., how well the produced graph adjacency matrix fits the observed data). The weights of the encoder-decoder model are trained by policy gradient and stochastic optimization methods. The output of the algorithm is the graph that achieves the best reward during the training process.
To integrate the MDS with the policy-gradient-based search, we replace the predefined score function in the original paper (where BIC is used) with the MDS. Apart from the policy-gradient-based search, we also experiment with greedy equivalence search, details of which can be found in Appendix A.8.
The complete search procedure is described in Algorithm 2.
Algorithm 2 Multiple-Domain Score Search Input: n datasets, each with T observations, d variables and the index C. Output: causal graph G2 over V.
1: Execute Algorithm 1 with all the datasets and the corresponding domain index as input, and output the skeleton S and the nonstationary variables VC. 2: Execute the predefined graph search algorithm with the MDS in the space defined by S, and output G2 over V. 3: Perform any pruning methods on G2 if needed." }, { "heading": "3 EXPERIMENTS", "text": "In this section, we conduct empirical studies to show the effectiveness of our MDSS method combined with the MDS score. We compare MDSS to some well-known causal discovery algorithms that are designed for i.i.d. or stationary data from a single domain (GES (Chickering, 2002), PC (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), NO-TEARS (Zheng et al., 2018) and RL (Zhu & Chen, 2019)) as well as algorithms designed for heterogeneous data from multiple domains (CD-NOD (Huang et al., 2019), MC and IB (Ghassami et al., 2018)). The comparison is made on both synthetic and real data.
We evaluate the estimated graphs using three metrics: True Negative Rate (TNR), True Positive Rate (TPR), and Structural Hamming Distance (SHD, i.e., the smallest number of edge additions, deletions, and reversals to convert the estimated graph into the true DAG). A lower SHD indicates a better estimate of the causal graph. For algorithms that output a completed partially directed acyclic graph (CPDAG), we randomly choose a direction for the undirected edges." }, { "heading": "3.1 A TOY EXAMPLE", "text": "We use a synthetic toy example to illustrate the influence of the confounders g(C) on algorithms designed for homogeneous data (we use RL), and demonstrate that MDSS can avoid such influence and further identify more directions. See Appendix A.9 for this example." }, { "heading": "3.2 SYNTHETIC DATA", "text": "In this section, we conduct extensive experiments with MDSS and other causal discovery algorithms on linear and nonlinear synthetic data. We denote by n the number of datasets, each with d variables and T observations. We set n ∈ {6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}, d ∈ {6, 7, 8} and T = 100 for both linear and nonlinear data. We repeat each setting 20 times with DAGs randomly generated by the Erdős–Rényi (ER) model with parameter 0.3. Each variable Vi is chosen as nonstationary with probability 0.6. Similar to Section 3.1, linear data are generated using the linear SEM Vi = wi·PAi + bi + εi; we fix wi and εi across domains and vary bi if Vi is chosen as nonstationary.
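The following sketch illustrates this linear generating process for one randomly ordered DAG (a minimal illustration; the ER edge probability and nonstationarity probability follow the setting above, while the weight and shift distributions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_er_dag(d, p=0.3):
    # Upper-triangular adjacency matrix => acyclic by construction.
    B = (rng.random((d, d)) < p).astype(float)
    return np.triu(B, k=1)

def generate_domains(B, n_domains=6, T=100, p_nonstat=0.6):
    d = B.shape[0]
    W = B * rng.uniform(0.5, 2.0, size=B.shape)      # causal weights, fixed across domains
    nonstat = rng.random(d) < p_nonstat              # which causal modules change
    datasets = []
    for _ in range(n_domains):
        b = np.where(nonstat, rng.normal(0, 2, d), 0.0)  # domain-specific shifts b_i
        X = np.zeros((T, d))
        for j in range(d):                           # column order is a topological order
            X[:, j] = X @ W[:, j] + b[j] + rng.normal(0, 1, T)  # noise distribution fixed
        datasets.append(X)
    return datasets, nonstat

B = sample_er_dag(6)
datasets, nonstat = generate_domains(B)
print(len(datasets), datasets[0].shape, nonstat)
```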
Nonlinear data are generated using the nonlinear SEM Vi = fi(PAi) + bi + εi, where fi(·) is randomly picked from {sin(·), cos(·), sigmoid(·)}, bi varies if Vi is nonstationary, and εi stays invariant.
We first consider the setting with n = 6 and d = 10. MDSS, MC, IB, and CD-NOD are tested on data from all domains. GES, PC, LiNGAM, NO-TEARS, and RL are tested on data from all domains as well as on data from a single domain (the domain is randomly chosen). For GES, we use fast GES (FGES; Ramsey et al., 2017), which is an improved version of the original GES. The mean and standard deviation are reported in Tables 1 and 2. As we can see, MDSS outperforms the other algorithms on both linear and nonlinear data. The performances of PC, FGES, LiNGAM, NO-TEARS, and RL on the pooled multi-domain data are worse than those on single-domain data. Specifically, despite the minor increase in TPR, their TNRs decrease dramatically when data from all domains are used. This phenomenon further supports our proposition that distribution shifts will introduce spurious edges if not properly dealt with.
We further compare the performance of MDSS, IB, MC and CD-NOD by varying d and n. The results are reported in Figure 2. The black curves in the figures of rows 2 and 3 are shorter than the others because CD-NOD takes too much time to produce any result when d > 6 and n > 15. According to the results, MDSS outperforms the others in most cases.
To demonstrate that the proposed MDS contributes to the performance, we conduct some ablation studies. To be specific, we keep the directions in Step 3 of Algorithm 1 and output G1 (Algorithm 2, i.e., the MDS search, is not executed). We use the same experimental setting as Table 1. The results (TPR, TNR and SHD) are
2) The current framework of MDSS cannot deal with the more general case where causal directions also change, while this phenomenon does exist in some real-world circumstances." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Proof. Let P (V) be the distribution of the data over V , PC(V, C) be the distribution of the augmented data over V ∪ C. According to Markov and faithfulness conditions, GC is the perfect map of PC(V, C). 1. If G′C is the graph after deleting any edges from GC , then GC contains PC(V, C) while G ′ C does not. According to Definition 1, we have S(GC , DC) > S(G′C , DC). 2. If G′C is the graph after reversing any edges from GC , that changes the conditional dependence of GC , then obviously GC contains PC(V, C) while G′C does not. According to Definition 1, we have S(GC , DC) > S(G ′ C , DC). 3. If G′C is the graph after adding any edges from GC . Although both G ′ C and GC contain PC(V, C), G ′ C has more parameters. According to Definition 1, we have S(GC , DC) > S(G′C , DC)." }, { "heading": "A.2 ACYCLICITY CONSTRAINTS", "text": "Causal discovery from samples of a joint distribution is a challenging combinatorial problem because of the intractable search space super-exponential in the number of graph nodes. Recently, Zheng et al. (2018) formulates the structure learning problem as a purely continuous optimization problem by a new characterization of acyclicity that is not only smooth but also exact. Zheng et al. (2018) proposes to measure the \"DAG-ness\" of a graph by\nh(A) = tr ( eA◦A ) − d, (5)\nwhere A is a weighted adjacency matrix and d is the number of node in the graph. Function h(·) satisfies the following properties:\n• h(A) = 0 if and only if A is acyclic. • The values of h quantify the \"DAG-ness\" of the graph. • h is smooth. • h and its derivatives are easy to compute.\nFurther, Zhu & Chen (2019) finds that h(A), which is non-negative, can be small for certain cyclic graphs and its minimum over all non-DAGs is not easy to compute. As a consequence, it would require a very large penalty weight for h(A) to obtain exact DAGs if only h(A) is used. To address the issue, Zhu & Chen (2019) proposes another acyclicity penalty term I(G /∈ DAGs), which is the indicator function w.r.t. acyclicity to induce exact DAGs. The combination of the above two acyclicity constraints can be written as λ1I(G /∈ DAGs) + λ2h(A), which corresponds to the second and third terms in our proposed MDS. See their original papers Zheng et al. (2018) and Zhu & Chen (2019) for more details." }, { "heading": "A.3 GENERALIZED SCORE", "text": "We use generalized score (GS (Huang et al., 2018)) as a model selection criteria to measure how well the a graph fits the data. Here we give a brief introduction of the calculation of GS.\nAssume X is a random variable with domain X , andHX is a reproducing kernel Hilbert space (RKHS) on X with continuous feature mapping φX : X → HX . Similarly we define variable Y , Z with domain Y , Z , the corresponding RKHSHY ,HZ and feature mapping φY , φZ . Let Z̈ := (Y, Z), consider the following two regression functions in RKHS:\nφX (X) = F1(Z) + U1, φX (X) = F2(Z̈) + U2, (6)\nwhere F1 : Z → HX and F2 : Z̈ → HX . If X ⊥Y |Z, the following equation holds: EZ [V arX|Z [φX (X)|Z]] = EZ̈ [V arX|Z̈ [φX (X)|Z̈]], (7)\nwhich means that it is not useful to incorporate Y as a predictor of X given Z, so the first model (i.e. 
the model with less complexity) in Equation 6 is preferred.
Cross-validated likelihood is used to express this preference. To perform cross-validation, the whole dataset D is split into a training set and a test set, and this procedure is repeated K times, i.e., K-fold cross-validation. Let D_1^(k) and D_0^(k) be the k-th training set and the k-th validation set, respectively. Let D_{1,i}^(k) and D_{0,i}^(k) be the data of Xi and its parents in the training set and the validation set, respectively. The GS of a DAG G_h using cross-validated likelihood is calculated as
S_CV(G_h; D) = ∑_{i=1}^{m} S_CV(Xi, PA_i^{G_h}) = ∑_{i=1}^{m} ( (1/K) ∑_{k=1}^{K} ℓ(F̂_i^(k) | D_{0,i}^(k)) ),   (8)
where PA_i^{G_h} are the parents of Xi in the graph G_h, F̂_i^(k) is the regression function estimated from the k-th training data D_{1,i}^(k), and ℓ(F̂_i^(k) | D_{0,i}^(k)) is the log-likelihood evaluated on the k-th validation set with the estimated regression function.
Another type of GS, based on marginal likelihood, is also proposed; see Huang et al. (2018) for more details." }, { "heading": "A.4 KERNEL EMBEDDING OF DISTRIBUTIONS", "text": "According to Equation 4 in our paper, we need to calculate the dependence between the distributions P(V_C^k | PA_C^k) and P(PA_C^k). The dependence between random variables can be measured by the Hilbert-Schmidt Independence Criterion (HSIC), which will be discussed in the next section. To transform the distributions of data from different domains into random variables in RKHS, we use kernel embeddings of conditional distributions (Song et al., 2013). In the rest of this section, we denote PA_C^k as X and V_C^k as Y for simplicity.
Let X be the domain of X, and (H, k) be a reproducing kernel Hilbert space (RKHS) with a measurable kernel on X. Let φ(x) ∈ H be a continuous feature mapping φ_X: X → H. Similar notations are used for the variables Y and C. We define the cross-covariance operator C_YX: H → G as C_YX := E_YX[φ(X) ⊗ ψ(Y)]. The kernel embedding of the conditional distribution P(X | C = c_n) for data from a given domain C = c_n can be calculated as
μ_{X|C=c_n} = C_XC C_CC^{-1} φ(c_n).   (9)
The empirical estimate of μ_{X|C=c_n} is
μ̂_{X|C=c_n} = (1/N) Φ_x Φ_c^⊤ ((1/N) Φ_c Φ_c^⊤ + λI)^{-1} φ_{c_n} = Φ_x (K_c + λI)^{-1} k_{c,c_n},   (10)
where N is the sample size, Φ_x := [φ(x_1), ..., φ(x_N)], Φ_c := [φ(c_1), ..., φ(c_N)], K_c(c_t, c_t′) = ⟨φ(c_t), φ(c_t′)⟩, and k_{c,c_n} := [k(c_1, c_n), ..., k(c_N, c_n)]^⊤. The corresponding Gram matrix with a Gaussian kernel with width σ_x is
M_X^H = exp( −( diag(M_X^l) · 1_N + 1_N · diag(M_X^l) − 2 M_X^l ) / (2σ_x^2) ),   (11)
where diag(·) sets all off-diagonal entries of the matrix to zero, and 1_N is an N × N matrix with all entries equal to 1. M_X^l is the Gram matrix with a linear kernel:
M_X^l = K_c (K_c + λI)^{-1} K_x (K_c + λI)^{-1} K_c,   (12)
whose (c, c′) entry can be calculated by
M_X^l(c, c′) = μ̂_{X|C=c}^⊤ μ̂_{X|C=c′} = k_{c,c}^⊤ (K_c + λI)^{-1} Φ_x^⊤ Φ_x (K_c + λI)^{-1} k_{c,c′} = k_{c,c}^⊤ (K_c + λI)^{-1} K_x (K_c + λI)^{-1} k_{c,c′}.   (13)
Similarly, we can calculate the empirical kernel embedding of the conditional distribution P(Y | X, C = c_n) and the corresponding Gram matrix, which we denote as μ̂_{Y|X,C=c_n} and M_{Y|X}^G, respectively. For more details about kernel embeddings of distributions, see Song et al. (2013) and Huang et al. (2019)."
}, { "heading": "A.5 EXTENDED HILBERT SCHMIDT INDEPENDENCE CRITERION", "text": "With the notations and results in the above section, we can calculate the dependence between P (X) and P (Y |X) by extended Hilbert Schmidt Independence Criterion:\nHSICP (X),P (Y |X) = 1 (N − 1)2 tr ( MHXHM G Y |XH ) , (14)\nwhere H is a matrix used to center the features, whose entries Hij := δij −N−1. Huang et al. (2019) uses a normalized version of the estimated HSIC, which is invariant to the scale in MHX and M G Y |X :\nHSICNP (X),P (Y |X) = HSICP (X),P (Y |X)\n1 N−1 tr ( MHXH ) · 1N−1 tr ( MGY |XH ) = tr ( MHXHM G Y |XH\n) tr ( MHXH ) tr ( MGY |XH ) (15)" }, { "heading": "A.6 PROOF OF THEOREM 2", "text": "Definition 3. Score Equivalence. Let D be a dataset consisting of T records that are i.i.d samples from some distribution P (·). A score function S is score equivalent if for any two DAGs G and G′ which are in the same Markov equivalence class, we have S(G′, D) = S(G, D).\nProof. Let G′ = arg min G MDSnonlinear(G,D), G′′ = arg min G MDSnonlinear(G,D) and G′′′ = arg min G\nMDSnonlinear(G,D) be three DAGs with the same skeleton S as G1, let E′2, E′′2 and E′′′2 ∈ E2 be the set of edges with cardinality n′, n′′ and n′′′, whose directions are correctly determined by G′, G′′ and G′′′ respectively.\n• If G′ is not in the same equivalent class as G0 and n′ = n2. According to score consistency of BIC, the first term in MDSlinear(G′,D) is larger than that in MDSlinear(G∗,D) and the last term in MDSlinear(G′,D) and MDSlinear(G∗,D) is same because n′ = n2, then MDSlinear(G ′,D) > MDSlinear(G∗,D).\n• If G′ is in the same equivalent class as G0 and n′ < n2. According to score equivalence of BIC, the first term in MDSlinear(G′,D) and MDSlinear(G∗,D) is same and the last term in MDSlinear(G′,D) is larger than that in MDSlinear(G∗,D) because n′ < n2, then MDSlinear(G ′,D) > MDSlinear(G∗,D).\n• If G′ is not in the same equivalent class as G0 and n′ < n2. then MDSlinear(G′,D) > MDSlinear(G ∗,D) clearly holds according to the above two cases.\nAll the above three cases are contradict with the condition that these three graphs are DAGs that minimize MDSnonlinear(G,D)." }, { "heading": "A.7 PROOF OF THEOREM 3", "text": "The conclusion that 0 6 n∗ 6 n2 holds clearly. The conclusion that G∗ is in the same equivalent class as G0 can be proved similar to the proof of Theorem 2. MDSnonlinear is not able to guarantee n∗ = n2 mainly because GS is not score equivalent." }, { "heading": "A.8 EXPERIMENTS OF GREEDY EQUIVALENCE SEARCH", "text": "The proposed MDSS can be readily incorporated into off-the-shelf search strategies. In the main paper, we adopt policy-gradient-based search strategy (Zhu & Chen, 2019) to search the optimal causal structure. In this section, we demonstrate that greedy equivalence search (Chickering, 2002) can also be utilized as the search strategy.\nSimilar to the two-stage search in our main paper, we first perform greedy equivalence search on the augmented graphs (i.e. graphs with domain index as an additional node) to optimize the score (BIC for linear systems and GS for nonlinear systems). The output of this step is an equivalence class of the augmented graph. Then we utilize distribution shifts of the nonstationary variables to detect more edge directions. Consider the same setting as in Section 3.2 (i.e. , n = 6 and d = 10). In linear case, TPR, TNR, SHD for MDSS with greedy equivalence search is 0.67 ± 0.15, 0.75 ± 0.16 and 4.75 ± 2.05 respectively. 
In the nonlinear case, TPR, TNR and SHD for MDSS with greedy equivalence search are 0.69 ± 0.13, 0.63 ± 0.21 and 5.80 ± 2.67 respectively. Compared with the results in Table 1: 1) MDSS with greedy equivalence search outperforms MC, IB and CD-NOD in both the linear and nonlinear cases; 2) although MDSS with greedy equivalence search is not as good as MDSS with policy-gradient-based search in the linear case, it achieves results comparable to the policy-gradient-based one in the nonlinear case." }, { "heading": "A.9 A TOY EXAMPLE", "text": "The example consists of 10 linear datasets with 4 variables, whose underlying causal graph is given in Figure 4(a). We use the linear SEM $V_i = w_i PA_i + b_i + \epsilon_i$ to generate the data. For each variable $V_i$, $w_i$ is fixed to 1 and $\epsilon_i$ is fixed as standard Gaussian noise across all datasets. To introduce distribution shifts, we vary $b_i$ across datasets; here $b_3$ is chosen to be invariant (i.e. $V_3$ is stationary).\nWe first run RL on a single dataset (randomly chosen from the 10 datasets) and on the pooling of all datasets, with results shown in Figures 4(b) and 4(c). RL misidentifies the direction between $V_1$ and $V_2$ in both cases, which is expected because RL uses BIC plus an acyclicity constraint as its score function, and BIC is score equivalent. Furthermore, when multiple datasets with distribution shifts are used, RL outputs erroneous edges (i.e. edges between $V_1, V_4$ and between $V_2, V_4$) due to confounders. Next, we run MDSS on the pooling of all datasets, with the result shown in Figure 4(d). MDSS correctly detects the variables with changing causal modules, and it also recovers all the directions between those 4 variables correctly." }, { "heading": "A.10 GROUND-TRUTH CAUSAL GRAPH FOR REAL DATA", "text": "" } ]
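A sketch of the data-generating process used in the toy example of Section A.9 above. The exact edges of Figure 4(a) are not recoverable from the text, so the edge list passed below is a placeholder; the generator also assumes every parent index is smaller than its child index so variables can be filled in topological order, and the scale of the b_i shifts is our choice.

```python
import numpy as np

def gen_toy_datasets(edges, n_domains=10, n_vars=4, n_samples=500, seed=0):
    """Linear SEM V_i = sum_{j in PA_i} V_j + b_i + eps_i with unit weights and
    standard Gaussian noise; b_i varies across domains except b_3 (V_3 stationary)."""
    rng = np.random.default_rng(seed)
    datasets = []
    for _ in range(n_domains):
        b = rng.normal(0.0, 2.0, n_vars)   # domain-specific shifts (scale is our choice)
        b[2] = 0.0                         # b_3 held invariant across all domains
        V = np.zeros((n_samples, n_vars))
        for i in range(n_vars):            # requires parent index < child index
            parents = [p for (p, c) in edges if c == i]
            V[:, i] = V[:, parents].sum(axis=1) + b[i] + rng.normal(0.0, 1.0, n_samples)
        datasets.append(V)
    return datasets

toy = gen_toy_datasets(edges=[(0, 1), (1, 3), (2, 3)])  # placeholder graph, not Figure 4(a)
```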
2020
null
SP:37e441bbd53413fb7ae61d146145795c481a2bf0
[ "The paper introduces an algorithm for the case where actions have delayed effects in RL, and specifically in the case where the delay is random. A resampling approach is applied to off policy buffered data in order to align it with the current policy and this approach is integrated into a SAC architecture, creating the new DCAC algorithm. Empirical results in constant-delay and random-delay environments show the algorithm outperforming baselines." ]
Action and observation delays commonly occur in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments, and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic with significantly better performance in environments with delays. This is shown theoretically and also demonstrated practically on a delay-augmented version of the MuJoCo continuous control benchmark.
[ { "affiliations": [], "name": "RANDOM DELAYS" }, { "affiliations": [], "name": "Yann Bouteiller" }, { "affiliations": [], "name": "Simon Ramstedt" }, { "affiliations": [], "name": "Giovanni Beltrame" } ]
[ { "authors": [ "Vlad Firoiu", "Tina Ju", "Joshua B. Tenenbaum" ], "title": "At human speed: Deep reinforcement learning with action", "venue": "delay. CoRR,", "year": 2018 }, { "authors": [ "Florian Fuchs", "Yunlong Song", "Elia Kaufmann", "Davide Scaramuzza", "Peter Duerr" ], "title": "Super-human performance in gran turismo sport using deep reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Yuan Ge", "Qigong Chen", "Ming Jiang", "Yiqing Huang" ], "title": "Modeling of random delays in networked control systems", "venue": "Journal of Control Science and Engineering,", "year": 2013 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Todd Hester", "Peter Stone" ], "title": "Texplore: real-time sample-efficient reinforcement learning for robots", "venue": "Machine learning,", "year": 2013 }, { "authors": [ "Jemin Hwangbo", "Inkyu Sa", "Roland Siegwart", "Marco Hutter" ], "title": "Control of a quadrotor with reinforcement learning", "venue": "IEEE Robotics and Automation Letters,", "year": 2017 }, { "authors": [ "Konstantinos V Katsikopoulos", "Sascha E Engelbrecht" ], "title": "Markov decision processes with delays and asynchronous cost collection", "venue": "IEEE transactions on automatic control,", "year": 2003 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "A. Rupam Mahmood", "Dmytro Korenkevych", "Brent J. Komer", "James Bergstra" ], "title": "Setting up a reinforcement learning task with a real-world robot, 2018", "venue": null, "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Remi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Johan Nilsson", "Bo Bernhardsson", "Björn Wittenmark" ], "title": "Stochastic analysis and control of real-time systems with random time", "venue": "delays. Automatica,", "year": 1998 }, { "authors": [ "Simon Ramstedt", "Christopher Pal" ], "title": "Real-time reinforcement learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "L. Santinelli", "F. Guet", "J. 
Morio" ], "title": "Revising measurement-based probabilistic timing analysis", "venue": "IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS),", "year": 2017 }, { "authors": [ "Erik Schuitema", "Lucian Busoniu", "Robert Babuska", "Pieter Jonker" ], "title": "Control delay in reinforcement learning for real-time dynamic systems: A memoryless approach", "venue": "In International Conference on Intelligent Robots and Systems,", "year": 2010 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control. in 2012 ieee", "venue": "In RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "arXiv preprint arXiv:1509.06461,", "year": 2015 }, { "authors": [ "Hado P van Hasselt", "Arthur Guez", "Matteo Hessel", "Volodymyr Mnih", "David Silver" ], "title": "Learning values across many orders of magnitude", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Thomas J. Walsh", "Ali Nouri", "Lihong Li", "Michael L. Littman" ], "title": "Learning and planning in environments with delayed feedback", "venue": "Autonomous Agents and Multi-Agent Systems,", "year": 2008 }, { "authors": [ "Ted Xiao", "Eric Jang", "Dmitry Kalashnikov", "Sergey Levine", "Julian Ibarz", "Karol Hausman", "Alexander Herzog" ], "title": "Thinking while moving: Deep reinforcement learning with concurrent control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Haarnoja" ], "title": "The only difference from the DCAC model is that the SAC critic tracks q(x), and not v(x). Indeed, differently from usual actor-critic algorithms, the output of DCAC’s critic approximates the state-value v(x) (instead of the action-value q(x)), as it is sufficient to optimize the actor loss described in Definition 7. Weights and biases are initialized with the default Pytorch initializer. Both the actor and the critic are optimized by gradient descent with the Adam optimizer", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nThis article is concerned with the Reinforcement Learning (RL) scenario depicted in Figure 1, which is commonly encountered in real-world applications (Mahmood et al., 2018; Fuchs et al., 2020; Hwangbo et al., 2017). Oftentimes, actions generated by the agent are not immediately applied in the environment, and observations do not immediately reach the agent. Such environments have mainly been studied under the unrealistic assumption of constant delays (Nilsson et al., 1998; Ge et al., 2013; Mahmood et al., 2018). Here, prior work has proposed different planning algorithms which naively try to undelay the environment by simulating future observations (Walsh et al., 2008; Schuitema et al., 2010; Firoiu et al., 2018).\nWe propose an off-policy, planning-free approach that enables lowbias and low-variance multi-step value estimation in environments with random delays. First, we study the anatomy of such environments in order to exploit their structure, defining Random-Delay Markov Decision Processes (RDMDP). Then, we show how to trans-\nform trajectory fragments collected under one policy into trajectory fragments distributed according to another policy. We demonstrate this principle by deriving a novel off-policy algorithm (DCAC) based on Soft Actor-Critic (SAC), and exhibiting greatly improved performance in delayed environments. Along with this work we release our code, including a wrapper that conveniently augments any OpenAI gym environment with custom delays." }, { "heading": "2 DELAYED ENVIRONMENTS", "text": "We frame the general setting of real-world Reinforcement Learning in terms of an agent, random observation delays, random action delays, and an undelayed environment. At the beginning of each time-step, the agent starts computing a new action from the most recent available delayed observation. Meanwhile, a new observation is sent and the most recent delayed action is applied in the undelayed environment. Real-valued delays are rounded up to the next integer time-step.\n∗equal contribution\nFor a given delayed observation st, the observation delay ωt refers to the number of elapsed time-steps from when st finishes being captured to when it starts being used to compute a new action. The action delay αt refers to the number of elapsed time-steps from when the last action influencing st starts being computed to one time-step before st finishes being captured.\nWe further refer to ωt + αt as the total delay of st.\nAs a motivating illustration of real-world delayed setting, we have collected a dataset of communication delays between a decisionmaking computer and a flying robot over WiFi, summarized in Figure 2. In the presence of such delays, the naive approach is to simply use the last received observation. In this case, any delay longer than one time-step violates the Markov assumption, since the last sent action becomes an unobserved part of the current state of the environment. To overcome this issue, we define a Markov Decision Process that takes into account the communication dynamics." }, { "heading": "2.1 RANDOM DELAY MARKOV DECISION PROCESSES", "text": "To ensure the Markov property in delayed settings, it is necessary to augment the delayed observation with at least the last K sent actions. K is the combined maximum possible observation and action delay. 
This is required as the oldest actions, along with the delayed observation, describe the current state of the undelayed environment, whereas the most recent actions are yet to be applied (see Appendix C). Using this augmentation suffices to ensure that the Markov property is met in certain delayed environments. On the other hand, it is possible to do much better when the delays themselves are also part of the state-space. First, this allows us to model self-correlated delays, e.g. discarding outdated actions and observations (see Appendix A.1). Second, this provides useful information to the model about how old an observation is and which actions have been applied since. Third, knowledge of the total delay allows for efficient credit assignment and off-policy partial trajectory resampling, as we show in this work.\nDefinition 1. A Random Delay Markov Decision Process $RDMDP(E, p_\omega, p_\alpha) = (X, A, \tilde\mu, \tilde p)$ augments a Markov Decision Process $E = (S, A, \mu, p)$ with: (1) state-space $X = S \times A^K \times \mathbb{N}^2$, (2) action-space $A$, (3) initial state distribution $\tilde\mu(x_0) = \tilde\mu(s, u, \omega, \alpha) = \mu(s)\,\delta(u - c_u)\,\delta(\omega - c_\omega)\,\delta(\alpha - c_\alpha)$, (4) transition distribution\n$$\tilde p(s', u', \omega', \alpha', r' \mid s, u, \omega, \alpha, a) = f_{\omega-\omega'}(s', \alpha', r' \mid s, u, \omega, \alpha, a)\, p_\omega(\omega' \mid \omega)\, p_u(u' \mid u, a),$$\nwhere $s \in S$ is the delayed observation, $u \in A^K$ is a buffer of the last K sent actions, $\omega \in \mathbb{N}$ is the observation delay, and $\alpha \in \mathbb{N}$ is the action delay as defined above. To avoid conflicting with the subscript notation, we index the action buffers’ elements using square brackets. Here, $u[1]$ is the most recent and $u[K]$ is the oldest action in the buffer. We denote slices by $u[i:j] = (u[i], \ldots, u[j])$ and $u[i:-j] = (u[i], \ldots, u[K-j])$. We slightly override this notation and additionally define $u[0] = a$.\nThe constants $c_u \in A^K$ and $c_\omega, c_\alpha \in \mathbb{N}$ initialize $u, \omega, \alpha$, where $\delta$ is the Dirac delta distribution. The transition distribution itself is composed of three parts: (1) The observation delay distribution $p_\omega$, modelling the evolution of observation delays. Note that this density function must represent a discrete distribution (i.e. be a weighted sum of Dirac delta distributions). Furthermore, this process will repeat observations if there are no new ones available. This means that the observation delay can maximally grow by one from one time-step to the next. (2) The transition distribution for the action buffer, $p_u(u' \mid u, a) = \delta(u' - (a, u[1:-1]))$. (3) The distribution $f_\Delta$ describing the evolution of observations, rewards and action delays (Definition 2).\nDefinition 2. For each change in observation delays ($\Delta = \omega - \omega'$) we define a variable step update distribution $f_\Delta$ as\n$$f_\Delta(s', \alpha', r' \mid s, u, \omega, \alpha, a) = \mathbb{E}_{s^*, \alpha^*, r^* \sim f_{\Delta-1}(\cdot \mid s, u, \omega, \alpha, a)}\big[\, p\big(s', r' - r^* \mid s^*, u[\overbrace{\omega - \Delta}^{\omega'} + \alpha']\big)\, p_\alpha(\alpha' \mid \alpha^*) \,\big]. \quad (1)$$\nThe base case of the recursion is $f_{-1}(s', \alpha', r' \mid s, u, \omega, \alpha, a) = \delta(s' - s)\, \delta(\alpha' - \alpha)\, \delta(r')$.\nHere, $p_\alpha$ is the action delay distribution which, similar to $p_\omega$, must be discrete. The transition distribution of the underlying, undelayed MDP is $p$. The $r' - r^*$ term accumulates intermediate rewards in case observations are skipped or repeated (see Appendix A.4). Since the observation delay cannot increment by more than one, $f_{-1}$ is used when $\omega$ is increasing, whereas $f_0$ is used when there is no change in observation delay.\nA simple special case of the RDMDP is the constant observation and action delay case with $p_\omega(\omega' \mid \omega) = \delta(\omega' - c_\omega)$ and $p_\alpha(\alpha' \mid \alpha) = \delta(\alpha' - c_\alpha)$. Here, the RDMDP reduces to a Constantly Delayed Markov Decision Process, described by Walsh et al. (2008). 
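To make Definition 1 concrete, the toy wrapper below mimics the action-buffer update u' = (a, u[1:−1]) and the constraint that ω can grow by at most one per step, on top of a classic gym-style step function. It is only an illustration with several simplifying assumptions: delays are sampled i.i.d. uniform here (the paper allows arbitrary discrete, possibly self-correlated p_ω and p_α), the buffer index is clipped, and the fresh observation is returned directly instead of being queued, so true observation delaying is not simulated. This is not the released gym wrapper.

```python
import random
from collections import deque

class ToyRandomDelayEnv:
    """Assumption-laden sketch of the RDMDP state augmentation of Definition 1."""

    def __init__(self, env, K=3, null_action=0.0):
        self.env, self.K = env, K
        self.u = deque([null_action] * K, maxlen=K)  # u[0] newest ... u[K-1] oldest
        self.s, self.omega, self.alpha = env.reset(), 0, 1  # classic gym API assumed

    def step(self, a):
        self.u.appendleft(a)                               # u' = (a, u[1:-1])
        # the observation delay can grow by at most one per time-step
        self.omega = min(self.omega + 1, random.randint(0, self.K - 1))
        self.alpha = random.randint(1, self.K - 1)
        idx = min(self.omega + self.alpha, self.K - 1)     # total delay, clipped (simplification)
        s, r, done, info = self.env.step(list(self.u)[idx])  # a delayed action reaches the env
        self.s = s
        return (s, list(self.u), self.omega, self.alpha), r, done, info
```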
In this case, the action and observation delays $\alpha, \omega$ can be removed from the state-space as they don’t carry information. Examples of RDMDP dynamics are visualized in Figure 3 (see also Appendix C)." }, { "heading": "3 REINFORCEMENT LEARNING IN DELAYED ENVIRONMENTS", "text": "Delayed environments as described in Section 2 are specific types of MDP, with an augmented state-space and delayed dynamics. Therefore, using this augmented state-space, traditional algorithms such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018a;b) will always work in randomly delayed settings. However, their performance will still deteriorate because of the more difficult credit assignment caused by delayed observations and rewards, on top of the exploration and generalization burdens of delayed environments. We now analyze how to compensate for the credit assignment difficulty by leveraging our knowledge about the delays’ dynamics.\nOne solution is to perform on-policy multi-step rollouts on sub-trajectories that are longer than the considered delays. On the other hand, on-policy algorithms are known to be sample-inefficient and therefore are not commonly used in real-world applications, where data collection is costly. This motivates the development of off-policy algorithms able to reuse old samples, such as SAC.\nIntuitively, in delayed environments, one should take advantage of the fact that actions only influence observations and rewards after a number of time-steps relative to the beginning of their computation (the total delay $\omega + \alpha$). Since the delay information is part of the state-space, it can be leveraged to track the action influence through time. However, applying conventional off-policy algorithms in delayed settings leads to the following issue: the trajectories used to perform the aforementioned multi-step backups have been sampled under an outdated policy, and therefore contain outdated action buffers. In this section, we propose a method to tackle this issue by performing partial trajectory resampling. We make use of the fact that the delayed dynamics are known to simulate the effect they would have had under the current policy, effectively transforming off-policy sub-trajectories into on-policy sub-trajectories. This enables us to derive a family of efficient off-policy algorithms for randomly delayed settings." }, { "heading": "3.1 PARTIAL TRAJECTORY RESAMPLING IN DELAYED ENVIRONMENTS", "text": "One important observation implied by Figure 3 is that, given the delayed dynamics of RDMDPs, some actions contained in the action buffer of an off-policy state did not influence the subsequent delayed observations and rewards for a number of time-steps. Therefore, if an off-policy sub-trajectory is short enough, it is possible to recursively resample its action buffers with no influence on the return. We propose the following transformation of off-policy sub-trajectories:\nDefinition 3. The partial trajectory resampling operator recursively updates action buffers as follows:\n$$\sigma^\pi_n(\underbrace{s^*_1, u^*_1, \omega^*_1, \alpha^*_1}_{x^*_1}, r^*_1, \tau^*_{n-1} \mid x^*_0;\, \underbrace{s_1, u_1, \omega_1, \alpha_1}_{x_1}, r_1, \tau_{n-1}) = \delta\big((s^*_1, \omega^*_1, \alpha^*_1, r^*_1) - (s_1, \omega_1, \alpha_1, r_1)\big)\; \mathbb{E}_{a_0 \sim \pi(\cdot \mid x^*_0)}\big[\delta\big(u^*_1 - (a_0, u^*_0[1:-1])\big)\big]\; \sigma^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1; \tau_{n-1}), \quad (2)$$\nwith trivial base case $\sigma^\pi_0(x^*_0) = 1$.\nThis operator recursively resamples the most recent actions of each action buffer in an input sub-trajectory $\tau_n$, according to a new policy $\pi$. Everything else stays unchanged. A visual example is provided in Figure 4 with n = 2 and an action buffer of two actions. 
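A direct transcription of the recursion in Definition 3: only the action buffers are rebuilt under the new policy, while observations, delays and rewards are copied unchanged. The tuple layout and the policy callable are our own notation, not the paper's.

```python
def resample_partial(traj, x0, policy):
    """Partial trajectory resampling sigma^pi_n (Definition 3).
    traj: list of (s, u, omega, alpha, r) transitions; x0 = (s0, u0, omega0, alpha0);
    policy(x) samples an action for augmented state x."""
    out, x_prev = [], (x0[0], list(x0[1]), x0[2], x0[3])
    for (s, u, omega, alpha, r) in traj:
        a = policy(x_prev)                        # a_{t-1} ~ pi(. | x*_{t-1})
        u_new = [a] + list(x_prev[1])[:-1]        # u*_t = (a, u*_{t-1}[1:-1])
        out.append((s, u_new, omega, alpha, r))   # s, omega, alpha, r kept as stored
        x_prev = (s, u_new, omega, alpha)
    return out
```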
When resampled actions are delayed and would not affect the environment, they do not \"invalidate\" the sub-trajectory. The resampled trajectories can then be considered on-policy.\nTheorem 1. The partial trajectory resampling operator $\sigma^\pi_n$ (Def. 3) transforms off-policy trajectories into on-policy trajectories,\n$$\mathbb{E}_{\tau_n \sim p^\mu_n(\cdot \mid x_0)}\big[\sigma^\pi_n(\tau^*_n \mid x_0; \tau_n)\big] = p^\pi_n(\tau^*_n \mid x_0), \quad (3)$$\non the condition that none of the delayed observations depend on any of the resampled actions, i.e.\n$$\omega^*_t + \alpha^*_t \ge t, \quad (4)$$\nwhere t indexes the trajectory $\tau^*_n = (s^*_1, u^*_1, \omega^*_1, \alpha^*_1, r^*_1, \ldots, s^*_n, u^*_n, \omega^*_n, \alpha^*_n, r^*_n)$ from 1 to n.\nThe condition in Equation 4 can be understood visually with the help of Figure 3. In the constant delay example it is fulfilled until the third time-step. After that, the observations would have been influenced by the resampled actions (starting with $a_0$)." }, { "heading": "3.2 MULTI-STEP OFF-POLICY VALUE ESTIMATION IN DELAYED ENVIRONMENTS", "text": "We have shown in Section 3.1 how it is possible to transform off-policy sub-trajectories into on-policy sub-trajectories in the presence of random delays. From this, we can derive a family of efficient off-policy algorithms for the randomly delayed setting. For this matter, we make use of the classic on-policy Monte-Carlo n-step value estimator:\nDefinition 4. The n-step state-value estimator is defined as\n$$\hat v_n(x_0;\, \underbrace{x^*_1, r^*_1, \tau^*_{n-1}}_{\tau^*_n}) = r^*_1 + \gamma\, \hat v_{n-1}(x^*_1; \tau^*_{n-1}) = \sum_{i=1}^{n} \gamma^{i-1} r^*_i + \gamma^n\, \hat v_0(x^*_n), \quad (5)$$\nwhere $\hat v_0$ is a state-value function approximator (e.g. a neural network).\nIndeed, in $\gamma$-discounted RL, performing on-policy n-step rollouts to estimate the value function reduces the bias introduced by the function approximator by a factor of $\gamma^n$:\nLemma 1. The n-step value estimator has the following bias:\n$$\mathrm{bias}(\hat v_n(x_0, \cdot)) = \gamma^n\, \mathbb{E}_{\ldots, x^*_n, r^*_n \sim p^\pi_n(\cdot \mid x_0)}\big[\mathrm{bias}(\hat v_0(x^*_n))\big] \quad (6)$$\nA simple corollary of Lemma 1 is that the on-policy n-step value estimator is unbiased when the function approximator $\hat v_0$ is unbiased. On the other hand, Theorem 1 provides a recipe for transforming sub-trajectories collected under old policies into actual on-policy sub-trajectories. From a given state in an off-policy trajectory, this is done by applying $\sigma^\pi_n$ to all the subsequent transitions until we meet a total delay ($\omega_i + \alpha_i$) that is shorter than the length of the formed sub-trajectory. Consequently, the transformed sub-trajectory can be fed to the on-policy n-step value estimator, where n is the length of this sub-trajectory. This not only provides a better value estimate than the usual 1-step off-policy estimators according to Lemma 1, but it also maximally compensates for the multi-step credit assignment difficulty introduced by random delays. Indeed, the length of the transformed sub-trajectory is then exactly the number of time-steps it took the first action of the sub-trajectory to have an influence on subsequent delayed observations, minus one time-step.\nAs opposed to other unbiased n-step off-policy methods, such as importance sampling and Retrace (Munos et al., 2016), this method doesn’t suffer from variance explosion. This is because the presence of delays allows us to transform off-policy sub-trajectories into on-policy sub-trajectories, so that old samples don’t need to be weighted by the policy ratio.\nAlthough we use a multi-step state-value estimator, the same principles can be applied to action-value estimation as well. 
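Putting Theorem 1 and Definition 4 together, a value target can be formed by truncating the stored fragment at the largest n satisfying Equation (4), resampling it with the operator above, and then computing the Monte-Carlo n-step estimate. A sketch with our own function names:

```python
def valid_horizon(traj, n_max):
    """Largest n such that omega_t + alpha_t >= t for all t <= n (Eq. 4), t indexed from 1."""
    n = 0
    for t, (s, u, omega, alpha, r) in enumerate(traj[:n_max], start=1):
        if omega + alpha < t:
            break
        n = t
    return n

def n_step_value(x0, resampled, v0, gamma):
    """Monte-Carlo n-step state-value estimate of Definition 4;
    v0 is the parametric state-value approximator."""
    ret, disc = 0.0, 1.0
    for (s, u, omega, alpha, r) in resampled:
        ret += disc * r          # sum of gamma^{i-1} r*_i
        disc *= gamma
    x_n = resampled[-1][:4] if resampled else x0
    return ret + disc * v0(x_n)  # gamma^n bootstrap at x*_n

# pipeline of Figure 5: traj = traj[:valid_horizon(traj, len(traj))]; resample; then estimate
```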
In fact, the trajectory transformation described in Definition 3 enables efficient off-policy n-step value estimation in any value-based algorithm that would otherwise perform 1-step action-value backups, such as DQN, DDPG or SAC. In the next section, we illustrate this using SAC.\nFigure 5 summarizes the whole procedure in a simple 1D-world example. The maximum possible delay is K = 3 here, and the agent can only go ‘left’ or ‘right’. An initial augmented state $x_0$ is sampled from the replay memory, along with the 3 subsequent augmented states and rewards. The condition of Theorem 1 is satisfied for $n \le 2$. It follows that $\tau_n = \tau_2 = (x_1, x_2)$. This off-policy trajectory fragment is partially resampled, which yields the corresponding on-policy trajectory fragment $\tau^*_n = \tau^*_2$. This on-policy trajectory fragment can then be used to compute an unbiased n-step value estimate of the initial state $x_0 = x^*_0$." }, { "heading": "4 DELAY-CORRECTING ACTOR-CRITIC", "text": "We have seen in Section 3 how it is possible, in the delayed setting, to collect off-policy trajectories and still use on-policy multi-step estimators in an unbiased way, which allows us to compensate for the more difficult credit assignment introduced by the presence of random delays. We now apply this method to derive Delay-Correcting Actor-Critic (DCAC), an improved version of Soft Actor-Critic (Haarnoja et al., 2018a;b) for real-time randomly delayed settings." }, { "heading": "4.1 VALUE APPROXIMATION", "text": "Like SAC, DCAC makes use of the entropy-augmented soft value function (Haarnoja et al., 2018a):\nLemma 2. In a $RDMDP(E, p_\omega, p_\alpha)$, the soft value function is\n$$v^{soft}(x^*_0) = \mathbb{E}_{a \sim \pi(\cdot \mid x^*_0)}\big[\mathbb{E}_{x^*_1, r^*_1 \sim \tilde p(\cdot \mid x^*_0, a)}[r^*_1 + \gamma\, v^{soft}(x^*_1)] - \log \pi(a \mid x^*_0)\big]. \quad (7)$$\nIt can be estimated by augmenting the reward function in Definition 4 with an entropy reward:\nDefinition 5. The delayed on-policy n-step soft state-value estimator, i.e. the n-step state-value estimator with entropy-augmented rewards under the current policy $\pi$, is\n$$\hat v^{soft}_n(x^*_0; \tau^*_n) = r^*_1 + \gamma\, \hat v^{soft}_{n-1}(x^*_1; \tau^*_{n-1}) - \mathbb{E}_{a \sim \pi(\cdot \mid x^*_0)}[\log \pi(a \mid x^*_0)], \quad (8)$$\nwhere $\hat v^{soft}_0$ is a state-value function approximator (e.g. a neural network).\nGiven the off-policy trajectory transformation proposed in Section 3, Definition 5 directly gives DCAC’s value target. To recap, we sample an initial state $x_0$ ($= x^*_0$) and a subsequent trajectory $\tau_n$ ($= x_1, r_1, \ldots, x_n, r_n$) from a replay memory. The sampling procedure ensures that n is the greatest length such that the sampled trajectory $\tau_n$ does not contain any total delay $\omega_i + \alpha_i < i$. This trajectory was collected under an old policy $\mu$, but we need a trajectory compatible with the current policy $\pi$ to use $\hat v^{soft}_n$ in an unbiased way. Therefore, we feed $\tau_n$ to the partial trajectory resampling operator defined in Definition 3. This produces an equivalent on-policy sub-trajectory $\tau^*_n$ with respect to the current policy $\pi$ according to Theorem 1, while maximally taking advantage of the bias reduction described by Lemma 1. This partially resampled on-policy sub-trajectory is fed as input to $\hat v^{soft}_n(x_0; \tau^*_n)$, which yields the target used in DCAC’s soft state-value loss:\nDefinition 6. The DCAC critic loss is\n$$L^{DCAC}_v(v) = \mathbb{E}_{(x_0, \tau_n) \sim D}\; \mathbb{E}_{\tau^*_n \sim \sigma^\pi_n(\cdot \mid x_0; \tau_n)}\big[(v_\theta(x_0) - \hat v^{soft}_n(x_0; \tau^*_n))^2\big], \quad (9)$$\nwhere $x_0, \tau_n$ are a start state and following trajectory, sampled from the replay memory, and satisfying the condition of Theorem 1."
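A PyTorch sketch of the resulting critic update. The target value network is a stabilization choice mentioned in Appendix B.2 rather than part of Definition 6 itself, and policy_logp together with the transition layout are our own notation. The actor loss of Definition 7 (next section) reuses the same estimator without the no-grad block, letting gradients flow through the reparameterized actions.

```python
import torch

def dcac_critic_loss(v, v_targ, policy_logp, x0, resampled, gamma):
    """Critic loss of Definition 6 on one resampled on-policy fragment.
    resampled: list of (reward, next_augmented_state); policy_logp(x) returns
    log pi(a|x) for a fresh reparameterized sample a ~ pi(.|x)."""
    with torch.no_grad():                     # the n-step soft target carries no gradient
        target, disc, x = torch.zeros(()), 1.0, x0
        for (r, x_next) in resampled:
            target = target + disc * (r - policy_logp(x))   # reward + entropy bonus (Def. 5)
            disc, x = disc * gamma, x_next
        target = target + disc * v_targ(x)    # bootstrap with the target state-value network
    return (v(x0) - target).pow(2).mean()
```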
}, { "heading": "4.2 POLICY IMPROVEMENT", "text": "In addition to using the on-policy n-step value estimator as target for our parametric value estimator, we can also use it for policy improvement. As in SAC we use the reparameterization trick (Kingma & Welling, 2013) to obtain the policy gradient from the value estimator. However, since we use our trajectory transformation and a multi-step value estimator, this involves backpropagation through time in the action buffer.\nDefinition 7. The DCAC actor loss is\nLDCACπ (π) = −E(x0,τn)∼D Eτ∗n∼σπn(·|x0;τn)[v̂ soft n (x0; τ ∗ n)] (10)\nwhere x0, τn are a start state and following trajectory, sampled from the replay memory, and satisfying the condition of Theorem 1.\nProposition 1. The DCAC actor loss is a less biased version of the SAC actor loss with\nbias(LDCACπ ) = En[γn] bias(LSACπ ) (11)\nassuming both are using similarly biased parametric value estimators to compute the loss, i.e.\nbias(v̂soft0 (x)) = Ea∼π(·|x)[bias(q̂soft0 (x, a))] (12)" }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "To evaluate our approach and make future work in this direction easy for the RL community, we release as open-source, along with our code, a Gym wrapper that introduces custom multi-step delays in any classical turn-based Gym environment. In particular, this enables us to introduce random delays to the Gym MuJoCo continuous control suite (Brockman et al., 2016; Todorov et al.), which is otherwise turn-based.\nCompared algorithms. A naive version of SAC would only use the unaugmented delayed observations, which violates the Markov assumption in delayed settings as previously pointed out. Consequently, naive SAC exhibits near-random results in delayed environments. A few such experiments are provided in the Appendix for illustration (Figure 9).\nIn order to make a fair comparison, all other experiments compare DCAC against SAC in the same RDMDP setting, i.e. all algorithms use the augmented observation space defined in Section 2.1. Since SAC is the algorithm we chose to improve for delayed scenarios, comparing DCAC against it in the same setting provides a like-for-like comparison. We also found it interesting to compare against RTAC (Ramstedt & Pal, 2019). Indeed, DCAC reduces to this algorithm in the special case where observation transmission is instantaneous (ω=0) and action computation and transmission constantly takes one time-step (α=1). Whereas DCAC performs variable-length state-value backups with partial trajectory resampling as explained in Section 4 , RTAC performs 1-step state-value backups, and SAC performs the usual 1-step action-value backup described in its second version (Haarnoja et al., 2018b). All hyperparameters and implementation details are provided in Section B of the Appendix.\nFor each experiment, we perform six runs with different seeds, and shade the 90% confidence intervals.\nConstant delays. Our first batch of experiments features simple, constantly delayed scenarios. Figure 6 displays the results of the most difficult of these experiments (i.e. where the delays are longest), while the others are provided in Section D.2 of the Appendix. The advantage of using DCAC is obvious in the presence of long constant delays. Note that DCAC reduces to the RTAC (Ramstedt & Pal, 2019) algorithm when ω = 0 and α = 1 and behaves as an evolved form of RTAC in the presence of longer constant delays.\nReal-world random delays. Our second batch of experiments features random delays of different magnitudes. 
The experiment we chose to present in Figure 7 is motivated by the fact that our approach is designed for real-world applications. Importantly, it provides an example of how to implement DCAC in practice (see Appendices A and B for more details). We sample the communication delays for actions and observations from our real-world WiFi dataset, presented in Figure 2. When action or observation communications supersede previous communications, only the most recently produced information is kept. In other words, when an action is received in the undelayed environment, its age is compared to the action that is currently being applied. Then, the one that the agent most recently started to produce is applied. Similarly, when the agent receives a new observation, it only keeps the one that was most recently captured in the undelayed environment (see the right-hand side of Figure 3 for a visual example). We discretize the communication delays by using a time-step of 20ms. Importantly, note that Figure 2 has been cropped to 60ms, but the actual dataset contains outliers that can go as far as 1s. However, long delays (longer than 80ms in our example) are almost always superseded and discarded. Therefore, when such information is received, we clip the corresponding delay with no visible impact on performance: in practice, the maximum acceptable delays are design choices, and can be guided by existing probabilistic timing methods (Santinelli et al., 2017)." }, { "heading": "6 RELATED WORK", "text": "We trace our line of research back to Katsikopoulos & Engelbrecht (2003), who provided the first discussion about Delayed Markov Decision Processes. In particular, they were interested in asynchronous rewards, which provides interesting insights in relation to Appendix A.4. Walsh et al. (2008) later re-introduced the notion of “Constantly Delayed Markov Decision Process”. While recent advances in deep learning enable implementations of what the authors call an “augmented approach”, this was considered intractable at the time because the size of the action buffer grows with the considered delay length. Instead, they studied the case where observations are retrieved with a constant delay and developed a model-based algorithm to predict the current state of the environment. Similarly, Schuitema et al. (2010) developed “memory-less” approaches based on SARSA and vanilla Q-learning, taking advantage of prior knowledge about the duration of a constant control delay. Hester & Stone (2013) adopted the action buffer-augmented approach to handle random delays, and relied on a decision-tree algorithm to perform credit assignment implicitly. By comparison, our approach relies on delay measurements to perform credit assignment explicitly. More recently, Firoiu et al. (2018) introduced constant action delays to a video game to train agents whose reaction time compares to that of humans. Similar to previous work, the authors used a state-predictive model, but based on a recurrent neural network architecture. Ramstedt & Pal (2019) formalized the framework of Real-Time Reinforcement Learning (RTRL) that we generalize here to all forms of real-time delays. Initially designed to cope with the fact that inference is not instantaneous in real-world control, the RTRL setting is equivalent to a constantly delayed MDP with $\alpha = 1$ and $\omega = 0$. Finally, Xiao et al. (2020) adopted an alternative approach by considering the influence of the action selection time when action selection is performed within the duration of a larger time-step. 
However, their framework only allows delays smaller than one time-step, whereas large time-steps are not compatible with high-frequency control." }, { "heading": "7 CONCLUSION AND FUTURE WORK", "text": "We proposed a deep off-policy and planning-free approach that explicitly tackles the credit assignment difficulty introduced by real-world random delays. This is done by taking advantage of delay measurements in order to generate actual on-policy sub-trajectories from off-policy samples. In addition, we provide a theoretical analysis that can easily be reused to derive a wide family of algorithms such as DCAC, whereas previous work mostly dealt with finding approximate ways of modelling the state-space in constantly delayed environments. The action buffer is fundamentally required to define a Markovian state-space for RDMDPs , but it is of course possible to observe this action buffer approximately, e.g. by compressing it in the hidden state of an RNN, which is complementary to our work.\nWe have designed our approach with real-world applications in mind, and it is easily scalable to a wide variety of scenarios. For practical implementation, see Section 5 and Sections A and B of the Appendix. See also rtgym, a small python helper that we use in future work to easily implement delayed environments in the real world.\nTo the best of our knowledge, DCAC is the first deep actor-critic approach to exhibit such strong performance on both randomly and constantly delayed settings, as it makes use of the partially known dynamics of the environment to compensate for difficult credit assignment. We believe that our model can be further improved by making use of the fact that our critic estimates the state-value instead of the action-value function. Indeed, in this setting, Ramstedt & Pal (2019) showed that it is possible to simplify the model by merging the actor and the critic networks using the PopArt output normalization (van Hasselt et al., 2016), which we did not try yet and leave for future work.\nOur approach handles and adapts to arbitrary choices of time-step duration, although in practice time-steps smaller than the upper bound of the inference time will require a few tricks. We believe that this approach is close to time-step agnostic RL and will investigate this direction in future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Pierre-Yves Lajoie, Yoshua Bengio and our anonymous reviewers for their constructive feedback, which greatly helped us improve the article. We also thank ElementAI and Compute Canada for providing the computational resources we used to run our experiments." }, { "heading": "A PRACTICAL CONSIDERATIONS AND SCALABILITY", "text": "" }, { "heading": "A.1 SELF-CORRELATED DELAYS", "text": "The separation between ω and α allows auto-correlated conditional distributions on both delays. This is necessary to allow superseded actions and observations to be discarded. In RDMDPs , the agent keeps the delayed observation that was most recently captured in the undelayed environment. Ideally, it is also ensured by the undelayed environment that the applied action is the action that most recently started being computed by the agent. In practice, this can be ensured by augmenting the actions with timestamps corresponding to the beginning of their computation, and observations with timestamps corresponding to the end of their capture. 
Thus, the undelayed environment and the agent can keep track of the most recently received timestamp and discard outdated incoming information." }, { "heading": "A.2 HOW TO MEASURE DELAYS", "text": "To measure the delays in practice, one possibility is to make use of the aforementioned timestamps. In addition to the augmentations described in A.1, one can augment each observation sent by the undelayed environment with the timestamp of the action that was applied before the end of observation capture. When the agent receives an observation, this observation then contains two timestamps: one that directly corresponds to an action in the buffer (agent’s clock), and one that corresponds to when the observation finished being captured (undelayed environment’s clock). The identified action in the buffer directly gives the total delay. If the agent and the undelayed environment have e.g. synchronized clocks, the current timestamp minus the timestamp corresponding to observation capture gives the observation delay (and thus we can deduce the action delay)." }, { "heading": "A.3 SCALABILITY OF THE ACTION BUFFER", "text": "As seen in our WiFi experiment, the maximum delays are design choices in practice. The actual maximum delays can be prohibitively long (e.g. infinite when packets are lost) and would require a long action buffer to be handled in the worst-case scenario. However, in random-delay scenarios, long delays are likely to be superseded by shorter delays. Therefore, observations reaching the agent with a total delay that exceeds the chosen K value should simply be discarded, and a procedure implemented to handle the unlikely edge-case where more than K such observations are received in a row. Also note that, although we used a simple action buffer in this work, more clever representations are possible in the presence of long delays, e.g. run-length encoding." }, { "heading": "A.4 DELAYED REWARDS", "text": "We have implicitly made a choice when defining the rewards for RDMDPs. Indeed, keep in mind that observations can be dropped (superseded) at the level of the agent. In such cases, we chose to accumulate the rewards corresponding to the lost transitions. When an observation gets repeated because no new observation is available, the corresponding reward is 0, and when a new observation arrives, the corresponding reward contains the sum of the intermediate rewards of the lost transitions.\nIn practice, this is ensured for example by making the assumption that the remote robot (i.e. the undelayed environment) can observe its own instantaneous reward. This allows the robot to compute its cumulative reward and send it to the agent along with the observation. The agent can then compute the difference between the last cumulative reward it received from the remote robot and the new one for each incoming observation (NB: outdated observations are discarded so the agent only sees cumulative rewards with time-increasing timestamps).\nAlternatively, the practitioner can choose to repeat the delayed rewards along with the repeated delayed observations at the level of the agent (this is what we used to do in earlier versions of the paper). When a trick similar to the aforementioned cannot be implemented, this can be done instead, with no impact on our analysis. However, the reward signal will inherently have a higher variance." }, { "heading": "A.5 LONG OBSERVATION CAPTURE", "text": "In practice, it is often the case that observation capture is not instantaneous. 
In such situations, one should increase the size of the action buffer so that it always includes the actions for which it is unclear whether they have influenced the observation yet or not. Indeed, when observation capture is not instantaneous it is not possible to know which undelayed state(s) it describes. The length of the multi-step backup performed by DCAC doesn’t need to be adapted, because it only cares about the first action that is known not to have influenced the delayed observation." }, { "heading": "A.6 COMBINED OBSERVATIONS", "text": "Equivalently, if observations are formed of several combined parts that were captured at different times, the action buffer must be long enough to always include the first action that has not influenced the oldest sub-observation yet (i.e. be as long as the maximum possible combined total delay).\nB IMPLEMENTATION DETAILS" }, { "heading": "B.1 MORE INFORMATION AS INPUT TO THE MODEL", "text": "The action delay $\alpha$ identifies the action that was applied during the previous time-step. It is needed to define RDMDPs and thus is used by DCAC. However, in practice we can include another piece of information on top of $\alpha$: the delay of the action that is going to be applied in the undelayed environment when the captured observation is sent. We use this additional information as input to the model for all tested algorithms." }, { "heading": "B.2 MODEL ARCHITECTURE", "text": "The model we use in all our experiments is composed of two separate multi-layer perceptrons (MLPs): a critic network, and an actor network. Both MLPs are built with the same simple architecture of two hidden layers, 256 units each. The critic outputs a single value, whereas the actor outputs an action distribution with the dimension of the action-space, from which actions are sampled with the reparameterization trick. This architecture is compatible with the second version of SAC described in Haarnoja et al. (2018b). The only difference from the DCAC model is that the SAC critic tracks $q(x)$, and not $v(x)$. Indeed, differently from usual actor-critic algorithms, the output of DCAC’s critic approximates the state-value $v(x)$ (instead of the action-value $q(x)$), as it is sufficient to optimize the actor loss described in Definition 7. Weights and biases are initialized with the default Pytorch initializer. Both the actor and the critic are optimized by gradient descent with the Adam optimizer, on losses $L^{DCAC}_\pi$ (Equation 10) and $L^{DCAC}_v$ (Equation 9), respectively. Classically, we use twin critic networks (Van Hasselt et al., 2015; Fujimoto et al., 2018) with target weight tracking (Mnih et al., 2015) to stabilize training." }, { "heading": "B.3 HYPERPARAMETERS", "text": "Other than our neural network architecture, our implementations of SAC, RTAC and DCAC all share the following hyperparameters:\nNB: the target weights $\bar\theta$ are updated according to the following running average: $\bar\theta \leftarrow \tau\theta + (1 - \tau)\bar\theta$\nC VISUAL EXAMPLES" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "D.1 IMPORTANCE OF THE AUGMENTED OBSERVATION SPACE" }, { "heading": "D.2 CONSTANT DELAYS", "text": "" }, { "heading": "D.3 RANDOM DELAYS", "text": "" }, { "heading": "E DEFINITIONS", "text": "Definition 8. The n-step state-reward distribution for an environment $E = (S, A, \mu, p)$ and a policy $\pi$ is defined as\n$$p^\pi_{n+1}(\underbrace{s', r', \tau_n}_{\tau_{n+1}} \mid s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}[p^\pi_n(\tau_n \mid s')\, p(s', r' \mid s, a)] = \int_A p^\pi_n(\tau_n \mid s')\, p(s', r' \mid s, a)\, \pi(a \mid s)\, da, \quad (13)$$\nwith the base case $p^\pi_0(s) = 1$ and the first iterate $p^\pi_1(s', r' \mid s) = \int_A p(s', r' \mid s, a)\, \pi(a \mid s)\, da$.\nDefinition 9. 
A 1-step action-value estimator is defined as\n$$\hat q_1(s, a;\, s', r') = r' + \gamma\, \mathbb{E}_{a' \sim \pi(\cdot \mid s')}[\hat q_0(s', a')]. \quad (14)$$\nPart of this estimator is usually another parametric estimator $\hat q_0$ (e.g. a neural network trained with stochastic gradient descent)." }, { "heading": "F OTHER MATHEMATICAL RESULTS", "text": "" }, { "heading": "F.1 LEMMA ON STEADY-STATE VALUE ESTIMATION BIAS", "text": "Lemma 3. The expected bias of the n-step value estimator under the steady-state distribution (if it exists) is\n$$\mathbb{E}_{x \sim p^\pi_{ss}}[\mathrm{bias}\, \hat v_n(x)] = \gamma^n\, \mathbb{E}_{x \sim p^\pi_{ss}}[\mathrm{bias}\, \hat v_0(x)] \quad (15)$$\nProof. We remind ourselves that the steady-state distribution observes\n$$p^\pi_{ss}(x_n) = \mathbb{E}_{x_0 \sim p^\pi_{ss}}[p^\pi_n(\ldots, x_n, r_n \mid x_0)]. \quad (16)$$\nAccording to Lemma 1 we then have\n$$\mathbb{E}_{x_0 \sim p^\pi_{ss}}\, \mathrm{bias}(\hat v_n(x_0, \cdot)) = \gamma^n\, \mathbb{E}_{x_0 \sim p^\pi_{ss}}\, \mathbb{E}_{\ldots, x^*_n, r^*_n \sim p^\pi_n(\cdot \mid x_0)}[\mathrm{bias}(\hat v_0(x^*_n))] \quad (17)$$\n$$= \gamma^n\, \mathbb{E}_{x \sim p^\pi_{ss}}[\mathrm{bias}\, \hat v_0(x)]. \quad (18)$$" }, { "heading": "F.2 LEMMA ON A DIRAC DELTA PRODUCT DISTRIBUTION", "text": "Lemma 4. For $p(u, v) = \delta(u - c)\, q(u, v)$, if $q(u, v) < \infty$ for $u = c$, then $p(u, v) = \delta(u - c)\, q(c, v)$.\nProof. If $u = c$ then $p(u, v) = \delta(u - c)\, q(c, v)$; otherwise $p(u, v) = 0 = \delta(u - c)\, q(c, v)$." }, { "heading": "F.3 LEMMA ON F", "text": "Lemma 5. The dynamics described by $f$ depend neither on the input action nor on a range of actions in the action buffer:\n$$f_\Delta(s^*_1, \alpha^*_1, r^*_1 \mid x_0, a^\mu_0) = f_\Delta(s^*_1, \alpha^*_1, r^*_1 \mid x^*_0, a^\pi_0)$$\nwith $x_0 = (s_0, u_0, \omega_0, \alpha_0)$ and $x^*_0 = (s^*_0, u^*_0, \omega^*_0, \alpha^*_0)$, given that $s_0, \omega_0, \alpha_0 = s^*_0, \omega^*_0, \alpha^*_0$ and given\n$$u_0[\omega^*_0 - \delta + \alpha^*_1] = u^*_0[\omega^*_0 - \delta + \alpha^*_1] \ \text{ for all } \ \delta \in \{\Delta, \Delta-1, \ldots, 0\}.$$\nProof. We prove by induction.\nThe base case ($\omega^*_0 - \omega^*_1 = -1$) is trivial since it does not depend on the inputs that differ.\nFor the induction step we have\n$$f_\Delta(s^*_1, \alpha^*_1, r^*_1 \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0) = \mathbb{E}_{\bar s, \bar\alpha, \bar r \sim f_{\Delta-1}(\cdot \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0)}\big[p(s^*_1, r^*_1 - \bar r \mid \bar s, u_0[\omega_0 - \Delta + \alpha^*_1])\, p_\alpha(\alpha^*_1 \mid \bar\alpha)\big] \quad (19)$$\nBecause of our condition on $u_0$ and $u^*_0$ and the fact that $\omega_0 = \omega^*_0$, this is equal to\n$$\mathbb{E}_{\bar s, \bar\alpha, \bar r \sim f_{\Delta-1}(\cdot \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0)}\big[p(s^*_1, r^*_1 - \bar r \mid \bar s, u^*_0[\omega^*_0 - \Delta + \alpha^*_1])\, p_\alpha(\alpha^*_1 \mid \bar\alpha)\big]$$\nWe can now use the induction hypothesis since the conditions on $s_0, u_0, \omega_0, \alpha_0$ are still met when $\Delta \leftarrow \Delta - 1$:\n$$\mathbb{E}_{\bar s, \bar\alpha, \bar r \sim f_{\Delta-1}(\cdot \mid s^*_0, u^*_0, \omega^*_0, \alpha^*_0, a^\pi_0)}\big[p(s^*_1, r^*_1 - \bar r \mid \bar s, u^*_0[\omega^*_0 - \Delta + \alpha^*_1])\, p_\alpha(\alpha^*_1 \mid \bar\alpha)\big] = f_\Delta(s^*_1, \alpha^*_1, r^*_1 \mid x^*_0, a^\pi_0) \quad (20)$$" }, { "heading": "F.4 LEMMA ON PARTIAL RESAMPLING", "text": "Lemma 6. Partially resampling trajectories collected under a policy $\mu$ according to $\sigma^\pi_n$ transforms them into trajectories distributed according to $\pi$:\n$$\mathbb{E}_{\tau_n \sim p^\mu_n(\cdot \mid x_0)}[\sigma^\pi_n(\tau^*_n \mid x^*_0; \tau_n)] = p^\pi_n(\tau^*_n \mid x^*_0)$$\nwith $x_0 = (s_0, u_0, \omega_0, \alpha_0)$ and $x^*_0 = (s^*_0, u^*_0, \omega^*_0, \alpha^*_0)$, on the condition that $s_0, \omega_0, \alpha_0 = s^*_0, \omega^*_0, \alpha^*_0$ and on the condition that the actions in the initial action buffers $u_0$ and $u^*_0$ that are applied in the following trajectory are the same, i.e.\n$$u_0[k : \mathrm{end}] = u^*_0[k : \mathrm{end}] \ \text{ with } \ k = \min_i(\omega^*_{i+1} + \alpha^*_{i+1} - i) \ \text{ for } \ i \in \{0, \ldots, n-1\},$$\nand for the trajectory $\tau^*_n = (s^*_1, u^*_1, \omega^*_1, \alpha^*_1, \ldots, s^*_n, u^*_n, \omega^*_n, \alpha^*_n)$.\nProof. We start with the induction base for $n = 0$. The theorem is trivial in this case since we have 0-length trajectories $()$ and $p^\mu_0(() \mid x_0) = \sigma^\pi_0(() \mid x^*_0; ()) = p^\pi_0(() \mid x^*_0) = 1$. 
For the induction step we start with the left-hand side of the lemma’s main equation:\n$$\mathbb{E}_{\tau_n \sim p^\mu_n(\cdot \mid x_0)}[\sigma^\pi_n(\tau^*_n \mid x^*_0; \tau_n)] = \mathbb{E}_{a^\mu_0 \sim \mu(\cdot \mid x_0)}\big[\mathbb{E}_{x_1, r_1 \sim \tilde p(x_1, r_1 \mid x_0, a^\mu_0)}\big[\mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid x_1)}[\sigma^\pi_n(\tau^*_n \mid x^*_0; x_1, r_1, \tau_{n-1})]\big]\big] \quad (21)$$\nwith\n$$\tilde p(s_1, u_1, \omega_1, \alpha_1, r_1 \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0) = f_{\omega_0 - \omega_1}(s_1, \alpha_1, r_1 \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0)\, p_\omega(\omega_1 \mid \omega_0)\, p_u(u_1 \mid u_0, a^\mu_0)$$\nPlugging that in and solving the integral over $u_1$ yields\n$$= \mathbb{E}_{a^\mu_0 \sim \mu(\cdot \mid x_0)}\big[\mathbb{E}_{\omega_1 \sim p_\omega(\cdot \mid \omega_0)}\big[\mathbb{E}_{s_1, \alpha_1, r_1 \sim f_{\omega_0 - \omega_1}(\cdot \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0)}\big[\mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid s_1, (a^\mu_0, u_0[1:-1]), \omega_1, \alpha_1)}[\sigma^\pi_n(\tau^*_n \mid x^*_0; s_1, (a^\mu_0, u_0[1:-1]), \omega_1, \alpha_1, r_1, \tau_{n-1})]\big]\big]\big] \quad (22)$$\nRolling out $\sigma^\pi_n$ by one step and integrating out $s_1, \omega_1, \alpha_1, r_1$ yields\n$$= \mathbb{E}_{a^\mu_0 \sim \mu(\cdot \mid x_0)}\big[\mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid s^*_1, (a^\mu_0, u_0[1:-1]), \omega^*_1, \alpha^*_1)}\big[\mathbb{E}_{a^\pi_0 \sim \pi(\cdot \mid x^*_0)}\big[\delta(u^*_1 - (a^\pi_0, u^*_0[1:-1]))\, \sigma^\pi_{n-1}(\tau^*_{n-1} \mid s^*_1, u^*_1, \omega^*_1, \alpha^*_1; \tau_{n-1})\, f_{\omega_0 - \omega^*_1}(s^*_1, \alpha^*_1, r^*_1 \mid s_0, u_0, \omega_0, \alpha_0, a^\mu_0)\, p_\omega(\omega^*_1 \mid \omega_0)\big]\big]\big] \quad (23)$$\nReordering terms and substituting $s_0, \omega_0, \alpha_0 = s^*_0, \omega^*_0, \alpha^*_0$ yields\n$$= p_\omega(\omega^*_1 \mid \omega^*_0)\, \mathbb{E}_{a^\pi_0 \sim \pi(\cdot \mid x^*_0)}\big[\delta(u^*_1 - (a^\pi_0, u^*_0[1:-1]))\, \mathbb{E}_{a^\mu_0 \sim \mu(\cdot \mid x_0)}\big[f_{\omega^*_0 - \omega^*_1}(s^*_1, \alpha^*_1, r^*_1 \mid x_0, a^\mu_0)\, \mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid s^*_1, (a^\mu_0, u_0[1:-1]), \omega^*_1, \alpha^*_1)}[\sigma^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1; \tau_{n-1})]\big]\big] \quad (24)$$\nWe can substitute the $f$ term according to Lemma 5 since the condition between $x_0$ and $x^*_0$ is met. More precisely, the condition on $u_0$ and $u^*_0$ is met because $k \le \omega^*_0 - \Delta + \alpha^*_1 = \omega^*_1 + \alpha^*_1$. After the substitution we have\n$$= p_\omega(\omega^*_1 \mid \omega^*_0)\, \mathbb{E}_{a^\pi_0 \sim \pi(\cdot \mid x^*_0)}\big[\delta(u^*_1 - (a^\pi_0, u^*_0[1:-1]))\, f_{\omega^*_0 - \omega^*_1}(s^*_1, \alpha^*_1, r^*_1 \mid x^*_0, a^\pi_0)\, \mathbb{E}_{a^\mu_0 \sim \mu(\cdot \mid x_0)}\big[\mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid s^*_1, (a^\mu_0, u_0[1:-1]), \omega^*_1, \alpha^*_1)}[\sigma^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1; \tau_{n-1})]\big]\big] \quad (25)$$\nWe can substitute the induction hypothesis in the following form:\n$$\mathbb{E}_{\tau_{n-1} \sim p^\mu_{n-1}(\cdot \mid x_1)}[\sigma^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1; \tau_{n-1})] = p^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1)$$\non the condition that\n$$u_1[k : \mathrm{end}] = u^*_1[k : \mathrm{end}] \ \text{ with } \ k = \min_i(\omega^*_{i+2} + \alpha^*_{i+2} - i) \ \text{ for } \ i \in \{0, \ldots, n-2\},$$\nfor the trajectory $\tau^*_{n-1} = (s^*_2, u^*_2, \omega^*_2, \alpha^*_2, \ldots, s^*_n, u^*_n, \omega^*_n, \alpha^*_n)$. To check that this condition is met we observe that $u_1 = (a^\mu_0, u_0[1:-1])$ and substitute $u^*_1 = (a^\pi_0, u^*_0[1:-1])$ (made possible by Lemma 4), which means that\n$$u_0[k-1 : \mathrm{end}] = u^*_0[k-1 : \mathrm{end}] \ \text{ with } \ k = \min_i(\omega^*_{i+2} + \alpha^*_{i+2} - i) \ \text{ for } \ i \in \{0, \ldots, n-2\}.$$\nSubstituting the induction hypothesis yields\n$$= p_\omega(\omega^*_1 \mid \omega^*_0)\, \mathbb{E}_{a^\pi_0 \sim \pi(\cdot \mid x^*_0)}\big[\delta(u^*_1 - (a^\pi_0, u^*_0[1:-1]))\, f_{\omega^*_0 - \omega^*_1}(s^*_1, \alpha^*_1, r^*_1 \mid x^*_0, a^\pi_0)\, p^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1)\big] \quad (26)$$\nwhich is\n$$\mathbb{E}_{a^\pi_0 \sim \pi(\cdot \mid x^*_0)}\big[p^\pi_{n-1}(\tau^*_{n-1} \mid x^*_1)\, \tilde p(x^*_1, r^*_1 \mid x^*_0, a^\pi_0)\big] = p^\pi_n(\tau^*_n \mid x^*_0)$$" }, { "heading": "G PROOFS OF THE RESULTS FROM THE MAIN PAPER", "text": "A\nTheorem 1. The partial trajectory resampling operator $\sigma^\pi_n$ (Def. 3) transforms off-policy trajectories into on-policy trajectories,\n$$\mathbb{E}_{\tau_n \sim p^\mu_n(\cdot \mid x_0)}\big[\sigma^\pi_n(\tau^*_n \mid x_0; \tau_n)\big] = p^\pi_n(\tau^*_n \mid x_0), \quad (3)$$\non the condition that none of the delayed observations depend on any of the resampled actions, i.e.\n$$\omega^*_t + \alpha^*_t \ge t, \quad (4)$$\nwhere t indexes the trajectory $\tau^*_n = (s^*_1, u^*_1, \omega^*_1, \alpha^*_1, r^*_1, \ldots, s^*_n, u^*_n, \omega^*_n, \alpha^*_n, r^*_n)$ from 1 to n.\nProof. The theorem is a special case of Lemma 6 with $x_0 = x^*_0$. This allows us to simplify the condition in the lemma as we show next.\nSince $u_0 = u^*_0$ we can allow all $k \ge 1$, which is the minimum allowed index for $u$. Therefore we must ensure $1 \le \min_i(\omega^*_{i+1} + \alpha^*_{i+1} - i)$. Since the min must be at least 1, all arguments must be at least 1, which means this is equivalent to\n$$1 \le \omega^*_{i+1} + \alpha^*_{i+1} - i \ \text{ for } \ i \in \{0, \ldots, n-1\}.$$\nThis can be transformed into\n$$\omega^*_t + \alpha^*_t \ge t \ \text{ for } \ t \in \{1, \ldots, n\} \quad (27)$$\nLemma 1. 
The n-step value estimator has the following bias:\n$$\mathrm{bias}(\hat v_n(x_0, \cdot)) = \gamma^n\, \mathbb{E}_{\ldots, x^*_n, r^*_n \sim p^\pi_n(\cdot \mid x_0)}\big[\mathrm{bias}(\hat v_0(x^*_n))\big] \quad (6)$$" }, { "heading": "Proof.", "text": "$$\mathrm{bias}(\hat v_n(x_0, \cdot)) = \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[\hat v_n(x_0, \tau^*_n) - v^\pi(x_0)] \quad (28)$$\n$$= \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[r^*_1 + \gamma\, \hat v_{n-1}(x^*_1; \tau^*_{n-1})] - \mathbb{E}_{a_0 \sim \pi(\cdot \mid x_0)}\big[\mathbb{E}_{r^*_1, x^*_1 \sim \tilde p(\cdot \mid x_0, a_0)}[r^*_1 + \gamma\, v^\pi(x^*_1)]\big] \quad (29)$$\n$$= \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[r^*_1 + \gamma\, \hat v_{n-1}(x^*_1; \tau^*_{n-1}) - r^*_1 - \gamma\, v^\pi(x^*_1)] \quad (30)$$\n$$= \gamma\, \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[\hat v_{n-1}(x^*_1; \tau^*_{n-1}) - v^\pi(x^*_1)] \quad (31)$$\n$$= \ldots \quad (32)$$\n$$= \gamma^n\, \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[\hat v_0(x^*_n) - v^\pi(x^*_n)] \quad (33)$$\n$$= \gamma^n\, \mathbb{E}_{\ldots, x^*_n, r^*_n \sim p^\pi_n(\cdot \mid x_0)}[\mathrm{bias}(\hat v_0(x^*_n))] \quad (34)$$\nB\nLemma 2. In a $RDMDP(E, p_\omega, p_\alpha)$, the soft value function is\n$$v^{soft}(x^*_0) = \mathbb{E}_{a \sim \pi(\cdot \mid x^*_0)}\big[\mathbb{E}_{x^*_1, r^*_1 \sim \tilde p(\cdot \mid x^*_0, a)}[r^*_1 + \gamma\, v^{soft}(x^*_1)] - \log \pi(a \mid x^*_0)\big] \quad (7)$$\nProof. The soft value function for an environment $(X, A, \bar\mu, \bar p)$ is defined as\n$$v^{soft}(x^*_0) = \mathbb{E}_{a \sim \pi(\cdot \mid x^*_0)}[q^{soft}(x^*_0, a) - \log \pi(a \mid x^*_0)] \quad (35)$$\nwhere\n$$q^{soft}(x^*_0, a) = \mathbb{E}_{x^*_1, r^*_1 \sim \bar p(x^*_0, a)}[r^*_1 + \gamma\, v^{soft}(x^*_1)] \quad (36)$$\nIf $(X, A, \bar\mu, \bar p) = RDMDP(E, p_\omega, p_\alpha) = (X, A, \tilde\mu, \tilde p)$ with $E = (S, A, \mu, p)$, this is\n$$q^{soft}(x^*_0, a) = \mathbb{E}_{x^*_1, r^*_1 \sim \tilde p(\cdot \mid x^*_0, a)}[r^*_1 + \gamma\, v^{soft}(x^*_1)] \quad (37)$$\nand\n$$v^{soft}(x^*_0) = \mathbb{E}_{a \sim \pi(\cdot \mid x^*_0)}\big[\mathbb{E}_{x^*_1, r^*_1 \sim \tilde p(\cdot \mid x^*_0, a)}[r^*_1 + \gamma\, v^{soft}(x^*_1)] - \log \pi(a \mid x^*_0)\big] \quad (38)$$\nProposition 1. The DCAC actor loss is a less biased version of the SAC actor loss, with\n$$\mathrm{bias}(L^{DCAC}_\pi) = \mathbb{E}_n[\gamma^n]\; \mathrm{bias}(L^{SAC}_\pi), \quad (11)$$\nassuming both are using similarly biased parametric value estimators to compute the loss, i.e.\n$$\mathrm{bias}(\hat v^{soft}_0(x)) = \mathbb{E}_{a \sim \pi(\cdot \mid x)}[\mathrm{bias}(\hat q^{soft}_0(x, a))] \quad (12)$$\nProof. Note that for simplicity, we also assume that the states in the replay memory are distributed according to the steady-state distribution, i.e. $D \sim p^\pi_{ss}$. This assumption could be avoided by making more complicated assumptions about the biases of the state-value and action-value estimators.\nWe now start with the bias of the DCAC loss with respect to an unbiased SAC loss using the true action-value function,\n$$\mathrm{bias}(L^{DCAC}_\pi) = L^{DCAC}_\pi - L^{SAC\text{-}UB}_\pi \quad (39)$$\nwhere\n$$L^{DCAC}_\pi = -\mathbb{E}_{x_0, \tau_n \sim D}\; \mathbb{E}_{\tau^*_n \sim \sigma^\pi_n(\cdot \mid x_0; \tau_n)}[\hat v^{soft}_n(x_0; \tau^*_n)] \quad (40)$$\n$$= -\mathbb{E}_{x_0 \sim D}\, \mathbb{E}_n\, \mathbb{E}_{\tau^*_n \sim p^\pi_n(\cdot \mid x_0)}[\hat v^{soft}_n(x_0; \tau^*_n)] \quad | \text{ Theorem 1} \quad (41)$$\nand\n$$L^{SAC\text{-}UB}_\pi = \mathbb{E}_{x_0 \sim D}\big[\mathbb{E}_{a \sim \pi(\cdot \mid x_0)}[\log \pi(a \mid x_0) - q^{soft}(x_0, a)]\big] \quad (42)$$\n$$= \mathbb{E}_{x_0 \sim D}[v^{soft}(x_0)]. \quad (43)$$\nSubstituting these we have\n$$\mathrm{bias}(L^{DCAC}_\pi) = \mathbb{E}_{x_0 \sim D}\, \mathbb{E}_n[\hat v^{soft}_n(x_0; \tau^*_n) - v^{soft}(x_0)] \quad (44)$$\n$$= \mathbb{E}_{x_0 \sim D}\, \mathbb{E}_n[\mathrm{bias}(\hat v^{soft}_n(x_0; \cdot))] \quad (45)$$\n$$= \mathbb{E}_{x_0 \sim D}\, \mathbb{E}_n\big[\gamma^n\, \mathbb{E}_{\ldots, x_n, r_n \sim p^\pi_n(\cdot \mid x_0)}[\mathrm{bias}(\hat v^{soft}_0(x_n))]\big] \quad | \text{ Lemma 1} \quad (46)$$\n$$= \mathbb{E}_n[\gamma^n]\; \mathbb{E}_{x \sim D}[\mathrm{bias}(\hat v^{soft}_0(x))] \quad | \text{ using } D \sim p^\pi_{ss} \text{ and Lemma 3} \quad (47)$$\n$$= \mathbb{E}_n[\gamma^n]\; \mathbb{E}_{x \sim D}\big[\mathbb{E}_{a \sim \pi(\cdot \mid x)}[\mathrm{bias}(\hat q^{soft}_0(x, a))]\big] \quad | \text{ Equation 12} \quad (48)$$\n$$= \mathbb{E}_n[\gamma^n]\; \mathbb{E}_{x \sim D}\big[\mathbb{E}_{a \sim \pi(\cdot \mid x)}[\hat q^{soft}_0(x, a) - q^{soft}(x, a)]\big] \quad (49)$$\n$$= \mathbb{E}_n[\gamma^n]\; (L^{SAC}_\pi - L^{SAC\text{-}UB}_\pi) \quad (50)$$\n$$= \mathbb{E}_n[\gamma^n]\; \mathrm{bias}(L^{SAC}_\pi) \quad (51)$$" } ]
2021
null
SP:6fc9ae204ba7ca8db33d3ce39362ab05d36eec97
[ "This paper proposes a mechanism to generate adversarial examples by applying latent variables level manipulation, based on the styleGAN framework. Unlike previous works mostly focused on image level perturbations and geometry transformations, this work tends to control higher level latent sampling such as style, so as to generate a style-adversarial examples. Although a similar idea has been proposed by Song et al. (2018), this work is along the same direction and achieves better performance. The loss is proposed for general classification tasks such as object classification, object detection and semantic segmentation. The experimental results show not only qualitatively confusing human vision but also quantitatively improve the performance on testing clean images." ]
We propose a novel approach for generating unrestricted adversarial examples by manipulating fine-grained aspects of image generation. Unlike existing unrestricted attacks that typically hand-craft geometric transformations, we learn stylistic and stochastic modifications leveraging state-of-the-art generative models. This allows us to manipulate an image in a controlled, fine-grained manner without being bounded by a norm threshold. Our approach can be used for targeted and non-targeted unrestricted attacks on classification, semantic segmentation and object detection models. Our attacks can bypass certified defenses, yet our adversarial images look indistinguishable from natural images as verified by human evaluation. Moreover, we demonstrate that adversarial training with our examples improves performance of the model on clean images without requiring any modifications to the architecture. We perform experiments on LSUN, CelebA-HQ and COCO-Stuff as high resolution datasets to validate efficacy of our proposed approach.
[]
[ { "authors": [ "Rima Alaifari", "Giovanni S Alberti", "Tandri Gauksson" ], "title": "Adef: An iterative algorithm to construct adversarial deformations", "venue": "arXiv preprint arXiv:1804.07729,", "year": 2018 }, { "authors": [ "Michael A Alcorn", "Qi Li", "Zhitao Gong", "Chengfei Wang", "Long Mai", "Wei-Shinn Ku", "Anh Nguyen" ], "title": "Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects", "venue": "arXiv preprint arXiv:1811.11553,", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Tom B Brown", "Nicholas Carlini", "Chiyuan Zhang", "Catherine Olsson", "Paul Christiano", "Ian Goodfellow" ], "title": "Unrestricted adversarial examples", "venue": "arXiv preprint arXiv:1809.08352,", "year": 2018 }, { "authors": [ "Holger Caesar", "Jasper Uijlings", "Vittorio Ferrari" ], "title": "Coco-stuff: Thing and stuff classes in context", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "arXiv preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Roger Fletcher" ], "title": "Practical methods of optimization", "venue": null, "year": 2013 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Sven Gowal", "Chongli Qin", "Po-Sen Huang", "Taylan Cemgil", "Krishnamurthy Dvijotham", "Timothy Mann", "Pushmeet Kohli" ], "title": "Achieving robustness in the wild via adversarial mixing with disentangled representations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", 
"venue": "arXiv preprint arXiv:1711.00117,", "year": 2017 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing", "venue": "arXiv preprint arXiv:1803.06373,", "year": 2018 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "arXiv preprint arXiv:1812.04948,", "year": 2018 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Xiaolin Hu", "Jun Zhu" ], "title": "Defense against adversarial attacks using high-level representation guided denoiser", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Disentangling adversarial robustness and generalization", "venue": "In Proceedings of the IEEE Conference on 
Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eric Wong", "J Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "arXiv preprint arXiv:1711.00851,", "year": 2017 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "arXiv preprint arXiv:1711.01991,", "year": 2017 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Jiang Wang", "Alan L Yuille", "Quoc V Le" ], "title": "Adversarial examples improve image recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial examples, inputs resembling real samples but maliciously crafted to mislead machine learning models, have been studied extensively in the last few years. Most of the existing papers, however, focus on normconstrained attacks and defenses, in which the adversarial input lies in an -neighborhood of a real sample using the Lp distance metric (commonly with p = 0, 2,∞). For small , the adversarial input is quasi-indistinguishable from the natural sample. For an adversarial image to fool the human visual system, it is sufficient to be normconstrained; but this condition is not necessary. Moreover, defenses tailored for norm-constrained attacks can fail on other subtle input modifications. This has led to a recent surge of interest on unrestricted adversarial attacks in which the adversary is not bounded by a norm threshold. These methods typically hand-craft transformations to capture visual similarity. Spatial transformations [Engstrom et al. (2017); Xiao et al. (2018); Alaifari et al. (2018)], viewpoint or pose changes [Alcorn et al. (2018)], inserting small patches [Brown et al. (2017)], among other methods, have been proposed for unrestricted adversarial attacks.\nIn this paper, we focus on fine-grained manipulation of images for unrestricted adversarial attacks. We build upon state-of-the-art generative models which disentangle factors of variation in images. We create fine and coarsegrained adversarial changes by manipulating various latent variables at different resolutions. Loss of the target network is used to guide the generation process. The pre-trained generative model constrains the search space for our adversarial examples to realistic images, thereby revealing the target model’s vulnerability in the natural image space. We verify that we do not deviate from the space of realistic images with a user study as well as a t-SNE plot comparing distributions of real and adversarial images (see Fig. 7 in the appendix). As a result, we observe that including these examples in training the model enhances its accuracy on clean images.\nOur contributions can be summarized as follows:\n• We present the first method for fine-grained generation of high-resolution unrestricted adversarial examples in which the attacker controls which aspects of the image to manipulate, resulting in a diverse set of realistic, on-the-manifold adversarial examples.\n• We demonstrate that adversarial training with our examples improves performance of the model on clean images. This is in contrast to training with norm-bounded perturbations which degrades the model’s accuracy. Unlike recent approaches such as Xie et al. (2020) which use a separate auxiliary batch norm for adversarial examples, our method does not require any modifications to the architecture.\n• We propose the first method for generating unrestricted adversarial examples for semantic segmentation and object detection. Training with our examples improves segmentation results on clean images.\n• We demonstrate that our proposed attack can break certified defenses on norm-bounded perturbations." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 NORM-CONSTRAINED ADVERSARIAL EXAMPLES", "text": "Most of the existing works on adversarial attacks and defenses focus on norm-constrained adversarial examples: for a given classifier F : Rn → {1, . . . ,K} and an image x ∈ Rn, the adversarial image x′ ∈ Rn is created such that ‖x− x′‖p < and F (x) 6= F (x′). 
Common values for p are 0, 2,∞, and is chosen small enough so that the perturbation is imperceptible. Various algorithms have been proposed for creating x′ from x. Optimization-based methods solve a surrogate optimization problem based on the classifier’s loss and the perturbation norm. In their pioneering paper on adversarial examples, Szegedy et al. (2013) use box-constrained L-BFGS [Fletcher (2013)] to minimize the surrogate loss function. Carlini & Wagner (2017) propose stronger optimization-based attacks for L0, L2 and L∞ norms using better objective functions and the Adam optimizer. Gradient-based methods use gradient of the classifier’s loss with respect to the input image. Fast Gradient Sign Method (FGSM) [Goodfellow et al. (2014)] uses a first-order approximation of the function for faster generation and is optimized for the L∞ norm. Projected Gradient Descent (PGD) [Madry et al. (2017)] is an iterative variant of FGSM which provides a strong first-order attack by using multiple steps of gradient ascent and projecting perturbed images to an -ball centered at the input. Other variants of FGSM are proposed by Dong et al. (2018) and Kurakin et al. (2016).\nSeveral methods have been proposed for defending against adversarial attacks. These approaches can be broadly categorized to empirical defenses which are empirically robust to adversarial examples, and certified defenses which are provably robust to a certain class of attacks. One of the most successful empirical defenses is adversarial training [Goodfellow et al. (2014); Kurakin et al. (2016); Madry et al. (2017)] which augments training data with adversarial examples generated as the training progresses. Many empirical defenses attempt to combat adversaries using a form of input pre-processing or by manipulating intermediate features or gradients [Guo et al. (2017); Xie et al. (2017); Samangouei et al. (2018)]. Few approaches have been able to scale up to high-resolution datasets such as ImageNet [Liao et al. (2018); Xie et al. (2018); Kannan et al. (2018)]. Athalye et al. (2018) show that many of these defenses fail due to obfuscated gradients, which occurs when the defense method is designed to mask information about the model’s gradients. Vulnerabilities of empirical defenses have led to increased interest in certified defenses, which provide a guarantee that the classifier’s prediction is constant within a neighborhood of the input. Several certified defenses have been proposed [Wong & Kolter (2017); Raghunathan et al. (2018); Tsuzuku et al. (2018)] which typically do not scale to ImageNet. Cohen et al. (2019) use randomized smoothing with Gaussian noise to obtain provably L2-robust classifiers on ImageNet. Lecuyer et al. (2019) propose an alternative certified defense at ImageNet scale leveraging a connection between robustness against adversarial examples and differential privacy theory." }, { "heading": "2.2 UNRESTRICTED ADVERSARIAL EXAMPLES", "text": "For an image to be adversarial, it needs to be visually indistinguishable from real images. One way to achieve this is by applying subtle geometric transformations to the input image. Spatially transformed adversarial examples are introduced by Xiao et al. (2018) in which a flow field is learned to displace pixels of the image. Similarly, Alaifari et al. (2018) iteratively apply small deformations to the input in order to obtain the adversarial image. Engstrom et al. (2017) show that simple translations and rotations are enough for fooling deep neural networks. Alcorn et al. 
(2018) manipulate pose of an object to fool deep neural networks. They estimate parameters of a 3D renderer that cause the target model to misbehave in response to the rendered image. Another approach for evading the norm constraint is to insert new objects in the image. Adversarial Patch [Brown et al. (2017)] creates an adversarial image by completely replacing part of an image with a synthetic patch, which is image-agnostic and robust to transformations. Existence of on-the-manifold adversarial examples is also shown by Gilmer et al. (2018), that consider the task of classifying between two concentric n-dimensional spheres. Stutz et al. (2019) demonstrate that both robust and accurate models are possible by using on-the-manifold adversarial examples. A challenge for creating unrestricted adversarial examples and defending against them is introduced by Brown et al. (2018) using the simple task of classifying between birds and bicycles. The recent work by Gowal et al. (2020) show that adversarial training with examples generated by StyleGAN can improve performance of the model on clean images. They consider the classification task on low-resolution datasets such as ColorMNIST and CelebA, and only use fine changes in their adversarial training. Our approach is effective on high-resolution datasets such as CelebA-HQ and LSUN, uses a range of low-level to high-level changes for adversarial training and encompasses several tasks including classification, segmentation and detection. In addition, we demonstrate that our adversarial examples can break certified defenses on norm-constrained perturbations and are realistic as verified by human evaluation. Song\net al. (2018) search in the latent (z) space of AC-GAN [Odena et al. (2017)] to find generated images that can fool a target classifier but yield correct predictions on AC-GAN’s auxiliary classifier. They constrain the search region of z so that it is close to a randomly sampled noise vector, and show results on MNIST, SVHN and CelebA datasets. Requiring two classifiers to have inconsistent predictions degrades sample quality of the model. As we show in the appendix, training with these adversarial examples hurts the model’s performance on clean images. Moreover, this approach has no control over the generation process since small changes in the z space can lead to large changes in generated images and even create unrealistic samples. On the other hand, our method manipulates high-resolution real or synthesized images in a fine-grained manner owing to the interpretable disentangled latent space. It also generates samples which improve the model’s accuracy on clean images both in classification and segmentation tasks. To further illustrate difference of our approach with Song et al. (2018), we plot t-SNE embeddings of real images from CelebA-HQ as well as adversarial examples from our method and Song et al.’s approach in the appendix and show that our adversarial images stay closer to the manifold of real images." }, { "heading": "3 APPROACH", "text": "Most of the existing works on unrestricted adversarial attacks rely on geometric transformations and deformations which are oblivious to latent factors of variation. In this paper, we leverage disentangled latent representations of images for unrestricted adversarial attacks. We build upon state-of-the-art generative models and consider various target tasks: classification, semantic segmentation and object detection." }, { "heading": "3.1 CLASSIFICATION", "text": "Style-GAN [Karras et al. 
(2018)] is a state-of-the-art generative model which disentangles high-level attributes and stochastic variations in an unsupervised manner. Stylistic variations are represented by style variables and stochastic details are captured by noise variables. Changing the noise only affects low-level details, leaving the overall composition and high-level aspects intact. This allows us to manipulate the noise variables such that variations are barely noticeable by the human eye. The style variables affect higher-level aspects of image generation. For instance, when the model is trained on bedrooms, style variables from the top layers control viewpoint of the camera, middle layers select the particular furniture, and bottom layers deal with colors and details of materials. This allows us to manipulate images in a controlled manner, providing an avenue for fine-grained unrestricted attacks.\nFormally, we can represent Style-GAN with a mapping function f and a synthesis network g. The mapping function is an 8-layer MLP which takes a latent code z, and produces an intermediate latent vector w = f(z). This vector is then specialized by learned affine transformations A to style variables y, which control adaptive instance normalization operations after each convolutional layer of the synthesis network g. Noise inputs are single-channel images consisting of un-correlated Gaussian noise that are fed to each layer of the synthesis network. Learned per-feature scaling factors B are used to generate noise variables η which are added to the output of convolutional layers. The synthesis network takes style y and noise η as input, and generates an image x = g(y, η). We pass the generated image to a pre-trained classifier F. We seek to slightly modify x so that F can no longer classify it correctly. We achieve this through perturbing the style and noise tensors. We initialize adversarial style and noise variables as y_adv^(0) = y and η_adv^(0) = η, and iteratively update them in order to fool the classifier. Loss of the classifier determines the update rule, which in turn depends on the type of attack. As common in the literature, we consider two types of attacks: non-targeted and targeted.\nIn order to generate non-targeted adversarial examples, we need to change the model’s original prediction. Starting from initial values y_adv^(0) = y and η_adv^(0) = η, we can iteratively perform gradient ascent in the style and noise spaces of the generator to find values that maximize the classifier’s loss. Alternatively, as proposed by Kurakin et al. (2016), we can use the least-likely predicted class ll_x = argmin(F(x)) as our target. We found this approach more effective in practice. At time step t, the update rule for the style and noise variables is:\ny_adv^(t+1) = y_adv^(t) − ε · sign(∇_{y_adv^(t)} J(F(g(y_adv^(t), η_adv^(t))), ll_x)) (1)\nη_adv^(t+1) = η_adv^(t) − δ · sign(∇_{η_adv^(t)} J(F(g(y_adv^(t), η_adv^(t))), ll_x)) (2)\nin which J(·, ·) is the classifier’s loss function, F(·) gives the probability distribution over classes, x = g(y, η), and ε, δ ∈ R are step sizes. We use (ε, δ) = (0.004, 0.2) and (0.004, 0.1) for LSUN and CelebA-HQ respectively. We perform multiple steps of gradient descent (usually 2 to 10) until the classifier is fooled.\nGenerating targeted adversarial examples is more challenging as we need to change the prediction to a specific class T.
In this case, we perform gradient descent to minimize the classifier’s loss with respect to the target:\ny_adv^(t+1) = y_adv^(t) − ε · sign(∇_{y_adv^(t)} J(F(g(y_adv^(t), η_adv^(t))), T)) (3)\nη_adv^(t+1) = η_adv^(t) − δ · sign(∇_{η_adv^(t)} J(F(g(y_adv^(t), η_adv^(t))), T)) (4)\nWe use (ε, δ) = (0.005, 0.2) and (0.004, 0.1) in the experiments on LSUN and CelebA-HQ respectively. In practice, 3 to 15 updates suffice to fool the classifier. Note that we only control deviation from the initial latent variables, and do not impose any norm constraint on generated images." }, { "heading": "3.1.1 INPUT-CONDITIONED GENERATION", "text": "Generation can also be conditioned on real input images by embedding them into the latent space of Style-GAN. We first synthesize images similar to the given input image I by optimizing values of y and η such that g(y, η) is close to I. More specifically, we minimize the perceptual distance [Johnson et al. (2016)] between g(y, η) and I. We can then proceed similarly to equations 1–4 to perturb these tensors and generate the adversarial image. Realism of synthesized images depends on inference properties of the generative model. In practice, generated images resemble input images with high fidelity, especially for CelebA-HQ images." }, { "heading": "3.2 SEMANTIC SEGMENTATION AND OBJECT DETECTION", "text": "We also consider the task of semantic segmentation and leverage the generative model proposed by Park et al. (2019). The model is conditioned on input semantic layouts and uses SPatially-Adaptive (DE)normalization (SPADE) modules to better preserve semantic information against common normalization layers. The layout is first projected onto an embedding space and then convolved to produce the modulation parameters γ and β. We adversarially modify these parameters with the goal of fooling a segmentation model. We consider non-targeted attacks using per-pixel predictions and compute gradient of the loss function with respect to the modulation parameters with an update rule similar to equations 1 and 2. Figure 2 illustrates the architecture. Note that manipulating variables at smaller resolutions leads to coarser changes. We consider a similar architecture for the object detection task except that we pass the generated image to the detection model and try to increase its loss. Results for this task are shown in the appendix." }, { "heading": "4 RESULTS AND DISCUSSION", "text": "We provide qualitative and quantitative results using experiments on LSUN [Yu et al. (2015)] and CelebA-HQ [Karras et al. (2017)]. LSUN contains 10 scene categories and 20 object categories. We use all the scene classes as well as two object classes: cars and cats. We consider this dataset since it is used in Style-GAN, and is well suited for a classification task. For the scene categories, a 10-way classifier is trained based on Inception-v3 [Szegedy et al. (2016)] which achieves an accuracy of 87.7% on LSUN’s test set. The two object classes also appear in ImageNet [Deng et al. (2009)], a richer dataset containing 1000 categories. Therefore, for experiments on cars and cats we use an Inception-v3 model trained on ImageNet. This allows us to explore a broader set of categories in our attacks, and is particularly helpful for targeted adversarial examples. CelebA-HQ consists of 30,000 face images at 1024 × 1024 resolution. We consider the gender classification task, and use the classifier provided by Karras et al. (2018).
This is a binary task for which targeted and non-targeted attacks are similar.\nIn order to synthesize a variety of adversarial examples, we use different random seeds in Style-GAN to obtain various values for z,w,y and η. Style-based adversarial examples are generated by initializing yadv with the value of y, and iteratively updating it as in equation 1 (or 3) until the resulting image g(yadv,η) fools the classifier F . Noise-based adversarial examples are created similarly using ηadv and the update rule in equation 2 (or 4). While using different step sizes makes a fair comparison difficult, we generally found it easier to fool the model by manipulating the noise variables. We can also combine the effect of style and noise by simultaneously updating yadv and ηadv in each iteration, and feeding g(yadv,ηadv) to the classifier. In this case, the effect of style usually dominates since it creates coarser changes.\nFigure 3 illustrates generated adversarial examples on LSUN. Original image g(y,η), noise-based image g(y,ηadv) and style-based image g(yadv,η) are shown. Adversarial images look almost indistinguishable from natural images. Manipulating the noise variable results in subtle, imperceptible changes. Varying the style leads to coarser changes such as different colorization, pose changes, and even removing or inserting objects in the scene. We can also control granularity of changes by selecting specific layers of the model. Manipulating top layers, corresponding to coarse spatial resolutions, results in high-level changes. Lower layers, on the other hand, modify finer details. In the first two columns of Figure 3, we only modify top 6 layers (out of 18) to generate adversarial images. The middle two columns change layers 7 to 12, and the last column uses the bottom 6 layers.\nFigure 4 depicts adversarial examples on CelebA-HQ gender classification. Males are classified as females and vice versa. As we observe, various facial features are altered by the model yet the identity is preserved. Similar to LSUN images, noise-based changes are more subtle than style-based ones, and we observe a spectrum of highlevel, mid-level and low-level changes. Figure 5 illustrates adversarial examples conditioned on real input images using the procedure described in Section 3.1.1. Synthesized images resemble inputs with high fidelity, and set the initial values in our optimization process. In some cases, we can notice how the model is altering masculine or feminine features. For instance, women’s faces become more masculine in columns 2 and 4, and men’s beard is removed in column 3 of Figure 4 and column 1 of Figure 5.\nWe also show results on semantic segmentation in Figure 6 in which we consider non-targeted attacks on DeepLabv2 [Chen et al. (2017)] with a generator trained on the COCO-stuff dataset [Caesar et al. (2018)]. We iteratively modify modulation parameters at all layers, using a step size of 0.001, to maximize the segmentation loss with respect to the given label map. As we observe, subtle modifications to images lead to large drops in accuracy.\nUnlike perturbation-based attacks, Lp distances between original and adversarial images are large, yet they are visually similar. Moreover, we do not observe high-frequency perturbations in the generated images. The model\nlearns to modify the initial input without leaving the manifold of realistic images. Additional examples and higherresolution images are provided in the appendix." 
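To make the attack procedure concrete, the sketch below runs the non-targeted latent-space updates of equations 1 and 2. It is a minimal PyTorch illustration under assumed interfaces: `generator` stands for Style-GAN's synthesis network taking style and noise tensors, `classifier` is assumed to return logits, and all function and variable names are ours rather than the paper's released code.

```python
import torch
import torch.nn.functional as F

def nontargeted_attack(generator, classifier, y, eta, eps=0.004, delta=0.2, max_steps=10):
    with torch.no_grad():
        out = classifier(generator(y, eta))
        orig = out.argmax(dim=1)   # original prediction on the unperturbed image
        ll = out.argmin(dim=1)     # least-likely class, used as the descent target
    y_adv = y.clone().requires_grad_(True)      # style variables y_adv^(0) = y
    eta_adv = eta.clone().requires_grad_(True)  # noise variables eta_adv^(0) = eta
    for _ in range(max_steps):
        logits = classifier(generator(y_adv, eta_adv))
        if (logits.argmax(dim=1) != orig).all():  # classifier fooled: stop early
            break
        loss = F.cross_entropy(logits, ll)
        g_y, g_eta = torch.autograd.grad(loss, [y_adv, eta_adv])
        # Equations (1)-(2): signed-gradient descent toward the least-likely class
        y_adv = (y_adv - eps * g_y.sign()).detach().requires_grad_(True)
        eta_adv = (eta_adv - delta * g_eta.sign()).detach().requires_grad_(True)
    return y_adv.detach(), eta_adv.detach()
```

The targeted variant of equations 3 and 4 follows by replacing `ll` with the desired class and stopping once the prediction equals that class.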
}, { "heading": "4.1 ADVERSARIAL TRAINING", "text": "Adversarial training increases robustness of models by injecting adversarial examples into training data. Adversarial training with norm-bounded examples degrades performance of the classifier on clean images as they have different underlying distributions. We show that adversarial training with our unrestricted examples improves the model’s accuracy on clean images. To ensure that the model maximally benefits from these additional samples, we need to avoid unrealistic examples which do not resemble natural images. Therefore, we only include samples that can fool the model in less than a specific number of iterations. We use a threshold of 10 as the maximum number of iterations, and demonstrate results on classification and semantic segmentation. We use the first 10 generated examples for each starting image in the segmentation task. Table 1 shows accuracy of the strengthened and orig-\ninal classifiers on clean and adversarial test images. For the segmentation task we report the average accuracy of adversarial images at iteration 10. Similar to norm-constrained perturbations, adversarial training is an effective defense against our unrestricted attacks. Note that accuracy of the model on clean test images is improved after adversarial training. This is in contrast to training with norm-bounded adversarial inputs which hurts the classifier’s performance on clean images, and it is due to the fact that unlike perturbation-based inputs, our generated images live on the manifold of realistic images as constrained by the generative model." }, { "heading": "4.2 USER STUDY", "text": "Norm-constrained attacks provide visual realism by Lp proximity to a real input. To verify that our unrestricted adversarial examples are realistic and correctly classified by an oracle, we perform human evaluation using Amazon Mechanical Turk. In the first experiment, each adversarial image is assigned to three workers, and their majority vote is considered as the label. The user interface for each worker contains nine images, and shows possible labels to choose from. We use 2400 noise-based and 2400 style-based adversarial images from the LSUN dataset, containing 200 samples from each class (10 scene classes and 2 object classes). The results indicate that 99.2% of workers’ majority votes match the ground-truth labels. This number is 98.7% for style-based adversarial examples and 99.7% for noise-based ones. As we observe in Figure 3, noise-based examples do not deviate much from the original image, resulting in easier prediction by a human observer. On the other hand, style-based images show coarser changes, which in a few cases result in unrecognizable images or false predictions by the workers.\nWe use a similar setup in the second experiment but for classifying real versus fake (generated). We also include 2400 real images as well as 2400 unperturbed images generated by Style-GAN. 74.7% of unperturbed images are labeled by workers as real. This number is 74.3% for noise-based adversarial examples and 70.8% for style-based ones, indicating less than 4% drop compared with unperturbed images generated by Style-GAN." }, { "heading": "4.3 EVALUATION ON CERTIFIED DEFENSES", "text": "Cohen et al. (2019) propose the first certified defense on norm-bounded perturbations at the scale of ImageNet. Using randomized smoothing with Gaussian noise, their defense guarantees a certain top-1 accuracy for perturbations with L2 norm less than a specific threshold. 
We demonstrate that our unrestricted attacks can break this certified defense on ImageNet. We use 400 noise-based and 400 style-based adversarial images from the object categories of LSUN, and group all relevant ImageNet classes as the ground-truth. Our adversarial examples are evaluated against a randomized smoothing classifier based on ResNet-50 using Gaussian noise with a standard deviation of 0.5. Table 2 shows accuracy of the model on clean and adversarial images. As we observe, the accuracy drops on adversarial inputs, and the certified defense is not effective against our attack. Note that we stop updating adversarial images as soon as the model is fooled. If we keep updating for more iterations afterwards, we can achieve even stronger attacks." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "The area of unrestricted adversarial examples is relatively under-explored. Not being bounded by a norm threshold provides its own pros and cons. It allows us to create a diverse set of attack mechanisms; however, fair comparison of the relative strength of these attacks is challenging. It is also unclear how to even define provable defenses. While several papers have attempted to interpret norm-constrained attacks in terms of decision boundaries, there has been less effort in understanding the underlying reasons for models’ vulnerabilities to unrestricted attacks. We believe these can be promising directions for future research. We also plan to further explore transferability of our approach for black-box attacks in the future." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 COMPARISON WITH SONG ET AL. (2018)", "text": "We show that adversarial training with examples generated by Song et al. (2018) hurts the classifier’s performance on clean images. Table 3 demonstrates the results. We use the same classifier architectures as Song et al. (2018) and consider their basic attack. We observe that the test accuracy on clean images drops by 1.3%, 1.4% and 1.1% on MNIST, SVHN and CelebA respectively. As we show in Table 1, training with our examples improves the accuracy, demonstrating the difference between our approach and that of Song et al. (2018).\nTo further illustrate and compare distributions of real and adversarial images, we use a pre-trained VGG network to extract features of each image from CelebA-HQ, our adversarial examples, and those of Song et al. (2018), and then plot them with t-SNE embeddings as shown in Figure 7. We can see that the embeddings of CelebA-HQ real and our adversarial images are blended, while those of CelebA-HQ and Song et al.’s adversarial examples are more segregated. This again provides evidence that our adversarial images stay closer to the original manifold and hence could be more useful as adversarial training data." }, { "heading": "A.2 ADVERSARIAL TRAINING WITH NORM-BOUNDED PERTURBATIONS", "text": "We consider adversarial training with norm-bounded perturbations and limit the number of iterations to make the setup comparable with our unrestricted adversarial training. Specifically, we use Iterative-FGSM with ε = 4 and a bounded number of steps. Results are shown in Table 4. Note that accuracy of the models drops on clean images although we use a weak attack. This is in contrast to training with our unrestricted adversarial examples that improves the accuracy."
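For reference, the Iterative-FGSM baseline above can be sketched as follows. This is a minimal illustration assuming raw pixel inputs in [0, 255] with an L∞ budget ε = 4; the exact preprocessing and step schedule of the original experiments are not specified here, so these values are assumptions.

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(classifier, x, label, eps=4.0, step=1.0, n_steps=10):
    x_adv = x.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()  # ascend the classifier's loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 255.0).detach()     # keep a valid pixel range
    return x_adv
```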
}, { "heading": "A.3 NUMBER OF ITERATIONS", "text": "To make sure the iterative process always converges in a reasonable number of steps, we measure the number of updates required to fool the classifier on 1000 randomly-selected images. Results are shown in Table 5. Note that for targeted attacks we first randomly sample a target class different from the ground-truth label for each image." }, { "heading": "A.4 OBJECT DETECTION RESULTS", "text": "Figure 8 illustrates results on the object detection task using the RetinaNet target model [Lin et al. (2017)]. We observe that small changes in the images lead to incorrect bounding boxes and predictions by the model.\nA.5 IMPACT OF γ AND β ON SEMANTIC SEGMENTATION RESULTS\nIn segmentation results shown in Figure 11 we simultaneously modify both γ and β parameters of the SPADE module (Figure 2). We can also consider the impact of modifying each parameter separately. Figure 9 illustrates the results. As we observe, changing γ and β modifies fine details of the images which are barely perceptible yet they lead to large changes in predictions of the segmentation model." }, { "heading": "A.6 ADVERSARIAL CHANGES TO SINGLE IMAGES", "text": "Figure 10 illustrates how images vary as we manipulate specific layers of the network. We observe that each set of layers creates different adversarial changes. For instance, layers 12 to 18 mainly change low-level color details." }, { "heading": "A.7 ADDITIONAL EXAMPLES", "text": "We also provide additional examples and higher-resolution images in the following. Figure 11 depicts additional examples on the segmentation task. Figure 12 illustrates adversarial examples on CelebA-HQ gender classification, and Figure 13 shows additional examples on the LSUN dataset. Higher-resolution versions for some of the adversarial images are shown in Figure 14, which particularly helps to distinguish subtle differences between original and noise-based images." } ]
2020
null
SP:597472fc14f399625474d13df3453d6377a6c465
[ "In this paper, the author studied the problem of node embedding in signed networks. The authors proposed SGDNet which combines the idea of diffusion/random work in signed networks and Residual connection in GCN. The network is trained directly with classification loss on edge sign prediction. The authors carried out extensive experiments on several real-world networks with comparison to several state-of-the-art methods. The proposed method showed superior performance in the sign prediction task." ]
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention to model trust relationships. Learning node representations is crucial to effectively analyze graph data, and various techniques such as network embedding and graph convolutional network (GCN) have been proposed for learning signed graphs. However, traditional network embedding methods are not end-to-end for a specific task such as link sign prediction, and GCN-based methods suffer from a performance degradation problem when their depth increases. In this paper, we propose SIGNED GRAPH DIFFUSION NETWORK (SGDNET), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a random walk technique specially designed for signed graphs so that SGDNET effectively diffuses hidden node features. Through extensive experiments, we demonstrate that SGDNET outperforms state-of-the-art models in terms of link sign prediction accuracy.
[]
[ { "authors": [ "Lingyang Chu", "Zhefeng Wang", "Jian Pei", "Jiannan Wang", "Zijin Zhao", "Enhong Chen" ], "title": "Finding gangs in war from signed networks", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data", "year": 2016 }, { "authors": [ "Tyler Derr", "Charu C. Aggarwal", "Jiliang Tang" ], "title": "Signed network modeling based on structural balance theory", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018,", "year": 2018 }, { "authors": [ "Tyler Derr", "Yao Ma", "Jiliang Tang" ], "title": "Signed graph convolutional networks", "venue": "In IEEE International Conference on Data Mining, ICDM 2018,", "year": 2018 }, { "authors": [ "Ramanathan V. Guha", "Ravi Kumar", "Prabhakar Raghavan", "Andrew Tomkins" ], "title": "Propagation of trust and distrust", "venue": "In Proceedings of the 13th international conference on World Wide Web,", "year": 2004 }, { "authors": [ "Nathan Halko", "Per-Gunnar Martinsson", "Joel A. Tropp" ], "title": "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", "venue": "SIAM Rev.,", "year": 2011 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Paul W Holland", "Samuel Leinhardt" ], "title": "Transitivity in structural models of small groups", "venue": "Comparative group studies,", "year": 1971 }, { "authors": [ "Glen Jeh", "Jennifer Widom" ], "title": "Scaling personalized web search", "venue": "In Proceedings of the Twelfth International World Wide Web Conference,", "year": 2003 }, { "authors": [ "Jinhong Jung", "Woojeong Jin", "Lee Sael", "U Kang" ], "title": "Personalized ranking in signed networks using signed random walk with restart", "venue": "In IEEE 16th International Conference on Data Mining,", "year": 2016 }, { "authors": [ "Jinhong Jung", "Ha-Myung Park", "U Kang" ], "title": "Balansing: Fast and scalable generation of realistic signed networks", "venue": "In Proceedings of the 23rd International Conference on Extending Database Technology,", "year": 2020 }, { "authors": [ "Junghwan Kim", "Haekyu Park", "Ji-Eun Lee", "U Kang" ], "title": "SIDE: representation learning in signed directed networks", "venue": "In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Srijan Kumar", "Francesca Spezzano", "V.S. Subrahmanian" ], "title": "Accurately detecting trolls in slashdot zoo via decluttering", "venue": "In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining,", "year": 2014 }, { "authors": [ "Srijan Kumar", "Francesca Spezzano", "V.S. Subrahmanian", "Christos Faloutsos" ], "title": "Edge weight prediction in weighted signed networks", "venue": "In IEEE 16th International Conference on Data Mining,", "year": 2016 }, { "authors": [ "Jérôme Kunegis", "Andreas Lommatzsch", "Christian Bauckhage" ], "title": "The slashdot zoo: mining a social network with negative edges", "venue": "In Proceedings of the 18th International Conference on World Wide Web,", "year": 2009 }, { "authors": [ "Jure Leskovec", "Daniel P. Huttenlocher", "Jon M. Kleinberg" ], "title": "Predicting positive and negative links in online social networks", "venue": "In Proceedings of the 19th International Conference on World Wide Web,", "year": 2010 }, { "authors": [ "Jure Leskovec", "Daniel P. Huttenlocher", "Jon M. Kleinberg" ], "title": "Signed networks in social media", "venue": "In Proceedings of the 28th International Conference on Human Factors in Computing Systems,", "year": 2010 }, { "authors": [ "Guohao Li", "Matthias Müller", "Ali K. Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Can gcns go as deep as cnns", "venue": "IEEE/CVF International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning. 
In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence", "venue": null, "year": 2018 }, { "authors": [ "Xiaoming Li", "Hui Fang", "Jie Zhang" ], "title": "Supervised user ranking in signed social networks", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yu Li", "Yuan Tian", "Jiawei Zhang", "Yi Chang" ], "title": "Learning signed network embedding via graph attention", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Everett M Rogers" ], "title": "Diffusion of innovations", "venue": "Simon and Schuster,", "year": 2010 }, { "authors": [ "Lloyd N Trefethen", "David Bau III" ], "title": "Numerical linear algebra, volume 50", "venue": null, "year": 1997 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Pinghua Xu", "Wenbin Hu", "Jia Wu", "Bo Du" ], "title": "Link prediction with signed latent factors in signed social networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019,", "year": 2019 }, { "authors": [ "Bo Yang", "William K. Cheung", "Jiming Liu" ], "title": "Community mining from signed social networks", "venue": "IEEE Trans. Knowl. Data Eng.,", "year": 2007 }, { "authors": [ "Liang Yao", "Chengsheng Mao", "Yuan Luo" ], "title": "Graph convolutional networks for text classification", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs model trust relationships between people with positive (trust) and negative (distrust) edges. Many online social services such as Epinions (Guha et al., 2004) and Slashdot (Kunegis et al., 2009) that allow users to express their opinions are naturally represented as signed social graphs. Such graphs have attracted considerable attention for diverse applications including link sign prediction (Leskovec et al., 2010a; Kumar et al., 2016), node ranking (Jung et al., 2016; Li et al., 2019b), community analysis (Yang et al., 2007; Chu et al., 2016), graph generation (Derr et al., 2018a; Jung et al., 2020), and anomaly detection (Kumar et al., 2014). Node representation learning is a fundamental building block for analyzing graph data, and many researchers have put tremendous efforts into developing effective models for unsigned graphs. Graph convolutional networks (GCN) and their variants (Kipf & Welling, 2017; Velickovic et al., 2018) have spurred great attention in machine learning community, and recent works (Klicpera et al., 2019; Li et al., 2019a) have demonstrated stunning progress by handling the performance degradation caused by over-smoothing (Li et al., 2018; Oono & Suzuki, 2020) (i.e., node representations become indistinguishable as the number of propagation increases) or the vanishing gradient problem (Li et al., 2019a) in the first generation of GCN models. However, all of these models have a limited performance on node representation learning in signed graphs since they only consider unsigned edges under the homophily assumption (Kipf & Welling, 2017).\nMany studies have been recently conducted to consider such signed edges, and they are categorized into network embedding and GCN-based models. Network embedding (Kim et al., 2018; Xu et al., 2019b) learns the representations of nodes by optimizing an unsupervised loss that primarily aims to locate two nodes’ embeddings closely (or far) if they are positively (or negatively) connected. However, they are not trained jointly with a specific task in an end-to-end manner, i.e., latent features and the task are trained separately. Thus, their performance is limited unless each of them is tuned delicately. GCN-based models (Derr et al., 2018b; Li et al., 2020) have extended the graph convolutions to signed graphs using balance theory (Holland & Leinhardt, 1971) in order to properly propagate node features on signed edges. However, these models are directly extended from existing GCNs without consideration of the over-smoothing problem that degrades their performance. This problem hinders them from exploiting more information from multi-hop neighbors for learning node features in signed graphs.\nWe propose SGDNET (SIGNED GRAPH DIFFUSION NETWORK), a novel graph neural network for node representation learning in signed graphs. Our main contributions are summarized as follows:\n• End-to-end learning. We design SGDNET that performs end-to-end node representation learning. Given a signed graph, SGDNET produces node embeddings through multiple signed graph diffusion (SGD) layers (Figure 1(a)), which are fed into a loss function of a specific task such as link sign prediction. • Novel feature diffusion. 
We propose a signed random walk diffusion method that prop-\nagates node embeddings on signed edges based on random walks considering signs, and injects local features (Figure 1(c)). This enables SGDNET to learn distinguishable node representations considering multi-hop neighbors while preserving local information. • Experiments. Extensive experiments show that SGDNET effectively learns node represen-\ntations of signed social graphs for link sign prediction, giving at least 3.9% higher accuracy than the state-of-the-art models in real datasets (Table 2)." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 GRAPH CONVOLUTIONAL NETWORKS ON UNSIGNED GRAPHS", "text": "Graph convolutional network (GCN) (Kipf & Welling, 2017) models the latent representation of a node by employing a convolutional operation on the features of its neighbors. Various GCN-based approaches (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) have aroused considerable attention since they enable diverse graph supervised tasks (Kipf & Welling, 2017; Yao et al., 2019; Xu et al., 2019a) to be performed concisely under an end-to-end framework. However, the first generation of GCN models exhibit performance degradation due to the over-smoothing and vanishing gradient problems. Several works (Li et al., 2018; Oono & Suzuki, 2020) have theoretically revealed the over-smoothing problem. Also, Li et al. (Li et al., 2019a) have empirically shown that stacking more GCN layers leads to the vanishing gradient problem as in convolutional neural networks (He et al., 2016). Consequently, most GCN-based models (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) are shallow; i.e., they do not use the feature information in faraway nodes when modeling node embeddings.\nA recent research direction aims at resolving the limitation. Klicpera et al. (Klicpera et al., 2019) proposed APPNP exploiting Personalized PageRank (Jeh & Widom, 2003) to not only propagate hidden node embeddings far but also preserve local features, thereby preventing aggregated features from being over-smoothed. Li et al. (Li et al., 2019a) suggested ResGCN adding skip connections between GCN layers, as in ResNet (He et al., 2016). However, all of these models do not provide how to use signed edges since they are based on the homophily assumption (Kipf & Welling, 2017), i.e.,\nusers having connections are likely to be similar, which is not valid for negative edges. As opposed to the homophily, negative edges have the semantics of heterophily (Rogers, 2010), i.e., users having connections are dissimilar. Although these methods can still be applied to signed graphs by ignoring the edge signs, their trained features have limited capacity." }, { "heading": "2.2 NETWORK EMBEDDING AND GRAPH CONVOLUTIONAL NETWORKS ON SIGNED GRAPHS", "text": "Traditional methods on network embedding extract latent node features specialized for signed graphs in an unsupervised manner. Kim et al. (Kim et al., 2018) proposed SIDE which optimizes a likelihood over direct and indirect signed connections on truncated random walks sampled from a signed graph. Xu et al. (Xu et al., 2019b) developed SLF considering positive, negative, and non-linked relationships between nodes to learn non-negative node embeddings. However, such approaches are not end-to-end, i.e., they are not directly optimized for solving a supervised task such as link prediction.\nThere are recent progresses on end-to-end learning on signed networks under the GCN framework. 
Derr et al. (Derr et al., 2018b) proposed SGCN which extends the GCN mechanism to signed graphs considering balanced and unbalanced relationships supported by structural balance theory (Holland & Leinhardt, 1971). Li et al. (Li et al., 2020) developed SNEA using attention techniques to reveal the importance of these relationships. However, such state-of-the-art models do not consider the over-smoothing problem since they are directly extended from GCN." }, { "heading": "3 PROPOSED METHOD", "text": "We propose SGDNET (SIGNED GRAPH DIFFUSION NETWORK), a novel end-to-end model for node representation learning in signed graphs. Our SGDNET aims to properly aggregate node features on signed edges, and to effectively use the features of multi-hop neighbors so that generated features are not over-smoothed. Our main ideas are to diffuse node features along random walks considering the signs of edges, and to inject local node features at each aggregation.\nFigure 1 depicts the overall architecture of SGDNET. Given a signed graph G and initial node features X ∈ R^{n×d_0} as shown in Figure 1(a), SGDNET extracts the final node embeddings H(L) ∈ R^{n×d_L} through multiple SGD layers, where n is the number of nodes, L is the number of SGD layers, and d_l is the embedding dimension of the l-th layer. Then, H(L) is fed into a loss function of a specific task so that the embeddings and the task are jointly trained in an end-to-end framework. Given H(l−1), the l-th SGD layer aims to learn H(l) based on feature transformations and signed random walk diffusion F_d(·) as shown in Figure 1(b). The layer also uses the skip connection to prevent the vanishing gradient problem when the depth of SGDNET increases.\nFigure 1(c) illustrates the intuition behind the signed random walk diffusion. Each node has two features corresponding to positive and negative surfers, respectively. The surfer flips its sign when moving along negative edges, while the sign is kept along positive edges. For example, the positive (or negative) surfer becomes positive at node v if it moves from a positively connected node u (or a negatively connected node t). As a result, the aggregated features at node v become similar to those connected by positive edges (e.g., node u), and different from those connected by negative edges (e.g., node t). In other words, it satisfies homophily and heterophily at the same time, while unsigned GCNs cannot handle the heterophily of negative edges. Furthermore, we inject the local feature (i.e., the input feature of the module) of node v at each aggregation so that the resulting features remain distinguishable during the diffusion." }, { "heading": "3.1 SIGNED GRAPH DIFFUSION LAYER", "text": "Given a signed graph G and the node embeddings H(l−1) from the previous layer, the l-th SGD layer learns new embeddings H(l) as shown in Figure 1(b). It first transforms H(l−1) into hidden features H̃(l) as H̃(l) = H(l−1) W_t^(l) with a learnable parameter W_t^(l) ∈ R^{d_{l−1}×d_l}. Then, it applies the signed random walk diffusion which is represented as the function F_d(G, H̃(l)) that returns P(l) ∈ R^{n×d_l} and M(l) ∈ R^{n×d_l} as the positive and the negative embeddings, respectively (details in Section 3.2). The embeddings are concatenated and transformed as follows:\nH(l) = φ([P(l) || M(l)] W_n^(l) + H(l−1)) (1)\nwhere φ(·) is a non-linear activation such as tanh, || denotes horizontal concatenation of two matrices, and W_n^(l) ∈ R^{2d_l×d_l} is a trainable weight matrix that learns a relationship between P(l) and M(l).
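As an illustration, one SGD layer of Equation (1) can be sketched in PyTorch as below. The diffusion operator F_d of Section 3.2 is treated as a given callable, the names are ours rather than the authors' released code, and the layer assumes d_{l−1} = d_l (the experiments use 32 throughout) so that the skip connection discussed next type-checks.

```python
import torch
import torch.nn as nn

class SGDLayer(nn.Module):
    """One SGD layer implementing Equation (1); assumes equal input/output dimensions."""
    def __init__(self, dim, diffusion):
        super().__init__()
        self.W_t = nn.Linear(dim, dim, bias=False)      # feature transform W_t^(l)
        self.W_n = nn.Linear(2 * dim, dim, bias=False)  # combines P(l) and M(l)
        self.diffusion = diffusion                      # signed random walk operator F_d

    def forward(self, H_prev):
        H_tilde = self.W_t(H_prev)                # H~(l) = H(l-1) W_t^(l)
        P, M = self.diffusion(H_tilde)            # positive / negative embeddings
        out = self.W_n(torch.cat([P, M], dim=1))  # [P(l) || M(l)] W_n^(l)
        return torch.tanh(out + H_prev)           # skip connection with H(l-1)
```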
We use the skip connection (He et al., 2016; Li et al., 2019a) with H(l−1) in Equation (1) to avoid the vanishing gradient issue which frequently occurs when multiple layers are stacked." }, { "heading": "3.2 SIGNED RANDOM WALK DIFFUSION", "text": "We design the signed random walk diffusion operator F_d(·) used in the l-th SGD layer. Given the signed graph G and the hidden node embeddings H̃(l), the diffusion operator F_d(·) diffuses the node features based on random walks considering edge signs so that it properly aggregates node features on signed edges and prevents the aggregated features from being over-smoothed.\nSigned random walks are performed by a signed random surfer (Jung et al., 2016) who has the + or − sign when moving around the graph. Figure 2(a) shows signed random walks on four cases according to edge signs: 1) a friend’s friend, 2) a friend’s enemy, 3) an enemy’s friend, and 4) an enemy’s enemy. The surfer starts from node s with the + sign. If it encounters a negative edge, the surfer flips its sign from + to −, or vice versa. Otherwise, the sign is kept. The surfer determines whether a target node t is a friend of node s or not according to its sign.\nF_d(·) exploits the signed random walk for diffusing node features on signed edges. Each node is represented by two feature vectors which represent the positive and negative signs, respectively. Let k denote the number of diffusion steps or random walk steps. Then, p_v^(k) ∈ R^{d_l×1} and m_v^(k) ∈ R^{d_l×1} are aggregated at node v, where p_v^(k) (or m_v^(k)) is the feature vector carried by the positive (or negative) surfer at step k. These are recursively obtained by the following equations:\np_v^(k) = (1 − c) ( Σ_{u ∈ ←N_v^+} (1/|→N_u|) p_u^(k−1) + Σ_{t ∈ ←N_v^−} (1/|→N_t|) m_t^(k−1) ) + c h̃_v^(l)\nm_v^(k) = (1 − c) ( Σ_{t ∈ ←N_v^−} (1/|→N_t|) p_t^(k−1) + Σ_{u ∈ ←N_v^+} (1/|→N_u|) m_u^(k−1) ) (2)\nwhere ←N_v^s is the set of incoming neighbors of node v connected with edges of sign s, →N_u is the set of outgoing neighbors of node u regardless of edge signs, h̃_v^(l) is the local feature of node v (i.e., the v-th row vector of H̃(l)), and 0 < c < 1 is the local feature injection ratio. That is, the features are computed by the signed random walk feature diffusion with weight 1 − c and the local feature injection with weight c, as detailed below.\nSigned Random Walk Feature Diffusion. Figure 2(b) illustrates how p_v^(k) and m_v^(k) are diffused by the signed random walks according to Equation (2). Suppose the positive surfer visits node v at step k. For this to happen, the positive surfer of an incoming neighbor u at step k − 1 should choose the edge (u → v, +) with probability 1/|→N_u|. This transition to node v along the positive edge allows the surfer to keep its positive sign. At the same time, the negative surfer of an incoming neighbor t at step k − 1 should move along the edge (t → v, −) with probability 1/|→N_t|. In this case, the surfer flips its sign from − to +. Considering these signed random walks, p_v^(k) is obtained by the weighted aggregation of p_u^(k−1) and m_t^(k−1). Similarly, m_v^(k) is aggregated as shown in Figure 2(b).\nLocal Feature Injection. Although the feature diffusion above properly considers edge signs, the generated features could be over-smoothed after many steps if we depend solely on the diffusion. In other words, it considers only the graph information explored by the signed random surfer, while the local information in the hidden feature h̃_v^(l) is disregarded during the diffusion.
Hence, as shown in Figure 2(b), we explicitly inject the local feature h̃_v^{(l)} into p_v^{(k)} with weight c at each aggregation in Equation (2) so that the diffused features are not over-smoothed. Local features are injected only into the + embeddings because we consider that a node should trust (+) its own information (i.e., its local feature)." }, { "heading": "3.3 CONVERGENCE GUARANTEE OF SIGNED RANDOM WALK DIFFUSION", "text": "Suppose that P^{(k)} = [p_1^{(k)⊤}; · · · ; p_n^{(k)⊤}] and M^{(k)} = [m_1^{(k)⊤}; · · · ; m_n^{(k)⊤}] represent the positive and negative embeddings of all nodes, respectively, where ; denotes vertical concatenation. Let A_s be the adjacency matrix for sign s such that A_{s,uv} is 1 for a signed edge (u→v, s), and 0 otherwise. Then, Equation (2) is vectorized as follows:

P^{(k)} = (1−c)(Ã_+^⊤ P^{(k−1)} + Ã_−^⊤ M^{(k−1)}) + c H̃^{(l)}
M^{(k)} = (1−c)(Ã_−^⊤ P^{(k−1)} + Ã_+^⊤ M^{(k−1)})
⟺ T^{(k)} = (1−c) B̃ T^{(k−1)} + cQ    (3)

where Ã_s = D^{−1} A_s is the normalized matrix for sign s, and D is the diagonal out-degree matrix (i.e., D_{ii} = |→N_i|). The left equations of Equation (3) are compactly represented as the right equation, where

T^{(k)} = [P^{(k)}; M^{(k)}],   B̃ = [Ã_+^⊤  Ã_−^⊤ ; Ã_−^⊤  Ã_+^⊤],   Q = [H̃^{(l)}; 0].

Then, T^{(k)} is guaranteed to converge as shown in the following theorem.

Theorem 1 The diffused features in T^{(k)} converge to equilibrium for c ∈ (0, 1) as follows:

T* = lim_{k→∞} T^{(k)} = lim_{k→∞} ( ∑_{i=0}^{k−1} (1−c)^i B̃^i ) Q̃ = (I − (1−c)B̃)^{−1} Q̃,  where Q̃ := cQ.    (4)

If we iterate Equation (3) K times for 1 ≤ k ≤ K, the exact solution T* is approximated as

T* ≈ T^{(K)} = Q̃ + (1−c)B̃Q̃ + · · · + (1−c)^{K−1}B̃^{K−1}Q̃ + (1−c)^K B̃^K T^{(0)}    (5)

where ‖T* − T^{(K)}‖₁ ≤ (1−c)^K ‖T* − T^{(0)}‖₁, and T^{(0)} = [P^{(0)}; M^{(0)}] is the initial value of Equation (3). □

Proof 1 A proof sketch is to show that the spectral radius of B̃ is less than or equal to 1, which guarantees the convergence of the geometric series with (1−c)B̃. See the details in Appendix A.1. □

According to Theorem 1, B̃^K Q̃ is the node features diffused by K-step signed random walks with Q̃, where B̃^K is interpreted as the transition matrix of K-step signed random walks. Thus, the approximation is the sum of the diffused features from 1 to K steps with a decaying factor 1−c, i.e., the effect of distant nodes gradually decreases while that of neighboring nodes remains high. This is how SGDNET prevents diffused features from being over-smoothed. Also, the approximation error ‖T* − T^{(K)}‖₁ decreases exponentially as K increases due to the term (1−c)^K. Another point is that the iteration of Equation (3) converges to the same solution no matter which P^{(0)} and M^{(0)} are given. In this work, we initialize P^{(0)} with H̃^{(l)}, and randomly initialize M^{(0)} in [−1, 1]. The signed random walk diffusion operator F_d(·) iterates Equation (3) K times for 1 ≤ k ≤ K, where K is the number of diffusion steps, and returns P^{(l)} ← P^{(K)} and M^{(l)} ← M^{(K)} as the outputs of the diffusion module at the l-th SGD layer. An implementation sketch of the diffusion operator is given below. The detailed pseudocode of SGDNET is described in Appendix A.3, and its time complexity is analyzed in Appendix A.2.
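The following sparse PyTorch sketch (our own illustration under the notation above, not the released implementation) iterates the vectorized recurrence of Equation (3):

import torch

def signed_diffusion(A_pos_t, A_neg_t, H_tilde, c=0.15, K=10):
    # A_pos_t, A_neg_t: sparse n x n matrices A~_+^T and A~_-^T, i.e., the
    # sign-wise adjacencies normalized by out-degree and then transposed.
    P = H_tilde                            # P(0) = H~(l)
    M = 2 * torch.rand_like(H_tilde) - 1   # M(0) drawn uniformly from [-1, 1]
    for _ in range(K):                     # T(k) = (1-c) B~ T(k-1) + cQ
        P_new = (1 - c) * (torch.sparse.mm(A_pos_t, P)
                           + torch.sparse.mm(A_neg_t, M)) + c * H_tilde
        M = (1 - c) * (torch.sparse.mm(A_neg_t, P)
                       + torch.sparse.mm(A_pos_t, M))
        P = P_new
    return P, M                            # P(l) = P(K), M(l) = M(K)

Each iteration costs O(m d_l) time for the sparse multiplications, matching Theorem 3 in Appendix A.2, and by Theorem 1 the iterates converge geometrically at rate 1−c regardless of the random initialization of M^{(0)}.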
" }, { "heading": "3.4 LOSS FUNCTION FOR LINK SIGN PREDICTION", "text": "Link sign prediction is to predict the missing sign of a given edge. As shown in Figure 1(a), SGDNET produces the final node embeddings H^{(L)}. The embeddings are fed into a loss function L(G, H^{(L)}; Θ) = L_sign(G, H^{(L)}) + λ L_reg(Θ), where Θ is the set of model parameters, L_sign(·) is the binary cross-entropy loss, and L_reg(·) is the L2 regularization loss with weight decay λ. For a signed edge (u→v, s), the edge feature is z_uv = h_u^{(L)} || h_v^{(L)} ∈ R^{1×2d_L}, where h_u^{(L)} is the u-th row vector of H^{(L)}. Let E be the set of signed edges. Then, L_sign(·) is represented as follows:

L_sign(G, H^{(L)}) = − ∑_{(u→v,s)∈E} ∑_{t∈{+,−}} I(t = s) log(softmax_t(z_uv W))

where W ∈ R^{2d_L×2} is a learnable weight matrix, softmax_t(·) is the probability of sign t after the softmax operation, and I(·) returns 1 if the given predicate is true, and 0 otherwise." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the effectiveness of SGDNET through the link sign prediction task.

Datasets. We perform experiments on four standard signed graphs summarized in Table 1: Bitcoin-Alpha (Kumar et al., 2016), Bitcoin-OTC (Kumar et al., 2016), Slashdot (Kunegis et al., 2009), and Epinions (Guha et al., 2004). We provide a detailed description of each dataset in Appendix A.4. We also report additional experiments on the Wikipedia dataset (Leskovec et al., 2010b) in Appendix A.5.

Competitors. We compare our proposed SGDNET with the following competitors:

• APPNP (Klicpera et al., 2019): an unsigned GCN model based on Personalized PageRank. • ResGCN (Li et al., 2019a): another unsigned GCN model exploiting skip connections to deeply stack multiple layers. • SIDE (Kim et al., 2018): a network embedding model optimizing the likelihood over signed edges using random walk sequences to encode structural information into node embeddings. • SLF (Xu et al., 2019b): another network embedding model considering positive, negative, and non-linked relationships to learn non-negative node embeddings. • SGCN (Derr et al., 2018b): a state-of-the-art signed GCN model considering balanced and unbalanced paths motivated by balance theory to propagate embeddings. • SNEA (Li et al., 2020): another signed GCN model extending SGCN by learning attentions on the balanced and unbalanced paths.

We use the absolute adjacency matrix for APPNP and ResGCN since they handle only unsigned edges. All methods are implemented with PyTorch and NumPy in Python. We use a machine with an Intel E5-2630 v4 2.2GHz CPU and a GeForce GTX 1080 Ti for the experiments.

Evaluation Metrics. We randomly split the edges of a signed graph into training and test sets with an 8:2 ratio. As shown in Table 1, the sign ratio is highly skewed toward the positive sign, i.e., the sampled datasets are naturally imbalanced. Considering the class imbalance, we measure the area under the curve (AUC) to evaluate predictive performance. We also report F1-macro, which measures the average of the ratios of correct predictions for each sign, since negative edges need to be treated as being as important as positive edges (i.e., it gives equal importance to each class). A higher value of AUC or F1-macro indicates better performance. We repeat each experiment 10 times with different random seeds and report the average and standard deviation of test values; a small sketch of this evaluation protocol is shown below.
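The following scikit-learn sketch (ours, for illustration; variable names are assumptions) shows how the two metrics can be computed for a set of test edges:

import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(pos_prob, true_signs):
    # pos_prob: predicted probability of the + sign for each test edge.
    # true_signs: ground-truth signs in {+1, -1}.
    y_true = (np.asarray(true_signs) > 0).astype(int)
    y_pred = (np.asarray(pos_prob) >= 0.5).astype(int)
    auc = roc_auc_score(y_true, pos_prob)                 # threshold-free
    f1_macro = f1_score(y_true, y_pred, average="macro")  # equal class weight
    return auc, f1_macro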
Hyperparameter Settings. We set the dimension of final node embeddings to 32 for all methods so that their embeddings have the same learning capacity (see its effect in Appendix A.6). We perform 5-fold cross-validation for each method to find the best hyperparameters and measure the test accuracy with the selected ones. In the cross-validation for SGDNET, the number L of SGD layers is sought between 1 and 6, and the local feature injection ratio c is selected from 0.05 to 0.95 with step size 0.1. We set the number K of diffusion steps to 10 and the feature dimension d_l of each layer to 32. We follow the range of each hyperparameter recommended in its corresponding paper for the cross-validation of the other models. Our model is trained by the Adam optimizer (Kingma & Ba, 2015), where the learning rate is 0.01, the weight decay λ is 0.001, and the number of epochs is 100. We summarize the hyperparameters used by SGDNET for each dataset in Appendix A.7." }, { "heading": "4.1 LINK SIGN PREDICTION", "text": "We evaluate the performance of each method on link sign prediction. Tables 2 and 3 summarize the experimental results in terms of AUC and F1-macro, respectively. Note that our SGDNET shows the best performance in terms of both AUC and F1-macro. SGDNET presents 3.9∼6.4% and 1.2∼7.4% improvements over the second-best models in terms of AUC and F1-macro, respectively. We have the following observations.

• The unsigned GCN models APPNP and ResGCN show worse performance than SGDNET, which shows the importance of using sign information. • The performance of network embedding techniques such as SIDE and SLF is worse than that of the GCN-based models; this shows the importance of jointly learning feature extraction and link sign prediction. • The performance of SGCN and SNEA, which use limited features from nodes within 2∼3 hops, is worse than that of SGDNET, which exploits up to K-hop neighbors' features where K is set to 10 in these experiments. This indicates that carefully exploiting features from distant nodes as well as neighboring ones is crucial for the performance." }, { "heading": "4.2 EFFECT OF DIFFUSION STEPS", "text": "We investigate the effect of the feature diffusion in SGDNET for learning signed graphs. We use one SGD layer, and set the local feature injection ratio c to 0.15 to evaluate the pure effect of the diffusion module; we vary the number K of diffusion steps from 1 to 10 and evaluate the performance of SGDNET in terms of F1-macro for each diffusion step. Also, we compare SGDNET to SGCN, a state-of-the-art model for learning signed graphs. The number of diffusion steps of SGCN is determined by its number of layers. Figure 3 shows that the performance of SGDNET gradually improves as K increases, while that of SGCN dramatically decreases over all datasets. This indicates that SGCN suffers from the performance degradation problem when its network becomes deep, i.e., it is difficult to use information beyond 3 hops in SGCN. On the other hand, SGDNET utilizes features of farther nodes, and generates more expressive and stable features than SGCN does. Note that the performance of SGDNET generally converges after a sufficient number of diffusion steps, which is consistent with Theorem 1." }, { "heading": "4.3 EFFECT OF LOCAL INJECTION RATIO", "text": "We examine the effect of the local injection ratio c in the diffusion module of SGDNET. We use one SGD layer, and set the number K of diffusion steps to 10; we vary c from 0.05 to 0.95 by 0.1, and measure the performance on the link sign prediction task in terms of F1-macro. Figure 4 shows the effect of c on the predictive performance of SGDNET. For small datasets such as Bitcoin-Alpha and Bitcoin-OTC, c between 0.15 and 0.35 provides better performance. On the other hand, c around 0.5 shows higher accuracy for large datasets such as Slashdot and Epinions. For all datasets, a too low or too high value of c (e.g., 0.05 or 0.95) results in poor performance."
}, { "heading": "5 CONCLUSION", "text": "In this paper, we propose SIGNED GRAPH DIFFUSION NETWORK (SGDNET), a novel graph neural network that performs end-to-end node representation learning for link sign prediction in signed graphs. We propose a signed random walk diffusion method to properly diffuse node features on signed edges, and suggest a local feature injection method to make diffused features distinguishable. Our diffusion method empowers SGDNET to effectively train node embeddings considering multi-hop neighbors while preserving local information. Our extensive experiments show that SGDNET provides the best accuracy, outperforming the state-of-the-art models in link sign prediction. Future research directions include extending our method to multi-view networks." }, { "heading": "A APPENDIX", "text": "A.1 CONVERGENCE ANALYSIS

Theorem 1 (Convergence of Signed Random Walk Diffusion) The diffused features in T^{(k)} converge to equilibrium for c ∈ (0, 1) as follows:

T* = lim_{k→∞} T^{(k)} = lim_{k→∞} ( ∑_{i=0}^{k−1} (1−c)^i B̃^i ) Q̃ = (I − (1−c)B̃)^{−1} Q̃,  where Q̃ := cQ.

If we iterate Equation (3) K times for 1 ≤ k ≤ K, the exact solution T* is approximated as

T* ≈ T^{(K)} = Q̃ + (1−c)B̃Q̃ + · · · + (1−c)^{K−1}B̃^{K−1}Q̃ + (1−c)^K B̃^K T^{(0)}

where ‖T* − T^{(K)}‖₁ ≤ (1−c)^K ‖T* − T^{(0)}‖₁, and T^{(0)} = [P^{(0)}; M^{(0)}] is the initial value of Equation (3). □

Proof 1 The iteration of Equation (3) is written as follows:

T^{(k)} = (1−c) B̃ T^{(k−1)} + cQ = ((1−c)B̃)² T^{(k−2)} + ((1−c)B̃ + I) Q̃ = · · · = ((1−c)B̃)^k T^{(0)} + ( ∑_{i=0}^{k−1} (1−c)^i B̃^i ) Q̃.    (6)

Note that the spectral radius ρ(B̃) is less than or equal to 1 by Theorem 2; thus, for 0 < c < 1, the spectral radius of (1−c)B̃ is less than 1, i.e., ρ((1−c)B̃) = (1−c)ρ(B̃) ≤ (1−c) < 1. Hence, as k → ∞, the power of (1−c)B̃ converges to 0, i.e., lim_{k→∞} (1−c)^k B̃^k = 0. Also, the second term in Equation (6) becomes the infinite geometric series of (1−c)B̃, which converges as follows:

T* = lim_{k→∞} T^{(k)} = 0 + lim_{k→∞} ( ∑_{i=0}^{k−1} (1−c)^i B̃^i ) Q̃ = (I − (1−c)B̃)^{−1} Q̃

where the convergence always holds if ρ((1−c)B̃) < 1. The converged solution T* satisfies T* = (1−c)B̃T* + cQ. Also, T* is approximated as Equation (5). Then, the approximation error ‖T* − T^{(K)}‖₁ is bounded as follows:

‖T* − T^{(K)}‖₁ = ‖(1−c)B̃T* − (1−c)B̃T^{(K−1)}‖₁ ≤ (1−c)‖B̃‖₁‖T* − T^{(K−1)}‖₁ ≤ (1−c)‖T* − T^{(K−1)}‖₁ ≤ · · · ≤ (1−c)^K ‖T* − T^{(0)}‖₁    (7)

where ‖·‖₁ is the L1-norm of a matrix. Note that the bound ‖B̃‖₁ ≤ 1 of Theorem 2 is used in the above derivation. □

Theorem 2 (Bound of Spectral Radius of B̃) The spectral radius of B̃ in Equation (3) is less than or equal to 1, i.e., ρ(B̃) ≤ ‖B̃‖₁ ≤ 1. □

Proof 2 According to the spectral radius theorem (Trefethen & Bau III, 1997), ρ(B̃) ≤ ‖B̃‖₁, where ‖·‖₁ denotes the L1-norm of a given matrix, i.e., the maximum absolute column sum of the matrix. Note that the entries of B̃ are non-negative probabilities; thus, the absolute column sums of B̃ are equal to its column sums, which are obtained as follows:

1_{2n}^⊤ B̃ = [1_n^⊤ Ã_+^⊤ + 1_n^⊤ Ã_−^⊤,  1_n^⊤ Ã_−^⊤ + 1_n^⊤ Ã_+^⊤] = [1_n^⊤ Ã^⊤,  1_n^⊤ Ã^⊤] = [b^⊤,  b^⊤]    (8)

where Ã^⊤ = Ã_+^⊤ + Ã_−^⊤, and 1_n is the n-dimensional all-ones vector. Note that Ã_s^⊤ = A_s^⊤ D^{−1} for sign s, where D is the diagonal out-degree matrix (i.e., D_{uu} = |→N_u|). Then, 1_n^⊤ Ã^⊤ is represented as

1_n^⊤ Ã^⊤ = 1_n^⊤ (A_+^⊤ + A_−^⊤) D^{−1} = 1_n^⊤ |A|^⊤ D^{−1} = (|A| 1_n)^⊤ D^{−1} = b^⊤

where |A| = A_+ + A_− is the absolute adjacency matrix. The u-th entry of |A| 1_n is the out-degree of node u, denoted by |→N_u|. Note that D_{uu}^{−1} is 1/|→N_u| if u is not a deadend.
Otherwise, D_{uu}^{−1} = 0 (i.e., a deadend node has no outgoing edges). Hence, the u-th entry of b^⊤ is 1 if node u is not a deadend, or 0 otherwise; its maximum value is less than or equal to 1. Therefore, ρ(B̃) ≤ ‖B̃‖₁ ≤ 1. □

A.2 TIME COMPLEXITY ANALYSIS

Theorem 3 (Time Complexity of SGDNET) The time complexity of the l-th SGD layer is O(K m d_l + n d_{l−1} d_l), where K is the number of diffusion steps, d_l is the feature dimension of the l-th layer, and m and n are the number of edges and nodes, respectively. Assuming all d_l are set to d, SGDNET with L SGD layers takes O(L K m d + L n d²) time. □

Proof 3 The feature transform operations require O(n d_{l−1} d_l) time due to their dense matrix multiplication. Each iteration of the signed random walk diffusion in Equation (3) takes O(m d_l) time due to the sparse matrix multiplication B̃ T^{(k−1)}, where the number of non-zeros of B̃ is O(m). Thus, O(K m d_l) time is required for the K iterations. Overall, the total time complexity of the l-th SGD layer is O(K m d_l + n d_{l−1} d_l). □

A.3 PSEUDOCODE OF SGDNET

Algorithm 1 describes SGDNET's overall procedure, which is depicted in Figure 1. Given a signed adjacency matrix A and related hyperparameters (e.g., the numbers L and K of SGD layers and diffusion steps, respectively), SGDNET produces the final hidden node features H^{(L)}, which are fed to a loss function as described in Section 3.4. It first computes the normalized matrices Ã_+ and Ã_− (line 1). Then, it performs the forward function of SGDNET (lines 3∼12). The forward function repeats the signed random walk diffusion K times (lines 6∼9), and then performs the non-linear feature transformation skip-connected with H^{(l−1)} (line 11).

Algorithm 1: Pseudocode of SGDNET
Input: signed adjacency matrix A, initial node feature matrix X, number K of diffusion steps, number L of SGD layers, and local feature injection ratio c
Output: hidden node feature matrix H^{(L)}
1: compute the normalized matrices for each sign, i.e., Ã_+ = D^{−1}A_+ and Ã_− = D^{−1}A_−
2: initialize H^{(0)} with X
3: for l ← 1 to L do  ▷ start the forward function of SGDNET
4:   perform the feature transformation H̃^{(l)} ← H^{(l−1)} W_t^{(l)}
5:   initialize P^{(0)} with H̃^{(l)} and initialize M^{(0)} randomly in [−1, 1]
6:   for k ← 1 to K do  ▷ perform the signed random walk diffusion in Equation (2)
7:     P^{(k)} ← (1−c)(Ã_+^⊤ P^{(k−1)} + Ã_−^⊤ M^{(k−1)}) + c H̃^{(l)}
8:     M^{(k)} ← (1−c)(Ã_−^⊤ P^{(k−1)} + Ã_+^⊤ M^{(k−1)})
9:   end for
10:  set P^{(l)} ← P^{(K)} and M^{(l)} ← M^{(K)}
11:  compute the l-th hidden node features H^{(l)} ← tanh([P^{(l)} || M^{(l)}] W_n^{(l)} + H^{(l−1)})
12: end for
13: return H^{(L)}

A.4 DETAILED DESCRIPTION OF DATASETS

The Bitcoin-Alpha and Bitcoin-OTC datasets (Kumar et al., 2016) are extracted from directed online trust networks served by Bitcoin Alpha and Bitcoin OTC, respectively. The Slashdot dataset (Kunegis et al., 2009) is collected from Slashdot, a technology news site which allows a user to create positive or negative links to others. The Epinions dataset (Guha et al., 2004) is a directed signed graph scraped from Epinions, a product review site in which users mark their trust or distrust of others.

The publicly available signed graphs do not contain initial node features even though they have been used as standard datasets in signed graph analysis. For this reason, many previous works (Derr et al., 2018b; Li et al., 2020) on GCNs for signed graphs have exploited singular value decomposition (SVD) to extract initial node features. Thus, we follow this setup, i.e., X = U Σ_{d_i} is the initial feature matrix for all GCN-based models, where A ≈ U Σ_{d_i} V^⊤ is obtained by a truncated SVD method, called Randomized SVD (Halko et al., 2011), with target rank d_i = 128. Note that the method is very efficient (i.e., its time complexity is O(n d_i²) where n is the number of nodes) and is performed only once as preprocessing in advance; thus, it does not affect the computational performance of training and inference. A small sketch of this preprocessing step is given below.
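The following sketch illustrates this feature extraction (our own illustration; the function name and edge-list format are assumptions, not part of the released code):

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.extmath import randomized_svd

def initial_features(edges, n, d_i=128, seed=0):
    # edges: iterable of (u, v, sign) triples with sign in {+1, -1}.
    rows, cols, signs = map(np.array, zip(*edges))
    A = csr_matrix((signs.astype(float), (rows, cols)), shape=(n, n))
    U, S, _ = randomized_svd(A, n_components=d_i, random_state=seed)
    return U * S   # X = U Sigma_{d_i}: one d_i-dimensional row per node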
A.5 ADDITIONAL EXPERIMENTS ON WIKIPEDIA DATASET

We perform additional experiments on the Wikipedia dataset (Leskovec et al., 2010b), which has also been frequently used in signed graph analysis. The Wikipedia dataset is a signed graph representing the administrator election procedure in Wikipedia, where a user can vote for (+) or against (−) a candidate. The numbers of nodes and edges are 7,118 and 103,675, respectively. Figure 5 shows the experimental results on the dataset. As seen in Figures 5(a) and 5(b), SGDNET outperforms the other methods in terms of AUC and F1-macro, respectively. Figure 5(c) indicates that our diffusion mechanism still works on the Wikipedia dataset. Figure 5(d) shows the effect of the local feature injection ratio c, indicating that a properly selected c such as 0.5 is helpful for the performance.

A.6 EFFECT OF EMBEDDING DIMENSION

We investigate the effect of the node embedding dimension of each model on the datasets listed in Table 1. For this experiment, we vary the dimension of the hidden and final node embeddings from 8 to 128, and observe the trend of AUC in the link sign prediction task. As shown in Figure 6, SGDNET outperforms its competitors over all the tested dimensions, and it is relatively less sensitive to the embedding dimension than the other models on all datasets except Bitcoin-Alpha.

A.7 HYPERPARAMETER CONFIGURATION" } ]
2020
null
SP:bbd6a6fcf9731e02fdf3e45c4eb4156be2c38d33
[ "This work studies the problem of predicting model performance with more training data when the data are collected from different sources. The predictor is a function of the number training examples, and the ratio of examples from each source. The predictor needs to be built from a small number of training examples the observed model performance, and applied to larger numbers of training example without actually training the model. The predictor can be used to decide a good data collection policy which is expected to have best model performance. The proposed solution is a simple parametric form of the predictor, which is log-linear in the log of # training examples, and log-rational in the source distribution vector. The solution is motivated by recent literature about the same task for single source. The correctness of the solution is proved for several cases: linear regression, M-estimator and nonparametric binning. The performance of the predictor is then evaluated for a number of real-world tasks: linear regression for Amazon book rating, semantic parsing, machine translation and multitask question answering. The performance is measured by r2 score between the actual performance and predicted performance. The proposed predictor has a clear advantage over the baseline of using a linear predictor." ]
Real-world machine learning systems are often trained using a mix of data sources with varying cost and quality. Understanding how the size and composition of a training dataset affect model performance is critical for advancing our understanding of generalization, as well as designing more effective data collection policies. We show that there is a simple, accurate way to predict the loss incurred by a model based on data size and composition. Our work expands recent observations of log-linear generalization error and uses this to cast model performance prediction as a learning problem. Using the theory of optimal experimental design, we derive a simple rational function approximation to generalization error that can be fitted using a few model training runs. Our approach achieves nearly exact (r > .93) predictions of model performance under substantial extrapolation in two different standard supervised learning tasks and is accurate (r > .83) on more challenging machine translation and question answering tasks where baselines achieve worse-than-random performance.
[]
[ { "authors": [ "A. Agarwal", "M. Dahleh", "T. Sarkar" ], "title": "A Marketplace for Data: An Algorithmic Solution", "venue": "arXiv preprint arXiv:1805.08125,", "year": 2019 }, { "authors": [ "S. Ben-David", "T. Lu", "T. Luu", "D. Pal" ], "title": "Impossibility theorems for domain adaptation", "venue": "In Artificial Intelligence and Statistics (AISTATS),", "year": 2010 }, { "authors": [ "D. Cer", "M. Diab", "E. Agirre", "I. Lopez-Gazpio", "L. Specia" ], "title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "venue": "International Workshop on Semantic Evaluation (SemEval),", "year": 2017 }, { "authors": [ "C. Clark", "K. Lee", "M. Chang", "T. Kwiatkowski", "M. Collins", "K. Toutanova" ], "title": "Boolq: Exploring the surprising difficulty of natural yes/no questions", "venue": "In North American Association for Computational Linguistics", "year": 2019 }, { "authors": [ "K. Crammer", "M. Kearns", "J. Wortman" ], "title": "Learning from multiple sources", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "B. Dolan", "C. Quirk", "C. Brockett" ], "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "venue": "In International Conference on Computational Linguistics (COLING),", "year": 2004 }, { "authors": [ "J. Duchi", "E. Hazan", "Y. Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "In Conference on Learning Theory (COLT),", "year": 2010 }, { "authors": [ "A. Ghorbani", "J. Zou" ], "title": "Data shapley: Equitable valuation of data for machine learning", "venue": "arXiv preprint arXiv:1904.02868,", "year": 2019 }, { "authors": [ "A. Ghorbani", "M. Kim", "J. Zou" ], "title": "A Distributional Framework for Data Valuation", "venue": "arXiv preprint arXiv:2002.12334,", "year": 2020 }, { "authors": [ "S. Hanneke" ], "title": "A bound on the label complexity of agnostic active learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2007 }, { "authors": [ "J. Hestness", "S. Narang", "N. Ardalani", "G. Diamos", "H. Jun", "H. Kianinejad", "M. Patwary", "Y. Yang", "Y. Zhou" ], "title": "Deep learning scaling is predictable, empirically", "venue": "arXiv preprint arXiv:1712.00409,", "year": 2017 }, { "authors": [ "J. Hu", "M. Xia", "G. Neubig", "J. Carbonell" ], "title": "Domain adaptation of neural machine translation by lexicon induction", "venue": "In Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "R. Jia", "D. Dao", "B. Wang", "F.A. Hubis", "N. Hynes", "N.M. Gurel", "B. Li", "C. Zhang", "D. Song", "C. Spanos" ], "title": "Towards efficient data valuation based on the shapley value", "venue": "arXiv preprint arXiv:1902.10275,", "year": 2019 }, { "authors": [ "R.C. John", "N.R. Draper" ], "title": "D-optimality for regression designs: A review", "venue": null, "year": 1975 }, { "authors": [ "J. Kaplan", "S. McCandlish", "T. Henighan", "T. Brown", "B. Chess", "R. Child", "S. Gray", "A. Radford", "J. Wu", "D. Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "P. Koehn", "R. Knowles" ], "title": "Six challenges for neural machine translation", "venue": "In The First Workshop on Neural Machine Translation,", "year": 2017 }, { "authors": [ "Y. Mansour", "M. Mohri", "A. 
Rostamizadeh" ], "title": "Domain adaptation with multiple sources", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2009 }, { "authors": [ "O. Ohrimenko", "S. Tople", "S. Tschiatschek" ], "title": "Collaborative Machine Learning Markets with DataReplication-Robust Payments", "venue": "arXiv preprint arXiv:1911.09052,", "year": 2019 }, { "authors": [ "M. Post" ], "title": "A call for clarity in reporting BLEU scores", "venue": "In The Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "F. Pukelsheim" ], "title": "Optimal Design of Experiments", "venue": "Society for Industrial and Applied Mathematics, USA,", "year": 2006 }, { "authors": [ "J. Rosenfeld", "A. Rosenfeld", "Y. Belinkov" ], "title": "A constructive prediction of the generalization error across scales", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "A.W. van der Vaart" ], "title": "Asymptotic statistics", "venue": null, "year": 1998 }, { "authors": [ "R. Vershynin" ], "title": "Introduction to the non-asymptotic analysis of random matrices. arXiv", "venue": null, "year": 2010 }, { "authors": [ "A. Wang", "A. Singh", "J. Michael", "F. Hill", "O. Levy", "S.R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "A. Wang", "I.F. Tenney", "Y. Pruksachatkun", "P. Yeres", "J. Phang", "H. Liu", "P. Htut", "K. Yu", "J. Hula", "P. Xia", "R. Pappagari", "S. Jin", "R. McCoy", "R. Patel", "Y. Huang", "E. Grave", "N. Kim", "T. Fevry", "B. Chen", "N. Nangia", "A. Mohananey", "K. Kann", "S. Bordia", "N. Patry", "D. Benton", "E. Pavlick", "S. Bowman" ], "title": "jiant 1.3: A software toolkit for research on general-purpose text understanding models. http://jiant.info/, 2019b", "venue": null, "year": 2019 }, { "authors": [ "A. Williams", "N. Nangia", "S. Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Association for Computational Linguistics (ACL),", "year": 2018 }, { "authors": [ "J. Yoon", "S. Arik", "T. Pfister" ], "title": "Data valuation using reinforcement learning", "venue": "arXiv preprint arXiv:1909.11671,", "year": 2019 }, { "authors": [ "n‖θn" ], "title": "θ∞‖) Now since∇` is Donsker over each coordinate and θn → θ∞ in probability, we can obtain uniform concentration on the gradients", "venue": "(van der Vaart,", "year": 1998 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of large scale machine learning systems depends critically on the quantity and quality of data used during training, and we cannot expect these systems to succeed if there is not enough training data or if that data does not cover all the phenomena contained in the test distribution (BenDavid et al., 2010). Knowing this, the designer of a machine learning system might create multiple sources of data, with each one targeting a different feature or domain that the model ought to do well on (Crammer et al., 2007; Wang et al., 2019a). This data-driven design strategy provides powerful tools to improve and evaluate model behavior, but also poses an additional challenge: what is the right way to combine these various data sources? What is the optimal data collection policy for a given budget?\nOur goal is to answer these questions by quantifying the relationship between data sources and model performance – how well will our model do if we were to train it on n samples using a data mixture (q1 . . . qk) over ourK data sources. A precise model for predicting model performance will allow us to both identify the optimal data collection policy and quantify cost-performance tradeoffs.\nThe starting point of our work is the recent observation across speech, vision and text (Hestness et al., 2017; Kaplan et al., 2020; Rosenfeld et al., 2020) that the empirical performance of a model is remarkably predictable, and follows the log-linear formula\nlog(error) ≈ −α log(n) + C. (1)\nIn this work, we expand this observation to the multi-data-source setting and discover the surprising fact that the slope of the log-linear relationship (α) does not vary with data composition and that the data composition only affects the intercept (C).\nThe simple dependence of log-error on data size allows us to reduce the problem of estimating model error into a learning problem. Our approach is straightforward: we hypothesize that model error follows V (n, q) := exp(−α log(n)+log(C(q))) for a simple parametric functional formC(q), and fit this to observed pairs of (n, q, error) that we obtain by subsampling the dataset and re-training a model. We show that there is a natural and simple choice of C(q) as a rational function that we derive from optimal experimental design for linear regression, M-estimation, and nonparametric\nsmoothing. The simple and parametric dependence of V (n, q) on n allows us to use our resulting estimates to predict model performance under substantial extrapolation in data size.\nEmpirically, the resulting predictions are extremely accurate and hold under substantial extrapolation. On the Amazon review prediction dataset (Mansour et al., 2009), we can learn to predict model performance nearly perfectly (r2 = 0.96) from a small dataset of 1200 examples across 3 sources and extrapolate to predict the model error on datasets of up to 4000 examples. We show this high accuracy continues to hold on a real-world task oriented dialogue system (r2 = 0.93), a multi-domain machine translation system (r2 = 0.83), and boolean question answering with weak supervision (r2 = 0.86). 
In each of these cases, our proposed approach substantially outperforms the best baseline, with the baselines performing worse than random in both the machine translation and question answering tasks.

Related work Quantifying the effect of data composition on model performance is closely related to the classical ideas of optimal experimental design, as well as more recent machine learning methods such as active learning and data valuation.

Our work draws inspiration from classical V-optimal experimental design (John & Draper, 1975) as a way to understand how model performance will change with the data collection policy. However, our approach differs substantially beyond this. Instead of making strong linearity assumptions and identifying closed-form formulas for model performance, we treat identifying the impact of data sources on errors as itself a prediction problem, which allows us to quantify these effects for neural networks and non-separable objectives.

Active learning provides methods for incrementally selecting new points to rapidly reduce a loss (Hanneke, 2007). These approaches only consider the problem of optimal data collection and do not seek to predict model performance under all data collection strategies (including suboptimal ones), which is critical when making cost-performance tradeoffs across data sources. The model performance predictions produced in our work complement existing work on active learning by providing accurate forecasts of model performance under different data collection strategies.

Finally, data valuation methods such as the Shapley value attempt to estimate the impact of a data source on model performance (Ghorbani & Zou, 2019; Jia et al., 2019; Ghorbani et al., 2020; Yoon et al., 2019). These approaches are natural when pricing data sources as part of a market mechanism (Ohrimenko et al., 2019; Agarwal et al., 2019) due to the axiomatic properties of the Shapley value. Our approach differs in that we seek simply to estimate the performance of a model rather than to assign a single price to examples from a data source. This difference means that axioms such as additivity that are critical for the Shapley value are not relevant for our goal. We show that for the purpose of predicting errors, a rational function (rather than a linear cost) follows naturally from optimal experimental design. Our experiments also show that our rational function approximation provides better model performance predictions than a linear, additive model." }, { "heading": "2 PROBLEM STATEMENT AND EMPIRICAL OBSERVATIONS", "text": "Our goal is to predict the performance of a model as a function of the number of training samples n as well as the dataset composition q, where q_k represents the fraction of the training data drawn from data source k. We will now define this goal more formally in terms of the training data distribution, model fitting, and test loss.

The training data consists of an n-sample training set p_{n,q} that is created by sampling from the mixture p := ∑_{k∈[K]} q_k p_k, where the p_k are data-generating distributions for each of the K data sources and the q_k are mixture weights with q_k ≥ 0 and ∑_{k∈[K]} q_k = 1. Using this dataset, we learn a prediction model θ̂ that incurs loss ℓ(θ̂; x, y) for a training example (x, y).
The fitted model is the empirical loss minimizer, which we define as

θ̂(p_{n,q}) := argmin_{θ∈Θ} E_{p_{n,q}}[ℓ(θ; x, y)].

The performance of this classifier is evaluated on a test distribution which may differ from the training distribution by a covariate shift (i.e., p(y | x) = p_test(y | x)). We are interested in model performance as a function of the data size and composition (and not a fixed empirical distribution p_{n,q}), and thus our goal is to predict the model's expected excess loss over draws of both the training and test distributions,

L(n, q) := E[ℓ(θ̂(p_{n,q}); x, y)] − inf_θ E[ℓ(θ; x, y)].

Estimating L requires that we hypothesize a relationship between (n, q) and the expected model loss. Following earlier observations by Hestness et al. (2017), we expect a log-linear relationship between L(n, q) and log(n) for any fixed q, which implies a possible approximation as

log(L(n, q)) ≈ log(V(n, q)) := α(q) log(n) + C(q).    (2)

We now examine this hypothesis in a simple toy example.

Linear toy data: We start with the simplest nontrivial example of linear least-squares regression to study L(n, q). In this example, there are two data sources over x ∈ R². The first data source has substantial variability in the first coordinate x_0 but not x_1, and vice versa for the second data source. The overall generative process is

z ∼ Bern(q),  ε ∼ N(0, 1),  y = [0.5, 1]^⊤ x + ε,
x | z = 0 ∼ N(0, diag(1, 0.001)),  x | z = 1 ∼ N(0, diag(0.001, 1)).

Let L(n, q) be the excess squared loss of a linear least squares model trained with n samples from a mixture q and evaluated on a test distribution with q = 0.5. What will L(n, q) look like? Figure 1a shows a clear linear relationship between log dataset size (log(n)) and log(L(n, q)). The intercept of the linear relationship seems to vary with the data mixture q, but the slope seems constant.

Examining Figure 1a more closely, we find that the extremes of using either data source exclusively (blue / purple lines) perform worse than a mix, suggesting that log(L(n, q)) is unlikely to be linear in q. Intuitively, we can think of each data distribution as having a different strength (i.e., more variance in either x_0 or x_1) and combining the two results in a better data distribution than either alone. We can see this more clearly when we estimate the intercept for each of these lines (Figure 1b). The estimated intercepts show a U-shaped curve that rapidly increases as q → 0 or q → 1 and is generally flat from 0.2 to 0.8." }, { "heading": "3 METHOD AND THEORY", "text": "We have observed that in the case of simple linear regression, the log-error not only follows the relationship outlined in Equation 2, but also that the slope α is constant as we vary the data composition (and we will further validate this claim on more complex tasks and models in subsequent sections). This observation shows we may be able to further simplify the log-linear approximation as log(L(n, q)) ≈ log(V(n, q)) := −α log(n) + log(C(q)). Now note that this functional form decouples the data size n and the mixture term C(q). This is the key observation of our work: log(V(n, q)) has a very simple dependence on n, and the more complex term C(q) has no dependence on n. Therefore we can cast this as a learning problem, where we learn α and a parametric function C_λ(q) based on the model's error over a range of q and small n, and extrapolate this to large n using the log-linear dependence of log V on n.

Concretely, given a dataset with {n_1, . . . , n_K} samples from each source, we can generate a subsampled dataset with n̂_k ∼ Unif(0, n_k) samples from each source. This results in a training set with data size n̂ = ∑_k n̂_k and composition q̂_k = n̂_k / n̂. We fit a model to this subsampled data and compute its loss L(n̂, q̂). Given the triples (n̂, q̂, L(n̂, q̂)) we can now simply fit the hypothesized functional form,

min_{λ,α} E_{q̂,n̂} [ ( log(L(n̂, q̂)) − (−α log(n̂) + log(C_λ(q̂))) )² ].

The experimental data does not specify the functional form of C_λ(q) except that it should handle convex functions like those seen in Figure 1b. We will now study V(n, q) theoretically and argue that a natural choice is the rational function

C_λ(q) := ∑_{i=1}^M ( ∑_{k=1}^K λ_{ik} q_k )^{−1}.

A short sketch of this fitting procedure is given below.
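The following NumPy/SciPy sketch illustrates the fitting step (an illustration under our notation; the softplus reparameterization and the derivative-free optimizer are our choices here, not the exact training setup of Section 4):

import numpy as np
from scipy.optimize import minimize

def log_v(params, n, q, m):
    # log V(n,q) = -alpha * log(n) + log C(q), C(q) = sum_i 1 / (lambda_i . q)
    alpha, theta = params[0], params[1:].reshape(m, -1)
    lam = np.logaddexp(0.0, theta)                    # softplus keeps lambda > 0
    C = (1.0 / (np.atleast_2d(q) @ lam.T)).sum(axis=1)
    return -alpha * np.log(n) + np.log(C)

def fit_v(n_obs, q_obs, loss_obs, m):
    # (n_obs, q_obs, loss_obs): observed triples from subsampled training runs.
    y = np.log(np.asarray(loss_obs, dtype=float))
    obj = lambda p: np.mean((log_v(p, n_obs, q_obs, m) - y) ** 2)
    x0 = np.concatenate([[1.0], np.zeros(m * q_obs.shape[1])])
    res = minimize(obj, x0, method="Nelder-Mead", options={"maxiter": 50000})
    return res.x

# Extrapolation: the predicted loss at a larger data size n_new with mixture
# q_new is np.exp(log_v(params, n_new, q_new, m)).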
In the subsequent sections, we will study three settings, ordinary linear regression, M-estimation, and nonparametric regression, and show that our hypothesized log-linear approximation arises naturally in all three cases." }, { "heading": "3.1 LINEAR REGRESSION", "text": "We begin by characterizing L(n, q) in the linear regression case, where we can derive closed-form expressions for the expected loss as a function of the training data. Our setting is d-dimensional, n-sample linear regression, defined as y = x^⊤β + ε with i.i.d. ε ∼ N(0, 1). Our training data follows x ∼ p := ∑_{k∈[K]} q_k p_k, where each data source has full-rank second moments Σ_k := E_{x∼p_k}[x x^⊤]. Define the ordinary least squares estimator β̂ := (X^⊤X)^{−1} X^⊤ Y in terms of the features X ∈ R^{n×d} and Y ∈ R^n. The excess test loss of this estimator over draws x* ∼ p* with y* := x*^⊤β + ε is defined as

L(n, q) = E[‖x*^⊤(β − β̂)‖₂²].

The theory of V-optimal experimental design (Pukelsheim, 2006) allows us to characterize this excess loss.

Proposition 3.1. The excess expected loss for ordinary least squares trained on a mixture q with data size n and subgaussian x follows

log(L(n, q)) = − log(n) + log Tr( Σ* ( ∑_{k∈[K]} q_k Σ_k )^{−1} ) + O( √log(1/δ) / √n ),

with probability at least 1 − δ, where Σ* := E_{x∼p*}[x x^⊤] and Σ_k := E_{x∼p_k}[x x^⊤]; the second term above is C(q).

We defer all proofs to the appendix due to space constraints. Clearly C(q) is not linear even in this simple case, and the terms for q_k appear within an inverse. Naively, we might hypothesize that it behaves much like a linear rational function (i.e., (∑_i λ_i q_i)^{−1}), and this intuition turns out to be correct whenever Σ* and the Σ_k are approximately simultaneously diagonalizable.

Corollary 3.1. Let P be an orthogonal matrix which approximately simultaneously diagonalizes P^{−1} Σ* P = D*, P^{−1} Σ_k P = D_k + R_k for some diagonal matrices D. Then for full-rank Σ* and sufficiently small R_k,

Tr( Σ* ( ∑_{k∈[K]} q_k Σ_k )^{−1} ) = ∑_{i∈[d]} D*_{ii} / ( ∑_k q_k D_{k,ii} ) + o( ‖∑_k q_k R_k‖_F ).

The first-order term exactly matches the hypothesized C(q) as a rational function with d terms and validates this choice for linear regression. To interpret this corollary, the approximate diagonalizability condition states that the eigenvectors of Σ* and Σ_k coincide, and that D*_{ii} and D_{k,ii} are the corresponding eigenvalues. The ratio D*_{ii} / (∑_k q_k D_{k,ii}) measures the ratio of variance in the test distribution to that of the training distribution for the i-th eigenvector.

The key observation is that the variance (i.e., the information each data source contributes to a particular coordinate i) is linear in q, but the dependence of model error on training variance is inverse, and there are d different coordinates, making the overall dependence of errors on data composition nonlinear.
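As a quick numerical illustration of this closed form (our own sketch, separate from the experiments in Section 4), C(q) can be evaluated directly for the two-source toy example of Section 2:

import numpy as np

Sigma0 = np.diag([1.0, 0.001])   # source z = 0: variance concentrated on x0
Sigma1 = np.diag([0.001, 1.0])   # source z = 1: variance concentrated on x1
Sigma_test = 0.5 * (Sigma0 + Sigma1)   # the test distribution uses q = 0.5

def C(q):
    # C(q) = Tr(Sigma* ((1-q) Sigma0 + q Sigma1)^{-1}) from Proposition 3.1.
    Sigma_train = (1 - q) * Sigma0 + q * Sigma1
    return np.trace(Sigma_test @ np.linalg.inv(Sigma_train))

qs = np.linspace(0.05, 0.95, 19)
print(np.round(np.log([C(q) for q in qs]), 2))
# The intercepts log C(q) trace the U-shaped curve of Figure 1b: they blow up
# as q -> 0 or q -> 1 and are nearly flat in the middle of the range.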
There are clear qualitative differences between a linear and a rational function approximation to C(q), with the rational function being strongly convex with diminishing returns in q." }, { "heading": "3.2 GENERAL M ESTIMATORS", "text": "We might rightfully ask whether this kind of approximation continues to hold for nonlinear models and losses such as neural networks. The same analysis as above can be extended to the asymptotic behavior of a substantially more general class of models known as M-estimators, which are empirical loss minimizers of a differentiable loss.

For the regression case, we relied on a closed-form characterization of β̂. For M-estimators we will use asymptotic normality under the sampling distribution.

Theorem 3.1 (van der Vaart (1998)). Consider a twice differentiable loss ℓ whose gradients are bounded and Donsker. Let θ_n be an estimator which fulfills the approximate first-order optimality condition with minimizer θ_∞,

E_{p_n}[∇ℓ(y, x; θ_n)] = o(n^{−1/2}) and E_p[∇ℓ(y, x; θ_∞)] = 0.

If θ_n → θ_∞ in probability and both I_{θ_∞}^{−1} := E_p[Hℓ(y, x; θ_∞)]^{−1} and Σ_{θ_∞} := E_p[∇ℓ(y, x; θ_∞) ∇ℓ(y, x; θ_∞)^⊤] exist, then

√n (θ_n − θ_∞) → N( 0, I_{θ_∞}^{−1} Σ_{θ_∞} I_{θ_∞}^{−1} ).

Now that we have the asymptotic distribution of the M-estimator, we can quantify the (asymptotic) form of C(q) with respect to a test distribution p* simply by taking the Taylor expansion of the loss at θ_∞.

Corollary 3.2. Under the conditions of Theorem 3.1, let ℓ(y, x; θ) = − log p_θ(y | x) and suppose there exists some θ* = θ_∞ such that p_{θ*}(y | x) = p(y | x); then

log(L(n, q)) = − log(n) + log Tr( Σ* ( ∑_k q_k Σ_k )^{−1} ) + o(n^{−1}),

for Σ_k := E_{p_k}[Hℓ(y, x; θ*)] and Σ* := E_{p*}[Hℓ(y, x; θ*)].

This result relies on two additional assumptions: the loss is a log loss, and the model is well-specified. The first assumption is weak, as many models today use log-softmax type losses. The well-specified assumption is stronger but may be reasonable for nearly nonparametric functions such as neural networks. For a less simple but more general result, see Corollary A.1 in the appendix.

This has the same functional form as before: C(q) is the trace of a test-distribution-dependent matrix Σ* times the inverse of a mixture of data source matrices Σ_k. The difference now is that instead of covariances, we are looking at the Hessian of the loss with respect to the parameters at the unknown optimal model θ*. Applying the simultaneous diagonalization argument from earlier once again yields a rational function that is captured by C(q)." }, { "heading": "3.3 NONPARAMETRIC MODELS", "text": "Finally, we show that the same relationship holds for nonparametric models such as kernel smoothing or binning. Our goal will be to estimate some ground truth map y = f(x) + ε for i.i.d. ε ∼ N(0, 1) and f a differentiable L-Lipschitz function. The quality of an estimate will be measured by some twice-differentiable loss ℓ(y, x) with bounded first two derivatives.

Given n samples (x_1, y_1), . . . , (x_n, y_n) ∈ [0, 1]^d × R drawn i.i.d. from some density p = ∑_k q_k p_k, one natural estimator for this problem is the nonparametric binning estimator f̂, which we define in terms of the axis-aligned hypercubes B_δ(x, S) := {x′ ∈ S : ⌊x′/δ⌋ = ⌊x/δ⌋}. Let X_n := {x_1, . . . , x_n}; then we can define our estimator

f̂_δ(x) := ( 1 / |B_δ(x, X_n)| ) ∑_{x_i ∈ B_δ(x, X_n)} y_i.

Assuming we choose δ and n sufficiently large that each bin concentrates to its expected value, we have the following error estimate. Proposition 3.2. Let B_δ(x, p_k) = E_{x′∼p_k}[|B_δ(x, {x′})|] be the probability of drawing some x′ ∼ p_k in the same bin as x, and assume B_δ(x, p_k) is bounded away from zero.
Then

L(n, q) := E[ℓ(f̂_δ(x), x) − ℓ(f(x), x)] = E[ ℓ″(f(x), x) / ( ∑_k q_k B_δ(x, p_k) ) ] + O( √(log(γ^{−1}) + d log(δ)) / √(2n) ) + O( Lδ√d + L²δ²d ),

holds with probability at least 1 − γ, where the expectation is taken with respect to draws of y.

Once again, we see a rational function in q, with no further approximation needed. Each bin is a term in the rational function approximation with weight ℓ″(x)." }, { "heading": "4 EXPERIMENTS", "text": "We have seen that a rational function is a reasonable approximation to C(q) across three different settings. We will now show that this is the case in practice, and additionally that C(q) can be accurately estimated using a few models trained on small datasets. The resulting estimates of model performance are accurate for models with an order of magnitude more data.

Baselines and implementation. Our evaluations focus on our ability to predict the loss incurred by a model, L(n, q). To do so, we compare the rational function approximation procedure against several natural baselines for predicting the loss of a model. Each baseline corresponds to a different assumption about the functional form of log(V(n, q)) that we use to approximate log(L(n, q)).

Datasize: assume a functional form log(V(n, q)) = α log(n) + c, ignoring the data composition and dependence on q.

Linear: assume a functional form log(V(n, q)) = α log(n) + β^⊤ q + c. This is the natural approach if we treat log(V(n, q)) as linear in q and log-linear in n.

Ablation and Shapley: further constrain the linear baseline by setting β to either the log-Shapley value obtained as the marginal contribution of a data source (for the Shapley baseline) or the log-ratio of losses obtained after removing a data source (for the ablation baseline). We use this approach as we found it to dominate the usual assumption of treating V(n, q) as linear in the Shapley value.

As the baselines are all linear in log(n) and q, we solve for the optimal parameters of these models in closed form with least squares regression. We refer to our approach as Rational, and we fit it using the Adagrad (Duchi et al., 2010) optimizer with 20000 steps and a learning rate set via cross-validation over the interval [0.005, 0.5]. The number of terms in the rational function sum is set to one more than the number of data sources in all experiments." }, { "heading": "4.1 FOCUSED EVALUATION: AMAZON SENTIMENT", "text": "We now consider the Amazon sentiment prediction dataset from Mansour et al. (2009), where the goal is to predict Amazon ratings for books (from 0 to 5 stars) using bag-of-words features from the reviews. The training data comes from 3 domains that differ from the test data: kitchen, DVD, and electronics reviews. The model is a standard ridge-regularized regression model. Our experimental setup for estimating model loss is the following: we uniformly randomly sample the dataset size for each source (resulting in between 0 and 1200 examples per source), and train a model on this dataset. We measure the test error via average squared loss on the books domain.

We fit V(n, q) with 4 terms for C(q) by minimizing the squared loss with respect to log-error on models containing 0-1200 examples total. We then use V(n, q) to predict log-error on models trained on 1200-3600 examples from each domain. The results of this extrapolation task are shown in Table 2. Our V(n, q) estimates are nearly perfect (r2 = 0.96) and extrapolate from the low-data to the high-data regime without issue.
This correlation is substantially higher than that of the data size baseline (r2 = −0.65) or the linear model (r2 = 0.76), and even exceeds the training-set fit of the best additive model (r2 = 0.87).

The data size predictor has a negative r2 in the extrapolation setting, which may seem surprising. However, this can happen whenever a predictor fails to perform better than predicting the mean of the test set. It is nontrivial to predict the mean of the test set in an extrapolation setting, and in this case, data size estimates are generally uninformative as data from the kitchen domain is not useful for predicting book review scores." }, { "heading": "4.2 BROAD EVALUATION: SEMANTIC PARSING, TRANSLATION, AND QUESTION ANSWERING", "text": "We now perform a broader evaluation of the 3 methods (linear, rational, and datasize) on 3 tasks that violate our assumptions about model performance prediction. We excluded the two ablation-based methods as they are special cases of the linear model and generally performed worse.

Task-oriented dialogue. We perform this analysis on a real-world task-oriented dialogue system based upon the SMCalFlow dataset and model (Andreas et al., 2020). The task differs from the Amazon setting in two ways: the model is a nonlinear neural model for which there is no closed-form optimal experimental design, and the task is semantic parsing, which is a more complex structured prediction problem. There are 105,727 total dialogues across 4 data sources consisting of wizard-of-oz style crowdsourced dialogues, paraphrases of existing dialogues, on-policy dialogues between the system and crowdworkers, and hand-crafted dialogues by expert data scientists. We sample the number of dialogues for each source from a uniform distribution to determine q, and then further subsample each data source by [0.1, 0.3, 0.7, 1.0] to vary n. Test errors are measured by whether the execution of the model matches human references.

We fit V(n, q) with 5 terms for C(q) on 10 models containing fewer than 16,000 examples, and test on 19 models containing between 16,000 and 100,000 examples. The results in Table 2 show our approach is accurate (r2 = 0.93) and outperforms baselines including data size (r2 = 0.69) and the additive model (r2 = 0.91). While the bootstrap interval for the r2 of the rational function approximation is sufficiently wide that it contains the mean r2 estimate of the linear model, a more powerful paired difference test between the linear and rational approximations shows that this gap is statistically significant at the 5% level for the bootstrap distribution.

Machine translation. Thus far, we have evaluated on separable losses such as mean squared error or model accuracy. We now show that our approach to predicting model performance continues to work for non-separable losses such as BLEU for machine translation. Our task is the standard multi-domain machine translation dataset from Koehn & Knowles (2017). We use the preprocessed data, model, and hyperparameters from Hu et al. (2019) for a baseline sequence-to-sequence machine translation model. The model is trained on 4 data sources: Acquis (legal text), EMEA (medical text), IT (IT assistance), and Koran (translations of the Quran).
Evaluation is performed on the Acquis test set using sacrebleu to compute BLEU (Post, 2018).

To estimate the performance of models under varying data composition, we subsample up to 300,000 sentences from each data source, fit the estimators on 19 datasets of size less than 600,000 total sentences, and evaluate on 11 datasets of size 600,000 to 1,200,000. Since BLEU is a similarity measure and is penalized by reference ambiguity, we consider 50 − BLEU to be the excess error. The rational function approximation is the only procedure among the methods compared to achieve a positive r2 (0.83). The difference in prediction accuracies is apparent when plotting predicted and observed log-loss (Figure 2). The linear model even has a low training set r2, suggesting that the relationship between data composition and performance is fundamentally nonlinear.

Multitask question answering. Finally, we consider a multitask learning problem where some of the data sources are auxiliary tasks that may not directly be useful for the test-time task. This breaks the covariate shift assumption that has been implicit throughout this paper. The target task is the BoolQ question answering dataset, and we train this model using a combination of 4 data sources: the MNLI entailment task (Williams et al., 2018; 50,000 examples subsampled), the STS sentence similarity judgment task (Cer et al., 2017; 5,749 examples), the MRPC paraphrasing task (Dolan et al., 2004; 3,668 examples), and the BoolQ training set (Clark et al., 2019; 9,427 examples). We use the GLUE data with the jiant package to train a baseline BERT-based model for this task (Wang et al., 2019b).

The challenge with this task is that only the BoolQ training set provides direct supervision for the test-time task; the other data sources provide weak supervision that may or may not be helpful for the downstream problem. The model performance estimates are fitted on 6 datasets with up to 25,000 total examples and evaluated on 17 datasets with more than 25,000 examples. As expected, the data-size-based performance estimates are catastrophically bad, resulting in negative correlations. The linear estimates do not extrapolate well to the test set. The rational function approximation is the only one of the three to provide a positive r2 on this task." }, { "heading": "5 DISCUSSION", "text": "In this work, we have proposed a new approach to predicting the performance of a model as a function of its training data composition, which consists of measuring model accuracies for small n and a range of q and fitting a parametric model log V(n, q) := −α log(n) + log( ∑_{i=1}^m ( ∑_{k=1}^K λ_{ik} q_k )^{−1} ). We show that this parametric model is a natural approximation to model performance for a range of models, and that it accurately predicts the empirical performance of models in an extrapolation setting. Our work is a first step in going beyond closed-form estimates of model performance or additivity assumptions. It is an open question whether the same approach can scale to large numbers of data sources, and we hope to explore this in future work." }, { "heading": "A APPENDIX", "text": "Proposition 3.1.
The excess expected loss for ordinary least squares trained on a mixture q with data size n and subgaussian x follows

log(L(n, q)) = − log(n) + log Tr( Σ* ( ∑_{k∈[K]} q_k Σ_k )^{−1} ) + O( √log(1/δ) / √n ),

with probability at least 1 − δ, where Σ* := E_{x∼p*}[x x^⊤] and Σ_k := E_{x∼p_k}[x x^⊤]; the second term above is C(q).

Proof We begin by deriving the excess loss for a fixed set of test examples X* ∈ R^{m×d} and training examples X ∈ R^{n×d}. The excess loss can be written in a simple form due to the strict exogeneity of least squares regression. This is a classic instance of V-optimal design, which we reproduce for clarity. Let e := β − β̂; taking the expectation over the label noise,

E‖X* e‖₂² / m = E Tr(e^⊤ X*^⊤ X* e) / m = Tr( X*^⊤ X* (X^⊤X)^{−1} ) / m.

The challenge now is to bound the expected loss L(n, q) = E[Tr( X*^⊤ X* (X^⊤X)^{−1} )] / m. The expectation with respect to X* is straightforward, but the expectation with respect to X is challenging, as E[X^⊤X]^{−1} ≠ E[(X^⊤X)^{−1}]. Since x is subgaussian, we can use matrix concentration to show that this term concentrates at an appropriate rate. Let Σ := E[X^⊤X / n]; then by Vershynin (2010, Corollary 5.50), for sufficiently large n,

‖X^⊤X n^{−1} − Σ‖_op ≤ C √log(1/δ) / √n

with probability 1 − δ and a constant C depending on the subgaussianity parameter of X. Now we can expand the empirical covariance in terms of its expectation and a residual Δ = X^⊤X / n − Σ using the identity (Σ + Δ)^{−1} = Σ^{−1} + ∑_{t=1}^∞ Σ^{−1} (−ΔΣ^{−1})^t as

L(n, q) = Tr( Σ* Σ^{−1} n^{−1} ) + n^{−1} E[ Tr( ∑_{t=1}^∞ Σ^{−1} (−ΔΣ^{−1})^t ) ]

whenever the series is convergent (i.e., ‖ΔΣ^{−1}‖_op < 1).

For sufficiently large n, this series converges (as σ_min(Σ) is a constant and ‖Δ‖_op = O(n^{−1/2})) and the first term in the series is the dominant one. Using the trace inequality Tr(A^{−1} B A^{−1}) ≤ d² ‖A^{−1}‖_op² ‖B‖_op,

L(n, q) = Tr( Σ* Σ^{−1} n^{−1} ) + O( √log(1/δ) / n^{3/2} ).

Finally, we use the fact that X is drawn from a mixture with component moments Σ_k := E_{x∼p_k}[x x^⊤] and take the Taylor expansion of log(L(n, q)) at L(n, q) = Tr( Σ* Σ^{−1} n^{−1} ) to obtain

log(L(n, q)) = − log(n) + log Tr( Σ* ( ∑_k q_k Σ_k )^{−1} ) + O( √log(1/δ) / √n )

with probability 1 − δ.

Corollary 3.1. Let P be an orthogonal matrix which approximately simultaneously diagonalizes P^{−1} Σ* P = D*, P^{−1} Σ_k P = D_k + R_k for some diagonal matrices D. Then for full-rank Σ* and sufficiently small R_k,

Tr( Σ* ( ∑_{k∈[K]} q_k Σ_k )^{−1} ) = ∑_{i∈[d]} D*_{ii} / ( ∑_k q_k D_{k,ii} ) + o( ‖∑_k q_k R_k‖_F ).

Proof Since Σ* is full rank, we can apply the local expansion (A + B)^{−1} = A^{−1} + ∑_{i=1}^∞ A^{−1} (−BA^{−1})^i along with the trace bound |Tr(A^{−1} B A^{−1})| ≤ ‖A^{−1}‖_F² ‖B‖_F:

Tr( Σ* ( ∑_k q_k Σ_k )^{−1} ) = Tr( D* ( ∑_k q_k (D_k + R_k) )^{−1} ) = ∑_{i∈[d]} D*_{ii} / ( ∑_k q_k D_{k,ii} ) + o( ‖∑_k q_k R_k‖_F ),

where the last equality uses the local expansion and the trace bound for sufficiently small B.

Theorem 3.1 (van der Vaart (1998)). Consider a twice differentiable loss ℓ whose gradients are bounded and Donsker.
Let $\theta_n$ be an estimator which fulfills the approximate first-order optimality condition with minimizer $\theta_\infty$:\n$$\mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_n)] = o(n^{-1/2}) \quad \text{and} \quad \mathbb{E}_{p}[\nabla \ell(y, x; \theta_\infty)] = 0.$$\nIf $\theta_n \xrightarrow{p} \theta_\infty$ and both $I_{\theta_\infty}^{-1} := \mathbb{E}_p[H_\ell(y, x; \theta_\infty)]^{-1}$ and $\Sigma_{\theta_\infty} := \mathbb{E}_p[\nabla \ell(y, x; \theta_\infty) \nabla \ell(y, x; \theta_\infty)^\top]$ exist, then\n$$\sqrt{n}(\theta_n - \theta_\infty) \to N\big(0,\ I_{\theta_\infty}^{-1} \Sigma_{\theta_\infty} I_{\theta_\infty}^{-1}\big).$$\nProof. First, we take the first-order approximation to the population minimizer:\n$$\mathbb{E}_p[\nabla \ell(y, x; \theta_n)] = \mathbb{E}_p[\nabla \ell(y, x; \theta_\infty)] + \mathbb{E}_p[H_\ell(y, x; \theta_\infty)]^\top (\theta_n - \theta_\infty) + o(\|\theta_n - \theta_\infty\|).$$\nAssuming the existence of $I_{\theta_\infty}^{-1}$ and using the approximate first-order optimality conditions for both $\theta_\infty$ and $\theta_n$, we can solve for the parameter difference as\n$$\sqrt{n}(\theta_n - \theta_\infty) = \sqrt{n}\, I_{\theta_\infty}^{-1}\big(\mathbb{E}_p[\nabla \ell(y, x; \theta_n)] - \mathbb{E}_p[\nabla \ell(y, x; \theta_\infty)]\big) + o(\sqrt{n}\,\|\theta_n - \theta_\infty\|) = \sqrt{n}\, I_{\theta_\infty}^{-1}\big(\mathbb{E}_p[\nabla \ell(y, x; \theta_n)] - \mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_n)]\big) + o(1 + \sqrt{n}\,\|\theta_n - \theta_\infty\|).$$\nNow, since $\nabla \ell$ is Donsker over each coordinate and $\theta_n \to \theta_\infty$ in probability, we can obtain uniform concentration on the gradients (van der Vaart, 1998):\n$$\sqrt{n}\big(\mathbb{E}_p[\nabla \ell(y, x; \theta_n)] - \mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_n)] - \mathbb{E}_p[\nabla \ell(y, x; \theta_\infty)] + \mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_\infty)]\big) = o(1 + \sqrt{n}\,\|\theta_n - \theta_\infty\|).$$\nSubstituting this into the earlier equality allows us to replace $\theta_n$ with $\theta_\infty$:\n$$\sqrt{n}(\theta_n - \theta_\infty) = \sqrt{n}\, I_{\theta_\infty}^{-1}\big(\mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_\infty)] - \mathbb{E}_p[\nabla \ell(y, x; \theta_\infty)]\big) + o(1 + \sqrt{n}\,\|\theta_n - \theta_\infty\|).$$\nSince $\nabla \ell$ is bounded and thus has finite third moments, $\mathbb{E}_{p_n}[\nabla \ell(y, x; \theta_\infty)]$ obeys the central limit theorem with distribution $N(\mathbb{E}_p[\nabla \ell(y, x; \theta_\infty)], \Sigma_{\theta_\infty})$. Finally, the lower-order terms on the right asymptotically vanish since $\theta_n \to \theta_\infty$, and we obtain the stated result.\nCorollary A.1. Under the conditions of Theorem 3.1 and either $\mathbb{E}_{p^*}[\nabla \ell(y, x; \theta_\infty)] = 0$ or $\mathbb{E}[\theta_n] = \theta_\infty + o(n^{-1})$,\n$$L(n, q) := \mathbb{E}[\ell(y, x; \theta_n)] - \mathbb{E}[\ell(y, x; \theta_\infty)] = n^{-1} \operatorname{Tr}\big(\mathbb{E}_{p^*}[H_\ell(y, x; \theta_\infty)]\, I_{\theta_\infty}^{-1} \Sigma_{\theta_\infty} I_{\theta_\infty}^{-1}\big) + o(n^{-1}).$$\nProof. Taking a second-order Taylor expansion,\n$$\mathbb{E}[\mathbb{E}_{p^*}[\ell(y, x; \theta_n)]] = \mathbb{E}_{p^*}[\ell(y, x; \theta_\infty)] + \mathbb{E}\big[\mathbb{E}_{p^*}[\nabla \ell(y, x; \theta_\infty)](\theta_\infty - \theta_n)^\top\big] + \mathbb{E}\big[\mathbb{E}_{p^*}[(\theta_\infty - \theta_n) H_\ell(y, x; \theta_\infty)(\theta_\infty - \theta_n)^\top]\big] + o(n^{-1}).$$\nThe first-order term is at most $o(n^{-1})$ by the additional assumption: either our asymptotic parameter estimate is also a stationary point for the test distribution and $\mathbb{E}_{p^*}[\nabla \ell(y, x; \theta_\infty)] = 0$, or our estimator is unbiased and $\mathbb{E}[\theta_\infty - \theta_n] = o(n^{-1})$. The second-order term can be simplified via the asymptotic normality of $\theta_n$ as\n$$\mathbb{E}\big[\mathbb{E}_{p^*}[(\theta_\infty - \theta_n) H_\ell(y, x; \theta_\infty)(\theta_\infty - \theta_n)^\top]\big] = n^{-1} \operatorname{Tr}\big(\mathbb{E}_{p^*}[H_\ell(y, x; \theta_\infty)]\, I_{\theta_\infty}^{-1} \Sigma_{\theta_\infty} I_{\theta_\infty}^{-1}\big) + o(n^{-1}).$$\nCorollary 3.2. Under the conditions of Theorem 3.1, let $\ell(y, x; \theta) = -\log p_\theta(y \mid x)$ and there exists some $\theta^* = \theta_\infty$ such that $p_{\theta^*}(y \mid x) = p(y \mid x)$; then\n$$\log(L(n, q)) = -\log(n) + \log \operatorname{Tr}\Big(\Sigma^* \Big(\sum_k q_k \Sigma_k\Big)^{-1}\Big) + o(n^{-1})$$\nfor $\Sigma_k := \mathbb{E}_{p_k}[H_\ell(y, x; \theta^*)]$ and $\Sigma^* := \mathbb{E}_{p^*}[H_\ell(y, x; \theta^*)]$.\nProof. The statement follows almost definitionally. By the conditions of the corollary statement, the model is a well-specified maximum likelihood estimator, and the Fisher information and Hessian coincide: $\Sigma_{\theta_\infty} = I_{\theta_\infty}$. Simplifying the expression in Corollary A.1 and expanding $\Sigma_{\theta_\infty}$ into its $k$ components gives the desired result.\nProposition 3.2. Let $B_\delta(x, p_k) = \mathbb{E}_{x' \sim p_k}[|B_\delta(x, \{x'\})|]$ be the probability of drawing some $x' \sim p_k$ in the same bin as $x$, and assume $B_\delta(x, p_k)$ is bounded away from zero.
Then\n$$L(n, q) := \mathbb{E}[\ell(\hat{f}_\delta(x), x) - \ell(f(x), x)] = \mathbb{E}\bigg[\frac{\ell''(f(x), x)}{\sum_k q_k B_\delta(x, p_k)}\bigg] + O\bigg(\frac{\sqrt{\log(\gamma^{-1}) + d \log(\delta)}}{\sqrt{2n}}\bigg) + O(L\delta\sqrt{d} + L^2\delta^2 d)$$\nholds with probability at least $1 - \gamma$, where the expectation is taken with respect to draws of $y$.\nProof. Note that by definition of $y$, whenever $|B_\delta(x, X_n)| > 0$, $\hat{f}_\delta(x)$ has mean close to $f(x)$, with the deviation controlled by the Lipschitz constant of $f$:\n$$\mathbb{E}_{y|x}[\hat{f}_\delta(x)] = f(x) + O(L\delta\sqrt{d}),$$\nand variance (considering $|B_\delta(x, X_n)|$ fixed)\n$$\operatorname{Var}_{y|x}[\hat{f}_\delta(x)] = \frac{1 + \operatorname{Var}[f(x) \mid x \in B_\delta(x, X_n)]}{|B_\delta(x, X_n)|} = \frac{1 + O(L\delta\sqrt{d})}{|B_\delta(x, X_n)|}.$$\nTaking the second-order Taylor approximation to $\ell$ at $f(x)$ we get\n$$\mathbb{E}[\ell(\hat{f}_\delta(x), x)] - \mathbb{E}[\ell(f(x), x)] = \mathbb{E}[\ell'(f(x), x)(\hat{f}_\delta(x) - f(x))] + \mathbb{E}[\ell''(f(x), x)(f(x) - \hat{f}_\delta(x))^2/2] + o(\mathbb{E}[(f(x) - \hat{f}_\delta(x))^2])$$\n$$= O(L\delta\sqrt{d} + L^2\delta^2 d) + \mathbb{E}[\ell''(f(x), x)\operatorname{Var}_{y|x}[\hat{f}_\delta(x)]] + o(\mathbb{E}[(f(x) - \hat{f}_\delta(x))^2]) = \mathbb{E}\big[\ell''(f(x), x)\,|B_\delta(x, X_n)|^{-1}/2\big] + O(L\delta\sqrt{d} + L^2\delta^2 d).$$\nThe third line follows from the expectation bound, as well as applying the bias-variance decomposition to the second-order term. The last line follows from the variance identity above, and applying the same bias-variance decomposition on the $o(\mathbb{E}[(f(x) - \hat{f}_\delta(x))^2])$ term. As with the linear regression case, we cannot simply take expectations, as there is a small but nonzero probability that $|B_\delta(x, X_n)|$ is zero. We will show this happens with low probability. By Hoeffding's inequality, each of the $\delta^{-d}$ bins will concentrate towards its expected value:\n$$P\bigg(\Big|\,|B_\delta(x, X_n)|\,n^{-1} - \mathbb{E}_{x' \sim p}[B_\delta(x, \{x'\})]\,\Big| > \frac{\sqrt{\log(2\gamma^{-1}\delta^{-d})}}{\sqrt{2n}}\bigg) \le 1 - \delta^d \gamma.$$\nApplying the union bound, we have concentration at the same rate over all $\delta^{-d}$ bins with probability $1 - \gamma$. Now note that we can write\n$$\mathbb{E}\big[\ell''(f(x), x)\,|B_\delta(x, X_n)|^{-1}\big] = \frac{1}{n}\,\mathbb{E}\bigg[\ell''(f(x), x)\,\frac{1}{\mathbb{E}[B_\delta(x, \{x'\})] + |B_\delta(x, X_n)|\,n^{-1} - \mathbb{E}[B_\delta(x, \{x'\})]}\bigg].$$\nWe can take the Taylor approximation of the ratio at the expectation, which gives us\n$$\mathbb{E}\big[\ell''(f(x), x)\,|B_\delta(x, X_n)|^{-1}\big] = \frac{1}{n}\,\mathbb{E}\Big[\ell''(f(x), x)\Big(\frac{1}{\mathbb{E}[B_\delta(x, \{x'\})]} + O\big(|B_\delta(x, X_n)|\,n^{-1} - \mathbb{E}[B_\delta(x, \{x'\})]\big)\Big)\Big].$$\nNow with probability at least $1 - \gamma$,\n$$\mathbb{E}\big[\ell''(f(x), x)\,|B_\delta(x, X_n)|^{-1}\big] = \frac{1}{n}\,\mathbb{E}\Big[\ell''(f(x), x)\,\frac{1}{\mathbb{E}[B_\delta(x, \{x'\})]}\Big] + O\bigg(\frac{\sqrt{\log(\gamma^{-1}) + d \log(\delta)}}{\sqrt{2n}}\bigg).$$\nPlugging this into our earlier expression, and expanding the expectation in terms of each of the data sources, completes the proof." } ]
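The parametric fit described in the discussion above is straightforward to reproduce. Below is a minimal sketch, assuming SciPy is available, of fitting the rational-function approximation $V(n, q) = -\alpha \log(n) + \sum_i (\sum_k \lambda_{ik} q_k)^{-1}$ to observed (size, mixture, measured-loss) triples by nonlinear least squares. The function names, the squaring trick used to keep weights positive, and the choice of optimizer are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_rational_model(n, q, loss, m=2, seed=0):
    """Fit V(n, q) = -alpha*log(n) + sum_i 1/(sum_k lam_ik * q_k).

    n: (N,) training set sizes; q: (N, K) mixture proportions;
    loss: (N,) measured excess losses (or log-losses, depending on the task).
    """
    K = q.shape[1]
    rng = np.random.default_rng(seed)

    def predict(params):
        alpha = params[0]
        lam = params[1:].reshape(m, K) ** 2   # squared to keep weights positive
        denom = q @ lam.T                     # (N, m) mixture-weighted rates
        return -alpha * np.log(n) + (1.0 / np.maximum(denom, 1e-12)).sum(axis=1)

    sol = least_squares(lambda p: predict(p) - loss,
                        x0=np.concatenate([[1.0], rng.uniform(0.5, 1.5, m * K)]))
    return sol.x, predict

# Usage sketch: fit on small-n compositions, then call predict(params) on
# larger-n compositions to extrapolate performance, as in the experiments.
```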
2,020
null
SP:a52f70b4b90309b1553f59e8730e6378ad57b684
[ "This paper presents a graph-network-based architecture for learning to perform mesh-based simulations, which can be run more efficiently than the full, \"ground-truth\" simulations. The experiments demonstrate that the proposed method is able to learn to simulate a wide range of different physical scenarios. Moreover, the presented results also demonstrate an ability to generalize to configurations different from the one seen in training." ]
Mesh-based simulations are central to modeling complex physical systems in many disciplines across science and engineering. Mesh representations support powerful numerical integration methods and their resolution can be adapted to strike favorable trade-offs between accuracy and efficiency. However, high-dimensional scientific simulations are very expensive to run, and solvers and parameters must often be tuned individually to each system studied. Here we introduce MESHGRAPHNETS, a framework for learning mesh-based simulations using graph neural networks. Our model can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation. Our results show it can accurately predict the dynamics of a wide range of physical systems, including aerodynamics, structural mechanics, and cloth. The model's adaptivity supports learning resolution-independent dynamics and can scale to more complex state spaces at test time. Our method is also highly efficient, running 1-2 orders of magnitude faster than the simulation on which it is trained. Our approach broadens the range of problems on which neural network simulators can operate and promises to improve the efficiency of complex, scientific modeling tasks.
[ { "affiliations": [], "name": "GRAPH NETWORKS" }, { "affiliations": [], "name": "Tobias Pfaff" }, { "affiliations": [], "name": "Meire Fortunato" }, { "affiliations": [], "name": "Alvaro Sanchez-Gonzalez" }, { "affiliations": [], "name": "Peter W. Battaglia" } ]
[ { "authors": [ "MS Albergo", "G Kanwar", "PE Shanahan" ], "title": "Flow-based generative models for markov chain monte carlo in lattice field theory", "venue": "Physical Review D,", "year": 2019 }, { "authors": [ "Ferran Alet", "Adarsh Keshav Jeewajee", "Maria Bauza Villalonga", "Alberto Rodriguez", "Tomas Lozano-Perez", "Leslie Kaelbling" ], "title": "Graph element networks: adaptive, structured computation and memory", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "V Bapst", "T Keck", "A Grabska-Barwinska", "C Donner", "ED Cubuk", "SS Schoenholz", "A Obika", "AWR Nelson", "T Back", "D Hassabis" ], "title": "Unveiling the predictive power of static structure in glassy systems", "venue": "(vol 16,", "year": 2020 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2021 }, { "authors": [ "Filipe de Avila Belbute-Peres", "Thomas D. Economon", "J. Zico Kolter" ], "title": "Combining differentiable PDE solvers and graph neural networks for fluid flow prediction", "venue": "In Proceedings of the 37th International Conference on Machine Learning", "year": 2020 }, { "authors": [ "Saakaar Bhatnagar", "Yaser Afshar", "Shaowu Pan", "Karthik Duraisamy", "Shailendra Kaushik" ], "title": "Prediction of aerodynamic flow fields using convolutional neural networks", "venue": "Computational Mechanics,", "year": 2019 }, { "authors": [ "Frank J Bossen", "Paul S Heckbert" ], "title": "A pliant method for anisotropic mesh generation", "venue": "In 5th Intl. Meshing Roundtable,", "year": 1996 }, { "authors": [ "Ricardo Branco", "FV Antunes", "JD Costa" ], "title": "A review on 3d-fe adaptive remeshing techniques for crack growth modelling", "venue": "Engineering Fracture Mechanics,", "year": 2015 }, { "authors": [ "Michael M. 
Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": null, "year": 2016 }, { "authors": [ "German Capuano", "Julian J Rimoli" ], "title": "Smart finite elements: A novel machine learning application", "venue": "Computer Methods in Applied Mechanics and Engineering,", "year": 2019 }, { "authors": [ "Emmanuel de Bezenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep learning for physical processes: Incorporating prior scientific knowledge", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Thomas D Economon", "Francisco Palacios", "Sean R Copeland", "Trent W Lukaczyk", "Juan J Alonso" ], "title": "Su2: An open-source suite for multiphysics simulation and design", "venue": "Aiaa Journal,", "year": 2016 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G Kim", "Bryan C Russell", "Mathieu Aubry" ], "title": "A papier-mâché approach to learning 3d surface generation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Xiaoxiao Guo", "Wei Li", "Francesco Iorio" ], "title": "Convolutional neural networks for steady flow approximation", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Rana Hanocka", "Amir Hertz", "Noa Fish", "Raja Giryes", "Shachar Fleishman", "Daniel Cohen-Or" ], "title": "Meshcnn: a network with an edge", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Daniel Holden", "Bang Chi Duong", "Sayantan Datta", "Derek Nowrouzezahrai" ], "title": "Subspace neural physics: Fast data-driven interactive simulation", "venue": "In Proceedings of the 18th annual ACM SIGGRAPH/Eurographics Symposium on Computer Animation,", "year": 2019 }, { "authors": [ "Sara Hooker" ], "title": "The hardware lottery", "venue": "arXiv preprint arXiv:2009.06489,", "year": 2020 }, { "authors": [ "Gurtej Kanwar", "Michael S. Albergo", "Denis Boyda", "Kyle Cranmer", "Daniel C. Hackett", "Sébastien Racanière", "Danilo Jimenez Rezende", "Phiala E. Shanahan" ], "title": "Equivariant flow-based sampling for lattice gauge theory", "venue": "Phys. Rev. Lett.,", "year": 2020 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "L’ubor Ladickỳ", "SoHyeon Jeong", "Barbara Solenthaler", "Marc Pollefeys", "Markus Gross" ], "title": "Data-driven fluid simulations using regression forests", "venue": "ACM Transactions on Graphics (TOG),", "year": 2015 }, { "authors": [ "Sangseung Lee", "Donghyun You" ], "title": "Data-driven prediction of unsteady flow over a circular cylinder using deep learning", "venue": "Journal of Fluid Mechanics,", "year": 2019 }, { "authors": [ "Yunzhu Li", "Jiajun Wu", "Russ Tedrake", "Joshua B. 
Tenenbaum", "Antonio Torralba" ], "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ran Luo", "Tianjia Shao", "Huamin Wang", "Weiwei Xu", "Xiang Chen", "Kun Zhou", "Yin Yang" ], "title": "Nnwarp: Neural network-based nonlinear deformation", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2018 }, { "authors": [ "Steffen Marburg", "Bodo Nolte" ], "title": "Computational acoustics of noise propagation in fluids: finite and boundary element methods, volume 578", "venue": null, "year": 2008 }, { "authors": [ "Rahul Narain", "Armin Samii", "James F. O’Brien" ], "title": "Adaptive anisotropic remeshing for cloth simulation", "venue": "ACM Trans. Graph.,", "year": 2012 }, { "authors": [ "Rahul Narain", "Tobias Pfaff", "James F. O’Brien" ], "title": "Folding and crumpling adaptive sheets", "venue": "ACM Trans. Graph.,", "year": 2013 }, { "authors": [ "Charlie Nash", "Yaroslav Ganin", "SM Eslami", "Peter W Battaglia" ], "title": "Polygen: An autoregressive generative model of 3d meshes", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "SK Panthi", "N Ramakrishnan", "KK Pathak", "JS Chouhan" ], "title": "An analysis of springback in sheet metal bending using finite element method (fem)", "venue": "Journal of Materials Processing Technology,", "year": 2007 }, { "authors": [ "D Pardo", "L Demkowicz", "C Torres-Verdin", "M Paszynski" ], "title": "A self-adaptive goal-oriented hpfinite element method with electromagnetic applications. part ii: Electrodynamics", "venue": "Computer methods in applied mechanics and engineering,", "year": 2007 }, { "authors": [ "Tobias Pfaff", "Rahul Narain", "Juan Miguel De Joya", "James F O’Brien" ], "title": "Adaptive tearing and cracking of thin sheets", "venue": "ACM Transactions on Graphics (TOG),", "year": 2014 }, { "authors": [ "Ravi Ramamurti", "William Sandberg" ], "title": "Simulation of flow about flapping airfoils using finite element incompressible flow solver", "venue": "AIAA journal,", "year": 2001 }, { "authors": [ "Zhengyong Ren", "Jingtian Tang" ], "title": "3d direct current resistivity modeling with unstructured mesh by adaptive finite-element method", "venue": "Geophysics, 75(1):H7–H17,", "year": 2010 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin Riedmiller", "Raia Hadsell", "Peter Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W. 
Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "In Proceedings of the 37th International Conference on Machine Learning", "year": 2020 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Christoph Schwarzbach", "Ralph-Uwe Börner", "Klaus Spitzer" ], "title": "Three-dimensional adaptive higher order finite element simulation for geo-electromagnetics—a marine csem example", "venue": "Geophysical Journal International,", "year": 2011 }, { "authors": [ "Nils Thuerey", "Konstantin Weißenow", "Lukas Prantl", "Xiangyu Hu" ], "title": "Deep learning methods for reynolds-averaged navier–stokes simulations of airfoil flows", "venue": "AIAA Journal,", "year": 2020 }, { "authors": [ "Kiwon Um", "Xiangyu Hu", "Nils Thuerey" ], "title": "Liquid splash modeling with neural networks", "venue": "In Computer Graphics Forum,", "year": 2018 }, { "authors": [ "Benjamin Ummenhofer", "Lukas Prantl", "Nils Thürey", "Vladlen Koltun" ], "title": "Lagrangian fluid simulation with continuous convolutions", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rui Wang", "Karthik Kashinath", "Mustafa Mustafa", "Adrian Albert", "Rose Yu" ], "title": "Towards physics-informed deep learning for turbulent flow prediction", "venue": "In ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2020 }, { "authors": [ "Emo Welzl" ], "title": "Smallest enclosing disks (balls and ellipsoids)", "venue": "In New results and new trends in computer science,", "year": 1991 }, { "authors": [ "Martin Wicke", "Daniel Ritchie", "Bryan M. Klingner", "Sebastian Burke", "Jonathan R. Shewchuk", "James F. O’Brien" ], "title": "Dynamic local remeshing for elastoplastic simulation", "venue": "ACM Trans. Graph.,", "year": 2010 }, { "authors": [ "Steffen Wiewel", "Moritz Becher", "Nils Thuerey" ], "title": "Latent space physics: Towards learning the temporal evolution of fluid flow", "venue": "In Computer Graphics Forum,", "year": 2019 }, { "authors": [ "You Xie", "Erik Franz", "Mengyu Chu", "Nils" ], "title": "Thuerey. Tempogan: A temporally coherent, volumetric gan for super-resolution fluid flow", "venue": "ACM Trans. Graph.,", "year": 2018 }, { "authors": [ "Abdelaziz Yazid", "Nabbou Abdelkader", "Hamouine Abdelmadjid" ], "title": "A state-of-the-art review of the x-fem for computational fracture mechanics", "venue": "Applied Mathematical Modelling,", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "State-of-the art modeling of complex physical systems, such as deforming surfaces and volumes, often employs mesh representations to solve the underlying partial differential equations (PDEs). Mesh-based finite element simulations underpin popular methods in structural mechanics [31, 48], aerodynamics [13, 34], electromagnetics [32], geophysics [35, 39], and acoustics [26]. Meshes also support adaptive representations, which enables optimal use of the resource budget by allocating greater resolution to regions of the simulation domain where strong gradients are expected or more accuracy is required, such as the tip of an airfoil in an aerodynamics simulation. Adaptive meshing enables running simulations at accuracy and resolution levels impossible with regular discretization schemes [8, 27] (Figure 3b).\nDespite their advantages, mesh representations have received relatively little attention in machine learning. While meshes are sometimes used for learned geometry processing [9] and generative models of shapes [15, 29], most work on predicting high-dimensional physical systems focuses on grids, owing to the popularity and hardware support for CNN architectures [19]. We introduce a method for predicting dynamics of physical systems, which capitalizes on the advantages of adaptive mesh representations. Our method works by encoding the simulation state into a graph, and performing computations in two separate spaces: the mesh-space, spanned by the simulation mesh, and the Euclidean world-space in which the simulation manifold is embedded (see Figure 3a). By passing messages in mesh-space, we can approximate differential operators that underpin the internal dynamics of most physical systems. Message-passing in world-space can estimate external dynamics, not captured by the mesh-space interactions, such as contact and collision. Unstructured irregular meshes, as opposed to regular grids, support learning dynamics which are independent of resolution, allowing variable resolution and scale at runtime. By learning a map of desired resolution over the mesh (sizing field), together with a local remesher, our method can even adaptively change\n∗equal contribution Videos of all our experiments can be found at https://sites.google.com/view/meshgraphnets\nthe discretization during rollouts, budgeting greater computational resources for important regions of the simulation domain.\nTogether, our method allows us to learn the dynamics of vastly different physical systems, from cloth simulation over structural mechanics to fluid dynamics directly from data, providing only very general biases such as spatial equivariance. We demonstrate that by using mesh-space computation we can reliably model materials with a rest state such as elastics, which are challenging for meshfree prediction models [37]. MESHGRAPHNETS outperform particle- and grid-based baselines, and can generalize to more complex dynamics than those on which it was trained." }, { "heading": "2 RELATED WORK", "text": "Modelling high-dimensional physics problems with deep learning algorithms has become an area of great research interest in fields such as computational fluid dynamics. High resolution simulations are often very slow, and learned models can provide faster predictions, reducing turnaround time for workflows in engineering and science [16, 6, 49, 20, 1]. Short run times are also a desirable property for fluid simulation in visualization and graphics [46, 41, 47]. 
Learned simulations can be useful for real-world predictions where the physical model, parameters or boundary conditions are not fully known [12]. Conversely, the accuracy of predictions can be increased by including specialized knowledge about the modelled system in the form of loss terms [43, 23], or by physics-informed feature normalization [40].\nThe methods mentioned above are based on convolutional architectures on regular grids. Although this is by far the most widespread architecture for learning high-dimensional physical systems, recently there has been increased interest in particle-based representations, which are particularly attractive for modelling the dynamics of free-surface liquids and granular materials. Ladicky et al. [22] use random forests to speed up liquid simulations. Various works [24, 42, 37] use graph neural networks (GNNs) [38, 4] to model particle-based granular materials and fluids, as well as glassy dynamics [3]. Learned methods can improve certain aspects of classical FEM simulations, e.g. more accurate handling of strongly nonlinear displacements [25], or learned elements which directly map between forces and displacements [10]. Finally, the dynamics of high-dimensional systems can be learned in reduced spaces. Holden et al. [18] perform a PCA decomposition on cloth data, and learn a correction model to improve the accuracy of subspace simulation. These models, however, are very domain-specific, and their expressive range is limited by the use of a linear subspace.\nThere is increasing attention on using meshes for learned geometry and shape processing [9, 29, 17]. But despite mesh-based simulations being the tool of choice in mechanical engineering and related disciplines, adaptive mesh representations have not seen much use in machine learning for physics prediction, with a few notable exceptions [5, 2]. Belbute-Peres et al. [5] embed a differentiable aerodynamics solver in a graph convolution (GCN) [21] prediction pipeline for super-resolution in aerodynamics predictions. Our method has similarities, but works without a solver in the loop, which potentially makes it easier to use and adapt to new systems. In Section 5 we show that MESHGRAPHNETS are better suited for dynamical prediction than GCN-based architectures. Finally, Graph Element Networks [2] use meshes over 2D grid domains to more efficiently compute predictions and scene representations. Notably, they use small planar systems (< 50 nodes), while we show how to scale mesh-based predictions to complex 3D systems with thousands of nodes." }, { "heading": "3 MODEL", "text": "We describe the state of the system at time $t$ using a simulation mesh $M^t = (V, E^M)$ with nodes $V$ connected by mesh edges $E^M$. Each node $i \in V$ is associated with a reference mesh-space coordinate $\mathbf{u}_i$ which spans the simulation mesh, and additional dynamical quantities $\mathbf{q}_i$ that we want to model. Eulerian systems (Figure 2c,d) model the evolution of continuous fields such as velocity over a fixed mesh, and $\mathbf{q}_i$ sample these fields at the mesh nodes. In Lagrangian systems, the mesh represents a moving and deforming surface or volume (e.g. Figure 2a,b), and contains an extra world-space coordinate $\mathbf{x}_i$ describing the dynamic state of the mesh in 3D space, in addition to the fixed mesh-space coordinate $\mathbf{u}_i$ (Figure 3a)."
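To make the state description in Section 3 concrete, here is a small sketch of how one could represent a simulation mesh in code; the container and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class MeshState:
    """State of a simulation mesh M^t = (V, E^M) at one time step (our naming)."""
    mesh_pos: np.ndarray         # u_i: (num_nodes, 2 or 3) fixed mesh-space coords
    edges: np.ndarray            # E^M: (num_edges, 2) node index pairs
    node_quantities: np.ndarray  # q_i: per-node dynamical quantities, (num_nodes, d)
    world_pos: Optional[np.ndarray] = None  # x_i: only present for Lagrangian systems
```

For Eulerian systems `world_pos` stays `None`, since the mesh is fixed and only the sampled fields `node_quantities` evolve.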
}, { "heading": "3.1 LEARNING FORWARD DYNAMICS", "text": "The task is to learn a forward model of the dynamic quantities of the mesh at time t+1 given the current mesh M t and (optionally) a history of previous meshes {M t−1, ...,M t−h}. We propose MESHGRAPHNETS, a graph neural network model with an Encode-Process-Decode architecture [4, 37], followed by an integrator. Figure 1 shows a visual scheme of the MESHGRAPHNETS architecture. Domain specific information on the encoding and integration can be found in Section 4.\nEncoder The encoder encodes the current mesh M t into a multigraph G = (V,EM , EW ). Mesh nodes become graph nodes V , and mesh edges become bidirectional mesh-edges EM in the graph. This serves to compute the internal dynamics of the mesh. For Lagrangian systems, we add world edges EW to the graph, to enable learning external dynamics such as (self-) collision and contact,\nwhich are non-local in mesh-space.1 World-space edges are created by spatial proximity: that is, given a fixed-radius rW on the order of the smallest mesh edge lengths, we add a world edge between nodes i and j if |xi − xj | < rW , excluding node pairs already connected in the mesh. This encourages using world edges to pass information between nodes that are spatially close, but distant in mesh space (Figure 3a).\nNext, we encode features into graph nodes and edges. To achieve spatial equivariance, positional features are provided as relative edge features. We encode the relative displacement vector in mesh space uij = ui−uj and its norm |uij | into the mesh edges eMij ∈ EM . Then, we encode the relative world-space displacement vector xij and its norm |xij | into both mesh edges eMij ∈ EM and world edges eWij ∈ EW . All remaining dynamical features qi, as well as a one-hot vector indicating node type, are provided as node features in vi.\nFinally, the concatenated features above are encoded into a latent vector of size 128 at each node and edge, using the encoder MLPs M , W , V for mesh edges eMij , world edges e W ij , and nodes vi respectively. See sections 4 and A.1 for more details on input encoding.\nProcessor The processor consists of L identical message passing blocks, which generalize GraphNet blocks [36] to multiple edge sets. Each block contains a separate set of network parameters, and is applied in sequence to the output of the previous block, updating the mesh edge eMij , world edge eWij , and node vi embeddings to e ′M ij , e ′W ij , v ′ i respectively by\ne′ M ij ← fM (eMij ,vi,vj) , e′ W ij ← fW (eWij ,vi,vj) , v′i ← fV (vi, ∑ j e′ M ij , ∑ j e′ W ij ) (1)\nwhere fM , fW , fV are implemented using MLPs with a residual connection.\nDecoder and state updater For predicting the time t+1 state from the time t input, the decoder uses an MLP δV to transform the latent node features vi after the final processing step into one or more output features pi.\nWe can interpret the output features pi as (higher-order) derivatives of qi, and integrate them using a forward-Euler integrator with ∆t = 1 to compute the next-step dynamical quantity qt+1i . For firstorder systems the output pi is integrated once to update qt+1i = pi + q t i, while for second-order integration happens twice: qt+1i = pi + 2q t i − qt−1. Additional output features pi are also used to make direct predictions of auxiliary quantities such as pressure or stress. For domain-specific details on decoding, see Section 4. Finally, the output mesh nodes V are updated using qt+1i to produce M t+1. 
For some systems, we dynamically adapt the mesh after each prediction step; this is explained in the following section.\n¹From here on, any mention of world edges and world coordinates applies only to Lagrangian systems; they are omitted for Eulerian systems." }, { "heading": "3.2 ADAPTIVE REMESHING", "text": "Adaptive remeshing algorithms generally consist of two parts: identifying which regions of the simulation domain need coarse or fine resolution, and adapting the nodes and their connections to this target resolution. Only the first part requires domain knowledge of the type of physical system, which usually comes in the form of heuristics. For instance, in cloth simulation, one common heuristic is the refinement of areas with high curvature to ensure smooth bending dynamics (Figure 3b), while in computational fluid dynamics, it is common to refine around wall boundaries where high gradients of the velocity field are expected.\nIn this work we adopt the sizing field methodology [27]. The sizing field tensor $\mathbf{S}(\mathbf{u}) \in \mathbb{R}^{2\times 2}$ specifies the desired local resolution by encoding the maximal allowed oriented edge lengths in the simulation mesh. An edge $\mathbf{u}_{ij}$ is valid if and only if $\mathbf{u}_{ij}^\top \mathbf{S}_i\, \mathbf{u}_{ij} \le 1$; otherwise it is too long and needs to be split.² Given the sizing field, a generic local remeshing algorithm can simply split all invalid edges to refine the mesh, and collapse as many edges as possible, without creating new invalid edges, to coarsen the mesh. We denote this remeshing process as $M' = R(M, \mathbf{S})$.\n²This formulation allows different maximal edge lengths depending on the direction: for e.g. a mesh bent around a cylinder, it allows specifying shorter edge lengths in the bent dimension than along the cylinder.\nLearned remeshing To leverage the advantages in efficiency and accuracy of dynamic remeshing, we need to be able to adapt the mesh at test time. Since remeshing requires domain knowledge, we would however need to call the specific remesher used to generate the training data at each step during the model rollout, reducing the benefits of learning the model. Instead, we learn a model of the sizing field (the only domain-specific part of remeshing) using the same architecture as in Section 3.1, and train a decoder output $\mathbf{p}_i$ to produce a sizing tensor for each node. At test time, for each time step we predict both the next simulation state and the sizing field, and use a generic, domain-independent remesher $R$ to compute the adapted next-step mesh as $M^{t+1} = R(\hat{M}^{t+1}, \hat{\mathbf{S}}^{t+1})$. We demonstrate this on triangular meshes; Section A.3 describes the simple generic remesher that we use for this purpose. While the sizing field is agnostic to the mesh type, other mesh types may require different local remeshers; for tetrahedral meshes a method such as Wicke et al. [45] could be used, while quad meshes can simply be split into triangular meshes." }, { "heading": "3.3 MODEL TRAINING", "text": "We trained our dynamics model by supervising on the per-node output features $\mathbf{p}_i$ produced by the decoder, using an $L_2$ loss between $\mathbf{p}_i$ and the corresponding ground truth values $\bar{\mathbf{p}}_i$. Similarly, the sizing field model is trained with an $L_2$ loss on the ground truth sizing field. If sizing information is not available in the training data, e.g. not exposed by the ground truth simulator, we can still estimate a compatible sizing field from samples of simulator meshes, and use this estimate as labels (details in Section A.3.1)." }, { "heading": "4 EXPERIMENTAL DOMAINS", "text": "We evaluated our method on a variety of systems with different underlying PDEs, including cloth, structural mechanics, and incompressible and compressible fluids (Figure 2). Training and test data were produced by a different simulator for each domain.
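Returning briefly to the sizing-field test of Section 3.2: the validity check reduces to one quadratic form per edge. A minimal sketch with illustrative names, assuming NumPy; the per-endpoint averaging of the sizing tensor follows the convention given in the local remesher of Appendix A.3.

```python
import numpy as np

def invalid_edges(u, edges, S):
    """Flag edges with u_ij^T S_ij u_ij > 1, i.e. edges that should be split.

    u: (N, 2) mesh-space coordinates; edges: (E, 2) node index pairs;
    S: (N, 2, 2) per-node sizing tensors.
    """
    i, j = edges[:, 0], edges[:, 1]
    u_ij = u[i] - u[j]                      # (E, 2) mesh-space edge vectors
    S_ij = 0.5 * (S[i] + S[j])              # sizing tensor averaged over endpoints
    metric = np.einsum('ei,eij,ej->e', u_ij, S_ij, u_ij)
    return metric > 1.0                     # True where the edge is too long
```

A generic remesher could sort edges by this metric, split the invalid ones, then collapse edges whose collapse creates no new invalid edges, as described in Appendix A.3.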
The simulation meshes range from regular to highly irregular: the edge lengths of the dataset AIRFOIL range between $2\cdot 10^{-4}$ m and 3.5 m, and we also simulate meshes which dynamically change resolution over the course of a trajectory. Full details on the datasets can be found in Section A.1.\nOur structural mechanics experiments involve a hyper-elastic plate, deformed by a kinematic actuator, simulated with a quasi-static simulator (DEFORMINGPLATE). Both actuator and plate are part of the Lagrangian tetrahedral mesh, and are distinguished by a one-hot vector for the corresponding node type $\mathbf{n}_i$. We encode the node quantities $\mathbf{u}_i, \mathbf{x}_i, \mathbf{n}_i$ in the mesh, and predict the Lagrangian velocity $\dot{\mathbf{x}}_i$, which is integrated once to form the next position $\mathbf{x}^{t+1}_i$. As a second output, the model predicts the von Mises stress $\sigma_i$ at each node.\nOur cloth experiments involve a flag blowing in the wind (FLAGDYNAMIC) and a piece of cloth interacting with a kinematic sphere (SPHEREDYNAMIC) on an adaptive triangular mesh, which changes resolution at each time step. The dataset FLAGSIMPLE shares the setup of FLAGDYNAMIC, but uses a static mesh and ignores collisions. The node type $\mathbf{n}_i$ distinguishes cloth and obstacle/boundary nodes, and we encode the inputs $\mathbf{u}_i, \mathbf{x}_i, \mathbf{n}_i$ as above, but since this is a fully dynamic second-order system, we additionally provide $h=1$ steps of history by including the velocity estimate $\dot{\mathbf{x}}^t_i = \mathbf{x}^t_i - \mathbf{x}^{t-1}_i$ as a node feature. The decoder outputs the acceleration $\ddot{\mathbf{x}}_i$, which is integrated twice.\nOur incompressible fluid experiments use the CYLINDERFLOW dataset, which simulates the flow of water around a cylinder on a fixed 2D Eulerian mesh. The mesh contains the node quantities $\mathbf{u}_i, \mathbf{n}_i, \mathbf{w}_i$, where $\mathbf{w}_i$ is a sample of the momentum field at the mesh nodes. In all fluid domains, the node type distinguishes fluid nodes, wall nodes and inflow/outflow boundary nodes. The network predicts the change in momentum $\dot{\mathbf{w}}_i$, which is integrated once, and a direct prediction of the pressure field $p$.\nOur compressible fluid experiments use the AIRFOIL dataset, which simulates the aerodynamics around the cross-section of an airfoil wing. We model the evolution of the momentum³ $\mathbf{w}$ and density $\rho$ fields, and hence the 2D Eulerian mesh encodes the quantities $\mathbf{u}_i, \mathbf{n}_i, \mathbf{w}_i, \rho_i$. We treat this as a first-order system and predict the change in momentum $\dot{\mathbf{w}}_i$ and density $\dot{\rho}_i$, as well as the pressure $p_i$.\n³In visualizations, we show velocity, calculated as momentum $\mathbf{w}$ divided by density $\rho$." }, { "heading": "5 RESULTS", "text": "We tested our MESHGRAPHNETS model on our four experimental domains (Section 4), and compared it to three different baseline models. Our main findings are that MESHGRAPHNETS are able to produce high-quality rollouts on all domains, outperforming particle- and grid-based baselines, while being significantly faster than the ground truth simulator, and generalizing to much larger and more complex settings at test time.\nVideos of rollouts, as well as comparisons, can be found at https://sites.google.com/view/meshgraphnets. Visually, the dynamics remain plausible and faithful to the ground truth. Table 1 shows 1-step prediction and rollout errors on all of our datasets, while qualitative and quantitative comparisons are provided in Figure 4 and Figure 5. Even though our model was trained on next-step predictions, model rollouts remain stable for thousands of steps.
This video shows a model trained on trajectories of 400 steps rolled out for 40000 steps.\nLearned remeshing We trained both a dynamics and a sizing field model to perform learned dynamic remeshing during rollout on FLAGDYNAMIC and SPHEREDYNAMIC. We compare learned remeshing variants with the sizing model learned from labeled sizing data, as in Section 3.2, as well as from estimated targets, as in Section A.3.1. As a baseline, we ran our forward model on the ground truth mesh sequence. As observed in the video, all learned remeshing variants are able to shift the resolution to the new folds as they appear in the cloth, yield equally plausible dynamics, and are on par⁴ in terms of quantitative performance (Figure 5c). Thus, our learned remeshing method provides the benefits of adaptive remeshing, which can be substantive in some domains, without requiring a domain-specific remesher in the loop.\n⁴Note that the comparison to ground truth requires interpolating to the ground truth mesh, incurring a small interpolation penalty for learned remeshing models.\nComputational efficiency Our approach is consistently faster than ground truth solvers by one to two orders of magnitude on all domains (Table 1). We believe this is due to our model being able to take much larger timesteps than classical solvers, and avoiding performance bottlenecks. Additionally, classical general-purpose solvers on irregular domains, such as those studied in this paper, often do not scale well on hardware accelerators, while our model is built from neural network building blocks, highly suitable for hardware acceleration. A more detailed breakdown of performance, e.g. by hardware setup, is available in the appendix (Section A.5.1). Our model's strong efficiency advantage means it may be applicable in situations where computing costs are otherwise prohibitive.\nGeneralization Our MESHGRAPHNETS model generalizes well outside of the training distribution, with respect to underlying system parameters, mesh shapes, and mesh size. This is because the architectural choice of using relative encoding on graphs has been shown to be very conducive to generalization [37]. Also, by forcing the network to make predictions on very irregularly-shaped and dynamically changing meshes, we encourage learning resolution-independent physics.\nIn AIRFOIL, we evaluate the model on steeper angles ($-35°...35°$ vs $-25°...25°$ in training) and higher inflow speeds (Mach number 0.7...0.9 vs 0.2...0.7 in training). In both cases, the behavior remains plausible (video) and the RMSE rises only slightly from 11.5 at training to 12.4 for steeper angles and 13.1 for higher inflow speeds. We also trained a model on a FLAGDYNAMIC variant with wind speeds and directions varying between trajectories, but constant within each trajectory. At inference time, we can then vary wind speed and direction freely (video). This shows that the local physical laws our model learns can extrapolate to untrained parameter ranges.\nWe also trained a model in the FLAGDYNAMIC domain containing only simple rectangular cloth, and tested its performance on three disconnected fish-shaped flags (video). Both the learned dynamics model and the learned remesher generalized to the new shape, and the predicted dynamics were visually similar to the ground truth sequence. In a more extreme version of this experiment, we test that same model on a windsock with tassels (Figure 4b, video).
Not only has the model never seen a non-flat starting state during training, but the dimensions are also much larger: the mesh averages 20k nodes, an order of magnitude more than seen in training. This result shows the strength of learning resolution- and scale-independent models: we do not necessarily need to train on costly high-resolution simulation data; we may be able to learn to simulate large systems that would be too slow on conventional simulators, by training on smaller examples and scaling up at inference time. A more in-depth analysis of scaling can be found in Appendix A.5.3.\nComparison to mesh-free GNS model We compared our method to the particle-based method GNS [37] on the fixed-mesh dataset FLAGSIMPLE to study the importance of mesh-space embedding and message-passing. As in GNS, the encoder builds a graph with fixed-radius connectivity (10-20 neighbors per node), and relative world-space position embedded as edge features. As GNS lacks the notion of cloth's resting state, error accumulates dramatically and the simulation becomes unstable, with slight improvements when providing 5 steps of history (Figure 5b).\nWe also explored a hybrid method (GNS+mesh-pos), which adds a mesh-space relative position feature $\mathbf{u}_{ij}$ to the GNS edges. This yields rollout errors on par with our method (flattening after 50 steps due to decoherence in both cases); however, it tends to develop artifacts such as entangled triangles, which indicate a lack of reliable understanding of the mesh surface (video). On irregularly spaced meshes (FLAGDYNAMIC), GNS+mesh-pos was not able to produce stable rollouts at all. A fixed connectivity radius will always oversample high-resolution regions and undersample low-resolution regions of the mesh, leading to instabilities and high rollout errors (Figure 5b, right). We conclude that both having access to mesh-space positions and passing messages along the mesh edges are crucial for making predictions on irregularly spaced meshes.\nConversely, we found that passing messages purely in mesh-space, without any world-space edges, also produces substandard results. On FLAGDYNAMIC and SPHEREDYNAMIC we observe an increase in rollout RMSE of 51% and 92% respectively, as (self-)collisions are harder to predict without world edges. In the latter case this is particularly easy to see: the obstacle mesh and cloth mesh are not connected, so without world edges, the model cannot compute their interaction at all.\nComparison to GCNs To study the role of the graph network architecture, we tested our model against GCNs [21], which do not compute messages on edges. We adopted the GCN architecture from Belbute-Peres et al. [5] (without the super-resolution component) and trained it in the same setup as our approach, including e.g. training noise and integration. We replicated results on the aerodynamical steady-state prediction task it was designed for (see Section A.4.2). On the much richer AIRFOIL task, however, the GCN was unable to obtain stable rollouts. This is not simply a question of capacity; we created a hybrid (GCN-MLP) with our model (linear layers replaced by 2-hidden-layer MLPs + LayerNorm; 15 GCN blocks instead of 6), but the rollout quality was still poor (Figure 5a, video). We also ran an ablation of MESHGRAPHNETS without relative encoding in edges, in which absolute positional values are used as node features. This version performed much worse than our main model, yielding visual artifacts in the rollouts, and a rollout RMSE of 26.5 in AIRFOIL.
This is consistent with our hypothesis that the GCN performs worse due to its lack of relative encoding and message computation, which makes the GCN less likely to learn local physical laws and more prone to overfitting.\nComparison to grid-based methods (CNNs) Arguably the most popular methods for predicting physical systems are grid-based convolutional architectures. It is fundamentally hard to simulate Lagrangian deforming meshes with such methods, but we can compare to grid-based methods on the Eulerian 2D domains CYLINDERFLOW and AIRFOIL, by interpolating the ROI onto a 128×128 grid. We implemented the UNet architecture from Thuerey et al. [40], and found that on both datasets MESHGRAPHNETS outperforms the UNet in terms of RMSE (Figure 5a). While the UNet was able to make reasonable predictions on larger scales in AIRFOIL, it undersampled the important wake region around the wingtip (Figure 4a), even while using four times more cells to span a region 16 times smaller than our method (Figure A.1). We observe similar behavior around the obstacle in CYLINDERFLOW. Additionally, as seen in the video, the UNet tends to develop fluctuations during rollout. This indicates that predictions over meshes present advantages even in flat 2D domains.\nKey hyperparameters We tested several architecture variants and found our method is not very sensitive to many choices, such as latent vector width, number of MLP layers and their sizes. Nonetheless, we identified two key parameters which influence performance (Figure 5d). Increasing the number of graph net blocks (message passing steps) generally improves performance, but incurs a higher computational cost. We found that a value of 15 provides a good efficiency/accuracy trade-off for all the systems considered. Second, the model performs best given the shortest possible history ($h=1$ to estimate $\dot{\mathbf{x}}$ in cloth experiments, $h=0$ otherwise), with any extra history leading to overfitting. This differs from GNS [37], which used $h \in 2...5$ for best performance." }, { "heading": "6 CONCLUSION", "text": "MESHGRAPHNETS are a general-purpose mesh-based method which can accurately and efficiently model a wide range of physical systems, generalize well, and can be scaled up at inference time. Our method may allow more efficient simulations than traditional simulators, and because it is differentiable, it may be useful for design optimization or optimal control tasks. Variants tailored to specific physical domains, with physics-based auxiliary loss terms or energy-conserving integration schemes, have the potential to increase the performance further. Finally, learning predictions on meshes opens the door for further work on resolution adaptivity. For example, instead of learning adaptive meshing from ground truth data, we could learn a discretization which directly optimizes for prediction accuracy, or even for performance on a downstream task. This work represents an important step forward in learnable simulation, and offers key advantages for modeling complex systems in science and engineering." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Danilo Rezende, Jonathan Godwin, Charlie Nash, Oriol Vinyals, Matt Hoffman, Kimberly Stachenfeld, Jessica Hamrick, Piotr Trochim, Emre Karagozler and our reviewers for valuable discussions, implementation help and feedback on the work and manuscript." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DATASET DETAILS", "text": "Below we list details for all of our datasets.
“System” describes the underlying PDE: cloth, hyper-elasticity, or compressible and incompressible Navier-Stokes flow. We used ArcSim [27] for simulating the cloth datasets, SU2 [13] for compressible flows, and COMSOL [11] for incompressible flow and hyper-elastic simulations. Hyper-elasticity and cloth are simulated using linear elements. Each dataset consists of 1000 training, 100 validation and 100 test trajectories, each containing 250-600 time steps. Meshing can be either regular, i.e. all edges having similar length; irregular, i.e. edge lengths varying strongly in different regions of the mesh; or dynamic, i.e. changing at each step of the simulation trajectory. For Lagrangian systems, the world edge radius $r_W$ is provided. Our model operates on the simulation time step $\Delta t$ listed below. However, for each output time step, the solvers compute several internal time steps (16 for ArcSim, 100 for SU2, adaptive for COMSOL). As a quasi-static simulation, DEFORMINGPLATE does not have a time step.\n| Dataset | System | Solver | Mesh type | Meshing | #steps | $\Delta t$ (s) | $r_W$ |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| FLAGSIMPLE | cloth | ArcSim | triangle 3D | regular | 400 | 0.02 | — |\n| FLAGDYNAMIC | cloth | ArcSim | triangle 3D | dynamic | 250 | 0.02 | 0.05 |\n| SPHEREDYNAMIC | cloth | ArcSim | triangle 3D | dynamic | 500 | 0.01 | 0.05 |\n| DEFORMINGPLATE | hyper-el. | COMSOL | tetrahedral 3D | irregular | 400 | — | 0.03 |\n| CYLINDERFLOW | incompr. NS | COMSOL | triangle 2D | irregular | 600 | 0.01 | — |\n| AIRFOIL | compr. NS | SU2 | triangle 2D | irregular | 600 | 0.008 | — |\nNext, we list the input encoding for mesh edges $\mathbf{e}^M_{ij}$, world edges $\mathbf{e}^W_{ij}$ and nodes $\mathbf{v}_i$, as well as the predicted output for each system.\n| System | Type | Inputs $\mathbf{e}^M_{ij}$ | Inputs $\mathbf{e}^W_{ij}$ | Inputs $\mathbf{v}_i$ | Outputs $\mathbf{p}_i$ | History $h$ |\n| --- | --- | --- | --- | --- | --- | --- |\n| Cloth | Lagrangian | $\mathbf{u}_{ij}, \vert\mathbf{u}_{ij}\vert, \mathbf{x}_{ij}, \vert\mathbf{x}_{ij}\vert$ | $\mathbf{x}_{ij}, \vert\mathbf{x}_{ij}\vert$ | $\mathbf{n}_i, (\mathbf{x}^t_i - \mathbf{x}^{t-1}_i)$ | $\ddot{\mathbf{x}}_i$ | 1 |\n| Hyper-el. | Lagrangian | $\mathbf{u}_{ij}, \vert\mathbf{u}_{ij}\vert, \mathbf{x}_{ij}, \vert\mathbf{x}_{ij}\vert$ | $\mathbf{x}_{ij}, \vert\mathbf{x}_{ij}\vert$ | $\mathbf{n}_i$ | $\dot{\mathbf{x}}_i, \sigma_i$ | 0 |\n| Incomp. NS | Eulerian | $\mathbf{u}_{ij}, \vert\mathbf{u}_{ij}\vert$ | — | $\mathbf{n}_i, \mathbf{w}_i$ | $\dot{\mathbf{w}}_i, p_i$ | 0 |\n| Compr. NS | Eulerian | $\mathbf{u}_{ij}, \vert\mathbf{u}_{ij}\vert$ | — | $\mathbf{n}_i, \mathbf{w}_i, \rho_i$ | $\dot{\mathbf{w}}_i, \dot{\rho}_i, p_i$ | 0 |\nAll second-derivative output quantities are integrated twice, while first-derivative outputs are integrated once, as described in Section 3.1; all other outputs are direct predictions, and are not integrated. The one-hot node type vector $\mathbf{n}_i$ allows the model to distinguish between normal and kinematic nodes. Normal nodes are simulated, while kinematic nodes either remain fixed in space (such as the two nodes which keep the cloth from falling), or follow scripted motion (as the actuator in DEFORMINGPLATE). For scripted kinematic nodes, we additionally provide the next-step world-space velocity $\mathbf{x}^{t+1}_i - \mathbf{x}^t_i$ as input; this allows the model to predict next-step positions which are consistent with the movement of the actuator. In the variant of FLAGDYNAMIC with varying wind speeds (generalization experiment in Section 5), the wind speed vector is appended to the node features.\nIn the dynamically meshed datasets (FLAGDYNAMIC, SPHEREDYNAMIC), the mesh changes between steps, and there is no 1:1 correspondence between nodes. In this case, we interpolate dynamical quantities from the previous meshes $M^{t-1}, \dots, M^{t-h}$ as well as $M^{t+1}$ into the current mesh $M^t$ using barycentric interpolation in mesh-space, in order to provide history and targets for each node." }, { "heading": "A.2 ADDITIONAL MODEL DETAILS", "text": "" }, { "heading": "A.2.1 ARCHITECTURE AND TRAINING", "text": "The MLPs of the Encoder $\epsilon^M$, $\epsilon^W$, $\epsilon^V$, the Processor $f^M$, $f^W$, $f^V$, and the Decoder $\delta^V$ are ReLU-activated two-hidden-layer MLPs with layer and output size of 128, except for $\delta^V$, whose output size matches the prediction $\mathbf{p}_i$. All MLP outputs except those of $\delta^V$ are normalized by a LayerNorm.
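An MLP matching the sizes just stated (two hidden layers of width 128, ReLU activations, LayerNorm on all outputs except the decoder) might look as follows in PyTorch-style code; this is our sketch of the described block, not the released model code.

```python
import torch.nn as nn

def make_mlp(in_size, out_size, hidden=128, layer_norm=True):
    """Two-hidden-layer ReLU MLP (Section A.2.1); decoder delta_V omits LayerNorm."""
    layers = [nn.Linear(in_size, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, out_size)]
    if layer_norm:
        layers.append(nn.LayerNorm(out_size))
    return nn.Sequential(*layers)
```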
All input and target features are normalized to zero mean and unit variance, using dataset statistics.\nFor training, we only supervise on the next step in sequence; to make our model robust to rollouts of hundreds of steps we use training noise (see Section A.2.2). Models are trained on a single V100 GPU with the Adam optimizer for 10M training steps, using an exponential learning rate decay from $10^{-4}$ to $10^{-6}$ over 5M steps." }, { "heading": "A.2.2 TRAINING NOISE", "text": "We used the same training noise strategy as in GNS [37] to make our model robust to rollouts of hundreds of steps. We add random normal noise of zero mean and fixed variance to the most recent value of the corresponding dynamical variable (Section A.2.3). When choosing how much noise to add, we looked at the one-step model error (usually related to the standard deviation of the targets in the dataset) and scanned the noise magnitude around that value on a logarithmic scale, using two values for each factor of 10. For the exact numbers for each dataset, see Section A.2.3.\nIn the cases where the dataset is modelled as a first-order system (all except the cloth domains), we adjust the targets according to the noise, so that the model decoder produces an output that, after integration, would have corrected the noise at the inputs. For example, in DEFORMINGPLATE, assume the current position of a node is $x^t_i = 2$, and $\tilde{x}^t_i = 2.1$ after adding noise. If the next position is $x^{t+1}_i = 3$, the target velocity for the decoder $\dot{x}_i = 1$ will be adjusted to $\tilde{\dot{x}}_i = 0.9$, so that after integration the model output $\tilde{x}^{t+1}_i$ matches the next step $x^{t+1}_i$, effectively correcting for the added noise, i.e. $\tilde{x}^{t+1}_i = \tilde{x}^t_i + \tilde{\dot{x}}_i = 3 \equiv x^{t+1}_i$.\nIn the second-order domains (cloth), the model decoder outputs the acceleration $\ddot{x}_i$ from the input position $x^t_i$ and velocity $\dot{x}^t_i = x^t_i - x^{t-1}_i$ (as in GNS). As with other systems, we add noise to the position $x^t_i$, which indirectly results in a noisy derivative estimate $\dot{x}^t_i$. In this case, due to the strong dependency between position and velocity, it is impossible to adjust the targets to simultaneously correct for noise in both values. For instance, assume $x^{t-1}_i = 1.4$, $x^t_i = 2$, $x^{t+1}_i = 3$, which implies $\dot{x}^t_i = 0.6$, $\dot{x}^{t+1}_i = 1$, and ground truth acceleration $\ddot{x}_i = 0.4$. After adding 0.1 of noise, the inputs are $\tilde{x}^t_i = 2.1 \Rightarrow \tilde{\dot{x}}^t_i = 0.7$. At this point, we could use a modified acceleration target of $\tilde{\ddot{x}}^P_i = 0.2$, so that after integration the next velocity is $\tilde{\dot{x}}^{t+1}_i = \tilde{\dot{x}}^t_i + \tilde{\ddot{x}}^P_i = 0.9$, and the next position $\tilde{x}^{t+1}_i = \tilde{x}^t_i + \tilde{\dot{x}}^{t+1}_i = 3 \equiv x^{t+1}_i$, effectively correcting for the noise added to the position. However, note that in this case the predicted next-step velocity $\tilde{\dot{x}}^{t+1}_i = 0.9$ does not match the ground truth $\dot{x}^{t+1}_i = 1$. Similarly, if we chose a modified target acceleration of $\tilde{\ddot{x}}^V_i = 0.3$, the next-step velocity $\tilde{\dot{x}}^{t+1}_i = 1$ would match the ground truth, correcting the noise in velocity, but the same would not be true for the next-step position $\tilde{x}^{t+1}_i = 3.1$. Empirically, we treated how to correct the noise for cloth simulation as a hyperparameter $\gamma \in [0, 1]$ which parametrizes a weighted average between the two options: $\tilde{\ddot{x}}_i = \gamma\,\tilde{\ddot{x}}^P_i + (1-\gamma)\,\tilde{\ddot{x}}^V_i$. Best performance was achieved with $\gamma = 0.1$.\nFinally, when the model takes more than one step of history ($h > 1$), e.g. in the ablation from Figure 5d on FLAGDYNAMIC, the noise is added in a random-walk manner with a per-step variance such that the variance at the last step matches the target variance (in accordance with GNS [37]).
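The noise-correction bookkeeping of Section A.2.2 fits in a few lines. This sketch (our naming) shows the first-order target adjustment and the $\gamma$-weighted compromise between position- and velocity-correcting targets for the second-order cloth case; the numbers in the comments reproduce the worked example above.

```python
def first_order_target(x_noisy, x_next):
    """Adjusted velocity target so integrating from the noisy input hits x_next.

    Worked example (DEFORMINGPLATE): x_t = 2 noised to 2.1, x_next = 3 -> 0.9.
    """
    return x_next - x_noisy

def second_order_target(x_prev, x_clean, x_noisy, x_next, gamma=0.1):
    """Blend of position-correcting and velocity-correcting acceleration targets.

    With x_prev = 1.4, x_clean = 2.0, x_noisy = 2.1, x_next = 3.0:
    acc_p = 0.2 (corrects next position), acc_v = 0.3 (corrects next velocity).
    """
    v_noisy = x_noisy - x_prev             # noisy velocity estimate the model sees
    acc_p = (x_next - x_noisy) - v_noisy   # integrates to the true next position
    acc_v = (x_next - x_clean) - v_noisy   # matches the true next velocity
    return gamma * acc_p + (1.0 - gamma) * acc_v
```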
}, { "heading": "A.2.3 HYPERPARAMETERS", "text": "" }, { "heading": "A.3 A DOMAIN-INVARIANT LOCAL REMESHER FOR TRIANGULAR MESHES", "text": "A local remesher [27, 28, 33] changes the mesh by iteratively applying one of three fundamental operations: splitting an edge to refine the mesh, collapsing an edge to coarsen it, and flipping an edge to change orientation and to preserve a sensible aspect ratio of its elements. Edge splits create a new node whose attributes (position, etc.), as well as the associated sizing tensor, are obtained by averaging values of the two nodes forming the split edge. Collapsing removes a node from the mesh, while edge flips leave nodes unaffected.\nedge split edge ip edge collapse\ni j i j i j\nk\nl\nGiven the sizing field tensor Si at each node i, we can define the following conditions for performing edge operations:\n• An edge connecting node i and j should be split if it is invalid, i.e. uTijSijuij > 1 with the averaged sizing tensor Sij = 12 (Si + Sj).\n• An edge should be collapsed, if the collapsing operation does not create any new invalid edges.\n• An edge should be flipped if the an-isotropic Delaunay criterion [7]\n(ujk × uik)uTilSAujl < uTjkSAuik(uil × ujl) , SA = 1\n4 (Si + Sj + Sk + Sl)\nis satisfied. This optimizes the directional aspect ratio of the mesh elements.\nWe can now implement a simple local remesher by applying these operations in sequence. First, we split all possible mesh edges to refine the mesh (in descending order of the metric uTijSijuij), then flip all edges which should be flipped. Next, we collapse all edges we can collapse (in ascending order of the metric uTijSijuij) to coarsen the mesh as much as possible, and finally again flip all possible edges to improve mesh quality." }, { "heading": "A.3.1 ESTIMATING SIZING FIELD TARGETS", "text": "If no sizing field is available to train the sizing model, we can estimate it from a sequence of meshes. That is, for two consecutive meshes M t, M t+1 we want to find the sizing field S that would have induced this transition with a local remesher, i.e. M t+1 = R(M(t),S). To do this, we assume that the remesher is near-optimal, that is, all resulting edges are valid, yet maximum-length under the metric S. For each Si associated with the node i, this can be expressed as:\nSi = argmax ∑ j∈Ni uTijSi uij , s.t.∀j ∈ Ni : uTijSiuij ≤ 1 (2)\nThis problem corresponds to finding the minimum-area, zero-centred ellipse containing the points uij , and can be solved efficiently using the MINIDISK algorithm [44]." }, { "heading": "A.4 ADDITIONAL BASELINE DETAILS", "text": "" }, { "heading": "A.4.1 BASELINE TRAINING", "text": "Baseline architectures were trained within our general training framework, sharing the same normalization, noise and state-update strategies. We optimized the training hyperparameters separately in each case." }, { "heading": "A.4.2 GCN BASELINE", "text": "We re-implemented the base GCN architecture (without the super-resolution component) from Belbute-Peres et al. [5]. To replicate the results, and ensure correctness of our implementation of the baseline, we created a dataset AIRFOILSTEADY which matches the dataset studied in their work. It uses the same solver and a similar setup as our dataset AIRFOIL, except that it has a narrower range of angle of attack (−10◦...10◦ vs−25◦...25◦ in AIRFOIL). The biggest difference is that the prediction task studied in their paper is not a dynamical simulation as our experiments, but a steady-state prediction task. 
That is, instead of unrolling a dynamics model for hundreds of time steps, this task consists of directly predicting the final steady-state momentum, density and pressure fields, given only two scalars (Mach number $m$, angle of attack $\alpha$) as well as the target mesh positions $\mathbf{u}_i$: essentially learning a parametrized distribution.\nIn AIRFOILSTEADY, the GCN predictions are visually indistinguishable from the ground truth, and qualitatively match the results reported in Belbute-Peres et al. [5] for their "interpolation regime" experiments. We also trained our model on AIRFOILSTEADY, as a one-step direct prediction model (without an integrator), with encoding as in AIRFOIL (see Section A.1), but where each node is conditioned on the global Mach number $m$ and angle of attack $\alpha$ instead of density and momentum. Again, results are visually indistinguishable from the ground truth (video), and our model outperforms the GCN in terms of RMSE (ours 0.116 vs GCN 0.159). This is remarkable, as our model's spatial equivariance bias works against this task of directly predicting a global field. This speaks to the flexibility of our architecture, and indicates that it can be used for tasks beyond learning the local physical laws for which it was designed." }, { "heading": "A.4.3 GRID (CNN) BASELINE", "text": "We re-implemented the UNet architecture of Thuerey et al. [40] to exactly match their open-sourced version of the code. We used a batch size of 10. The noise parameters from Section A.2.3 are an absolute noise scale on momentum of 6e-2 for CYLINDERFLOW, and of 1e1 on momentum and 1.5e-2 on density for the AIRFOIL dataset." }, { "heading": "A.5 ADDITIONAL ANALYSIS", "text": "" }, { "heading": "A.5.1 PERFORMANCE", "text": "In the table below, we show a detailed breakdown of the per-step timings of our model, run on a CPU (8-core workstation) or a single V100 GPU. $t_{model}$ measures the inference time of the graph neural network, while $t_{full}$ measures the complete rollout, including remeshing and graph recomputation. The ground truth simulation ($t_{GT}$) was run on the same 8-core workstation CPU. On our datasets, inference uses between 1-2.5 GB of memory, including model variables and system overhead.\n| Dataset | CPU $t_{model}$ (ms/step) | CPU $t_{full}$ (ms/step) | GPU $t_{model}$ (ms/step) | GPU $t_{full}$ (ms/step) | $t_{GT}$ (ms/step) | CPU speedup | GPU speedup |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| FLAGSIMPLE | 186 | 187 | 19 | 19 | 4166 | 22.3 | 214.7 |\n| FLAGDYNAMIC | 534 | 1593 | 43 | 837 | 26199 | 16.4 | 31.3 |\n| SPHEREDYNAMIC | 221 | 402 | 32 | 140 | 1610 | 4.0 | 11.5 |\n| DEFORMINGPLATE | 172 | 174 | 24 | 33 | 2893 | 16.6 | 89.0 |\n| CYLINDERFLOW | 166 | 168 | 21 | 23 | 820 | 4.9 | 35.3 |\n| AIRFOIL | 497 | 499 | 37 | 38 | 11015 | 22.1 | 289.1 |\nThe NN building blocks used in our model are highly optimized for hardware acceleration. However, our ground truth solvers (ArcSim, COMSOL and SU2) do not support GPUs; and more broadly, solvers have varying levels of optimization for different hardware, so we find it hard to provide a 'true' hardware-agnostic performance comparison. We do note a few trends.\nIn the simulation regime studied in this paper (i.e. general-purpose simulations on complex, irregular domains), classical GPU solvers tend to be comparably hard to implement and they do not scale very well, thus many packages do not provide such support. As an example of a general-purpose solver with partial GPU support, ANSYS shows limited speedups of 2x-4x on GPU, even under optimal conditions [30, 14].
On the other hand, evaluating our model on the same CPU hardware as the ground truth solvers, it still achieves speedups between 4x-22x, even in this setting, which is suboptimal for NN models.\nIn practice, using a single GPU, we see speedups of 11x-290x compared to ArcSim, COMSOL and SU2, and users of such simulators with access to a GPU could benefit from these speedups." }, { "heading": "A.5.2 ERROR METRICS", "text": "Rollout RMSE is calculated as the root mean squared error of the position in the Lagrangian systems and of the momentum in the Eulerian systems, taking the mean over all spatial coordinates, all mesh nodes, all steps in each trajectory, and all 100 trajectories in the test dataset. The error bounds in Table 1 and the error bars in Figure 5(a-c) indicate the standard error of the RMSE across 100 trajectories. Error bars in Figure 5(d) correspond to min/median/max performance across 3 seeds.\nIn FLAGSIMPLE and FLAGDYNAMIC, we observed decoherence after the first 50 steps (Figure 5b), due to the chaotic nature of cloth simulation. Since the dynamics of these domains are stationary, we use the rollout error in the first 50 steps of the trajectory for the comparison shown in the bar plots, as a more discerning metric for result quality. However, the reported trends also hold when measured over the whole trajectory.\nIn AIRFOIL, we compute the RMSE in a region of interest around the wing (Figure A.1 middle), which corresponds to the region shown in figures and videos. For comparisons with grid-based methods, we map the predictions on the grid to the ground truth mesh to compute the error." }, { "heading": "A.5.3 ADDITIONAL ANALYSIS ON GENERALIZATION AND SCALING", "text": "We ran inference of our model trained on the FLAGDYNAMIC domain (with learned remeshing) on several scaled-up and scaled-down versions of FLAGDYNAMIC, and on the generalization experiments WINDSOCK and FISHFLAG (see Section 5). In Figure A.3, we report the error compared to the respective ground-truth simulations.\nWhen evaluating the 50-step RMSE rollout error in FLAGDYNAMIC, we do not observe systematic trends of the error as a function of the simulation size, indicating that the model performs similarly well on larger and smaller systems. The error when generalizing to new shapes (WINDSOCK, FISHFLAG) is slightly higher, but comparable.\nThe RMSE rollout error evaluated on the full trajectory shows a stronger correlation with the system size. However, we believe this simply tracks the systematic positional error incurred due to decoherence (e.g. the larger the flag, the higher the positional error that a small angle perturbation due to decoherence induces at its tip), and as shown in Figure 5b, decoherence becomes the main source of error after the first 50 steps of the simulation in this domain." } ]
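To make the remeshing rules of Section A.3 above concrete, here is a minimal sketch of the edge-split and edge-flip tests, assuming 2-D edge vectors and per-node 2×2 sizing tensors; the function and variable names are illustrative and not taken from the authors' released code.

```python
import numpy as np

def is_edge_invalid(u_ij, S_i, S_j):
    """Split test: an edge is invalid (too long) if u^T S_ij u > 1,
    where S_ij averages the sizing tensors at its two endpoints."""
    S_ij = 0.5 * (S_i + S_j)
    return float(u_ij @ S_ij @ u_ij) > 1.0

def should_flip(u_jk, u_ik, u_il, u_jl, S_i, S_j, S_k, S_l):
    """Anisotropic Delaunay test for flipping edge (i, j) with opposite
    nodes k and l; in 2-D the cross product reduces to a scalar."""
    S_A = 0.25 * (S_i + S_j + S_k + S_l)
    cross = lambda a, b: a[0] * b[1] - a[1] * b[0]
    lhs = cross(u_jk, u_ik) * float(u_il @ S_A @ u_jl)
    rhs = float(u_jk @ S_A @ u_ik) * cross(u_il, u_jl)
    return lhs < rhs
```

A remeshing pass would then split edges in descending order of u_ij^T S_ij u_ij while is_edge_invalid holds, flip edges where should_flip is true, and collapse only those edges whose removal creates no new invalid edges, mirroring the sequence described in Section A.3.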
2,021
null
SP:a8ae05c783e0c619cb859c4ad6da479529bf7af4
[ "The proposed method works as follows. Given samples are partitioned into two parts; one is for classifier training and the other is for data synthesizer training. Both are trained in a differentially private manner. After training, the DP synthesizer generates samples and the DP classifier labels them so that the resulting samples can be used as training samples. By the post-processing theorems, the resulting are differentially private, which are published as synthesized samples." ]
Machine learning practitioners frequently seek to leverage the most informative available data, without violating the data owner's privacy, when building predictive models. Differentially private data synthesis protects personal details from exposure, and allows for the training of differentially private machine learning models on privately generated datasets. But how can we effectively assess the efficacy of differentially private synthetic data? In this paper, we survey four differentially private generative adversarial networks for data synthesis. We evaluate each of them at scale on five standard tabular datasets, and in two applied industry scenarios. We benchmark with novel metrics from recent literature and other standard machine learning tools. Our results suggest that some synthesizers are better suited to particular privacy budgets, and we further demonstrate complicating domain-based tradeoffs in selecting an approach. We offer researchers and practitioners alike experimental learnings from applied machine learning scenarios with private internal data. In addition, we propose QUAIL, a two-model hybrid approach to generating synthetic data. We examine QUAIL's tradeoffs, and note circumstances in which it outperforms baseline differentially private supervised learning models under the same budget constraint.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Joshua Allen", "Bolin Ding", "Janardhan Kulkarni", "Harsha Nori", "Olga Ohrimenko", "Sergey Yekhanin" ], "title": "An algorithmic framework for differentially private data analysis on trusted processors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christian Arnold", "Marcel Neunhoeffer" ], "title": "Really useful synthetic data–a framework to evaluate the quality of differentially private synthetic data", "venue": "arXiv preprint arXiv:2004.07740,", "year": 2020 }, { "authors": [ "Kamalika Chaudhuri", "Claire Monteleoni", "Anand D Sarwate" ], "title": "Differentially private empirical risk minimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Dingfan Chen", "Ning Yu", "Yang Zhang", "Mario Fritz" ], "title": "Gan-leaks: A taxonomy of membership inference attacks against gans", "venue": null, "year": 1909 }, { "authors": [ "Bolin Ding", "Janardhan Kulkarni", "Sergey Yekhanin" ], "title": "Collecting telemetry data privately", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Stelios Doudalis", "Ios Kotsogiannis", "Samuel Haney", "Ashwin Machanavajjhala", "Sharad Mehrotra" ], "title": "One-sided differential privacy", "venue": "arXiv preprint arXiv:1712.05888,", "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff. UCI machine learning repository." ], "title": "URL http://archive", "venue": "ics.uci.edu/ml.", "year": 2017 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of Cryptography Conference,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Aaron Roth" ], "title": "The algorithmic foundations of differential privacy", "venue": "Foundations and Trends in Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Du D. Duan Z. Li A. (eds) Dwork", "Cynthia In: Agrawal M" ], "title": "Differential privacy: A survey of results", "venue": "In Theory and Applications of Models of Computation, pp. 
Lecture Notes in Computer Science,", "year": 2008 }, { "authors": [ "Vitaly Feldman", "Ilya Mironov", "Kunal Talwar", "Abhradeep Thakurta" ], "title": "Privacy amplification by iteration", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Moritz Hardt", "Guy N Rothblum" ], "title": "A multiplicative weights mechanism for privacy-preserving data analysis", "venue": "IEEE 51st Annual Symposium on Foundations of Computer Science,", "year": 2010 }, { "authors": [ "Moritz Hardt", "Katrina Ligett", "Frank McSherry" ], "title": "A simple and practical algorithm for differentially private data release", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Michael Hay", "Ashwin Machanavajjhala", "Gerome Miklau", "Yan Chen", "Dan Zhang" ], "title": "Principled evaluation of differentially private algorithms using dpbench", "venue": "In Proceedings of the 2016 International Conference on Management of Data,", "year": 2016 }, { "authors": [ "Jamie Hayes", "Luca Melis", "George Danezis", "Emiliano De Cristofaro" ], "title": "Logan: Membership inference attacks against generative models", "venue": "Proceedings on Privacy Enhancing Technologies,", "year": 2019 }, { "authors": [ "Briland Hitaj", "Giuseppe Ateniese", "Fernando Perez-Cruz" ], "title": "Deep models under the gan: information leakage from collaborative deep learning", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Bargav Jayaraman", "David Evans" ], "title": "Evaluating differentially private machine learning in practice", "venue": "In 28th {USENIX} Security Symposium ({USENIX} Security 19),", "year": 2019 }, { "authors": [ "Zhanglong Ji", "Zachary C Lipton", "Charles Elkan" ], "title": "Differential privacy and machine learning: a survey and review", "venue": "arXiv preprint arXiv:1412.7584,", "year": 2014 }, { "authors": [ "James Jordon", "Jinsung Yoon", "Mihaela van der Schaar" ], "title": "Measuring the quality of synthetic data for use in competitions", "venue": "arXiv preprint arXiv:1806.11345,", "year": 2018 }, { "authors": [ "James Jordon", "Jinsung Yoon", "Mihaela van der Schaar" ], "title": "Pate-gan: Generating synthetic data with differential privacy guarantees", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Haoran Li", "Li Xiong", "Lucila Ohno-Machado", "Xiaoqian Jiang" ], "title": "Privacy preserving rbf kernel support vector machine", "venue": "BioMed research international,", "year": 2014 }, { "authors": [ "Jingcheng Liu", "Kunal Talwar" ], "title": "Private selection from private candidates", "venue": "In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2019 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are gans created equal? 
a large-scale study", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ashwin Machanavajjhala", "Xi He", "Michael Hay" ], "title": "Differential privacy in the wild: A tutorial on current practices & open challenges", "venue": "In Proceedings of the 2017 ACM International Conference on Management of Data,", "year": 2017 }, { "authors": [ "Frank McSherry", "Kunal Talwar" ], "title": "Mechanism design via differential privacy", "venue": "Annual IEEE Symposium on Foundations of Computer Science", "year": 2007 }, { "authors": [ "Sumit Mukherjee", "Yixi Xu", "Anusua Trivedi", "Juan Lavista Ferres" ], "title": "Protecting gans against privacy attacks by preventing overfitting", "venue": null, "year": 2001 }, { "authors": [ "Hung Nguyen", "Di Zhuang", "Pei-Yuan Wu", "Morris Chang" ], "title": "Autogan-based dimension reduction for privacy", "venue": "preservation. Neurocomputing,", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Martı́n Abadi", "Ulfar Erlingsson", "Ian Goodfellow", "Kunal Talwar" ], "title": "Semisupervised knowledge transfer for deep learning from private training data", "venue": "arXiv preprint arXiv:1610.05755,", "year": 2016 }, { "authors": [ "Noseong Park", "Mahmoud Mohammadi", "Kshitij Gorde", "Sushil Jajodia", "Hongkyu Park", "Youngmin Kim" ], "title": "Data synthesis based on generative adversarial networks", "venue": "arXiv preprint arXiv:1806.03384,", "year": 2018 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "NhatHai Phan", "Xintao Wu", "Dejing Dou" ], "title": "Preserving differential privacy in convolutional deep belief networks", "venue": "Machine learning,", "year": 2017 }, { "authors": [ "Steven Ruggles", "Catherine Fitch", "Diana Magnuson", "Jonathan Schroeder" ], "title": "Differential privacy and census data: Implications for social and economic research", "venue": "In AEA papers and proceedings,", "year": 2019 }, { "authors": [ "Or Sheffet" ], "title": "Private approximations of the 2nd-moment matrix using existing techniques in linear regression", "venue": "arXiv preprint arXiv:1507.00056,", "year": 2015 }, { "authors": [ "Reza Shokri", "Vitaly Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security,", "year": 2015 }, { "authors": [ "Joshua Snoke", "Aleksandra" ], "title": "Slavković. 
pmse mechanism: Differentially private synthetic data with maximal distributional similarity", "venue": "In International Conference on Privacy in Statistical Databases,", "year": 2018 }, { "authors": [ "Reihaneh Torkzadehmahani", "Peter Kairouz", "Benedict Paten" ], "title": "Dp-cgan: Differentially private synthetic data and label generation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Jaideep Vaidya", "Basit Shafiq", "Anirban Basu", "Yuan Hong" ], "title": "Differentially private naive bayes classification", "venue": "In 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT),", "year": 2013 }, { "authors": [ "Giuseppe Vietri", "Grace Tian", "Mark Bun", "Thomas Steinke", "Zhiwei Steven Wu" ], "title": "New oracleefficient algorithms for private synthetic data release", "venue": "arXiv preprint arXiv:2007.05453,", "year": 2020 }, { "authors": [ "Liyang Xie", "Kaixiang Lin", "Shu Wang", "Fei Wang", "Jiayu Zhou" ], "title": "Differentially private generative adversarial network", "venue": "arXiv preprint arXiv:1802.06739,", "year": 2018 }, { "authors": [ "Jingjing Xu", "Xuancheng Ren", "Junyang Lin", "Xu Sun" ], "title": "Dp-gan: diversity-promoting generative adversarial network for generating informative and diversified text", "venue": "arXiv preprint arXiv:1802.01345,", "year": 2018 }, { "authors": [ "Lei Xu", "Maria Skoularidou", "Alfredo Cuesta-Infante", "Kalyan Veeramachaneni" ], "title": "Modeling tabular data using conditional GAN", "venue": "CoRR, abs/1907.00503,", "year": 2019 }, { "authors": [ "Matei Zaharia", "Andrew Chen", "Aaron Davidson", "Ali Ghodsi", "Sue Ann Hong", "Andy Konwinski", "Siddharth Murching", "Tomas Nykodym", "Paul Ogilvie", "Mani Parkhe" ], "title": "Accelerating the machine learning lifecycle with mlflow", "venue": "IEEE Data Eng. Bull.,", "year": 2018 }, { "authors": [ "Xinyang Zhang", "Shouling Ji", "Ting Wang" ], "title": "Differentially private releasing via deep generative model (technical report)", "venue": "arXiv preprint arXiv:1801.01594,", "year": 2018 }, { "authors": [ "Zuhe Zhang", "Benjamin Rubinstein", "Christos Dimitrakakis" ], "title": "On the differential privacy of bayesian inference", "venue": "arXiv preprint arXiv:1512.06992,", "year": 2015 }, { "authors": [ "Jingwen Zhao", "Yunfang Chen", "Wei Zhang" ], "title": "Differential privacy preservation in deep learning: Challenges, opportunities and solutions", "venue": "IEEE Access,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Maintaining an individual’s privacy is a major concern when collecting sensitive information from groups or organizations. A formalization of privacy, known as differential privacy, has become the gold standard with which to protect information from malicious agents (Dwork, TAMC 2008). Differential privacy offers some of the most stringent known theoretical privacy guarantees (Dwork et al., 2014). Intuitively, for some query on some dataset, a differentially private algorithm produces an output, regulated by a privacy parameter , that is statistically indistinguishable from the same query on the same dataset had any one individual’s information been removed. This powerful tool has been adopted by researchers and industry leaders, and has become particularly interesting to machine learning practitioners, who hope to leverage privatized data in training predictive models (Ji et al., 2014; Vietri et al., 2020).\nBecause differential privacy often depends on adding noise, the results of differentially private algorithms can come at the cost of data accuracy and utility. However, differentially private machine learning algorithms have shown promise across a number of domains. These algorithms can provide tight privacy guarantees while still producing accurate predictions (Abadi et al., 2016). A drawback to most methods, however, is in the one-off nature of training: once the model is produced, the privacy budget for a real dataset can be entirely consumed. The differentially private model is therefore inflexible to retraining and difficult to share/verify: the output model is a black box.\nThis can be especially disadvantageous in the presence of high dimensional data that require rigorous training techniques like dimensionality reduction or feature selection (Hay et al., 2016). With limited budget to spend, data scientists cannot exercise free range over a dataset, thus sacrificing model quality. In an effort to remedy this, and other challenges faced by traditional differentially private methods for querying, we can use differentially private techniques for synthetic data generation, investigate the privatized data, and train informed supervised learning models.\nIn order to use the many state-of-the-art methods for differentially private synthetic data effectively in industry domains, we must first address pitfalls in practical analysis, such as the lack of realistic\nbenchmarking (Arnold & Neunhoeffer, 2020). Benchmarking is non-trivial, as many new stateof-the-art differentially private synthetic data algorithms leverage generative adversarial networks (GANs), making them expensive to evaluate on large scale datasets (Zhao et al., 2019). Furthermore, many of state-of-the-art approaches lack direct comparisons to one another, and by nature of the privatization mechanisms, interpreting experimental results is non-trivial (Jayaraman & Evans, 2019). New metrics presented to analyze differentially private synthetic data methods may themselves need more work to understand, especially in the domain of tabular data (Ruggles et al., 2019; Machanavajjhala et al., 2017).\nTo that end, our contributions in this paper are 3-fold. (1) We introduce more realisitic benchmarking. Practitioners commonly collect state-of-the-art approaches for comparison in a shared environment (Xu et al., 2019). We provide our evaluation framework, with extensive comparisons on both standard datasets and our real-world, industry applications. 
(2) We provide experimentation on novel metrics at scale. We stress the tradeoff between synthetic data utility and statistical similarity, and offer guidelines for untried data. (3) We present a straightforward and pragmatic enhancement, QUAIL, that addresses the tradeoff between utility and statistical similarity. QUAIL's simple modification to a differentially private data synthesis architecture boosts synthetic data utility in machine learning scenarios without harming summary statistics or privacy guarantees." }, { "heading": "2 BACKGROUND", "text": "Differential Privacy (DP) is a formal definition of privacy offering strong assurances against various re-identification and re-construction attacks (Dwork et al., 2006; 2014). In the last decade, DP has attracted significant attention due to its provable privacy guarantees and ability to quantify privacy loss, as well as unique properties such as robustness to auxiliary information, composability enabling modular design, and group privacy (Dwork et al., 2014; Abadi et al., 2016).\nDefinition 1. (Differential Privacy, Dwork et al. (2006)) A randomized function K provides (ε, δ)-differential privacy if, ∀S ⊆ Range(K) and all neighboring datasets D, D̂ differing on a single entry,\nPr[K(D) ∈ S] ≤ e^ε · Pr[K(D̂) ∈ S] + δ. (1)\nThis is a standard definition of DP, implying that the outputs of a differentially private algorithm for datasets that vary by a single individual are indistinguishable, bounded by the privacy parameter ε. Here, ε is a non-negative number otherwise known as the privacy budget. Smaller ε values more rigorously enforce privacy, but often decrease data utility. An important property of DP is its resistance to post-processing. Given an (ε, δ)-differentially private algorithm K : D → O, and an arbitrary randomized mapping f : O → O′, the composition f ◦ K : D → O′ is also differentially private. Currently, the widespread accessibility of data has increased data protection and privacy regulations, leading to a surge of research into applied scenarios for differential privacy (Allen et al. (2019); Ding et al. (2017); Doudalis et al. (2017)). There have been several studies into protecting individuals' privacy during model training (Li et al. (2014); Zhang et al. (2015); Feldman et al. (2018)). In particular, several studies have attempted to solve the problem of preserving privacy in deep learning (Phan et al. (2017); Abadi et al. (2016); Shokri & Shmatikov (2015); Xie et al. (2018); Zhang et al. (2018); Jordon et al. (2018b); Torkzadehmahani et al. (2019)). Here, two main techniques for training models with differential privacy are discussed:\nDP-SGD Differentially Private Stochastic Gradient Descent (DP-SGD), proposed by Abadi et al. (2016), is one of the first approaches to make the Stochastic Gradient Descent (SGD) computation differentially private. Intuitively, DP-SGD minimizes its loss function while preserving differential privacy by clipping each gradient's ℓ2 norm to reduce the model's sensitivity, and adding noise to protect privacy. Further details can be found in the Appendix.\nPATE Papernot et al. (2016) provided Private Aggregation of Teacher Ensembles (PATE), which functions by first deploying multiple teacher models that are trained on disjoint datasets, then deploying the teacher models on unseen data to make predictions. On unseen data, the teacher models "vote" to determine the label; here random noise is introduced to privatize the results of the vote. The random noise is generated following the Laplace Lap(λ) distribution. 
PATE further introduces student models, which try to train a model but only have access to the privatized labels garnered from the teachers' vote. By training multiple teachers on disjoint datasets and adding noise\nto the output predicted by those teacher models, the student cannot relearn an individual teacher's model or related parameters." }, { "heading": "2.1 PRIVACY PRESERVING SYNTHETIC DATA MODELS", "text": "Synthetic data generation techniques, such as generative adversarial networks (GANs) (Goodfellow et al. (2014); Arjovsky et al. (2017); Xu et al. (2019)), have become a practical way to release realistic fake data for various explorations and analyses. Although these techniques are able to generate high-quality fake data, they may also reveal user-sensitive information and are vulnerable to re-identification and/or membership attacks (Hayes et al. (2019); Hitaj et al. (2017); Chen et al. (2019)). Therefore, in the interest of data protection, these techniques must be formally privatized. In recent years, researchers have combined data synthesis methods with DP solutions to allow for the release of data with high utility while preserving an individual's privacy (Xie et al. (2018); Jordon et al. (2018b); Park et al. (2018); Mukherjee et al. (2019)). Below, we briefly discuss three popular differentially private data synthesizers, evaluated in this paper.\nMWEM The Multiplicative Weights Exponential Mechanism (MWEM), proposed by Hardt et al. (2012), is a simple yet effective technique for releasing differentially private datasets. It combines Multiplicative Weights (Hardt & Rothblum, 2010) with the Exponential Mechanism (McSherry & Talwar, 2007) to achieve differential privacy. The Exponential Mechanism is a popular mechanism for designing ε-differentially private algorithms that select for a best set of results R using a scoring function s(B, r). Informally, s(B, r) can be thought of as the quality of a result r for a dataset B. MWEM starts with a dataset approximation and uses the Multiplicative Weights update rule to improve the accuracy of the approximating distribution by selecting for informative queries using the Exponential Mechanism. This process of updates iteratively improves the approximation.\nDPGAN Following Abadi et al. (2016)'s work, a number of studies utilized DP-SGD and GANs to generate differentially private synthetic data (Xie et al., 2018; Torkzadehmahani et al., 2019; Xu et al., 2018). These models inject noise into the GAN's discriminator during training to enforce differential privacy. DP's guarantee of post-processing privacy means that privatizing the GAN's discriminator enforces differential privacy on the parameters of the GAN's generator, as the mapping between the two networks does not involve any private data. We use the Differentially Private Generative Adversarial Network (DPGAN) of Xie et al. (2018) as one of our benchmark synthesizers. DPGAN leverages the Wasserstein GAN proposed by Arjovsky et al. (2017), adds noise to the gradients, and clips the model weights only, ensuring the Lipschitz property of the network. DPGAN has been evaluated on image data and Electronic Health Records (EHR) in the past.\nPATE-GAN Jordon et al. (2018b) modified the Private Aggregation of Teacher Ensembles (PATE) framework to apply to GANs in order to preserve the differential privacy of synthetic data. Similarly to DPGAN, PATE-GAN only applies the PATE mechanism to the discriminator. 
The dataset is first partitioned into k subsets, and k teacher discriminators are initialized. Each teacher discriminator is trained to discriminate between a subset of the original data and fake data produced by the generator. The student discriminator is then trained to distinguish real from fake data using the labels generated by an ensemble of teacher discriminators with random noise added. Lastly, the generator is trained to fool the student discriminator. Jordon et al. (2018b) claim that this method outperforms DPGAN for classification tasks, and present supporting results." }, { "heading": "3 ENHANCING PERFORMANCE", "text": "The QUAIL Hybrid Method As we explored generating differentially private synthetic data, we noted a disconnect between the distribution of epsilon, or privacy budget, and the algorithm's application. Generating synthetic data to provide summary statistics necessitates an even distribution of budget across the entire privatization effort; we cannot know a user's query in advance. We may want to reallocate the budget, however, for a known supervised learning task.\nQUAIL (Quail-ified Architecture to Improve Learning) is a simple, two-model hybrid approach to enhancing the utility of a differentially private synthetic dataset for machine learning tasks. Intuitively, QUAIL assembles a DP supervised learning model in tandem with a DP synthetic data model to produce synthetic data with machine learning potential. Algorithm 1 describes the procedure more formally.\nAlgorithm 1: QUAIL pseudocode Input: Dataset D, supervised learning target dimension r′, budget ε > 0, split factor 0 < p < 1, size n samples to generate, a differentially private synthesizer M(D, ε), and a differentially private supervised learning model C(D, ε, t) (t is the supervisory signal, i.e. target dimension). We let X be the universe of samples, and N denote the set of all non-negative integers. Thus, N^|X| is all databases in universe X, as described in Section 2.3 of Dwork et al. (2014).\n1 Split; Split the budget: ε_M = ε · p and ε_C = ε · (1 − p). Create D_M, which is identical to D except r′ ∉ D_M. 2 In parallel; • Train the differentially private supervised learning model: C(D, ε_C, r′) to produce C_r′(s) : N^|X| → R_1, which can map any arbitrary s ∈ N^|X| to an output label. • Train the differentially private synthesizer: M(D_M, ε_M) : N^|X| → R_2 to produce synthesizer M_D_M, which produces synthetic data S ∈ N^|X|.\n3 Sample; 1. Using M_D_M, generate synthetic dataset S_D_M with n samples. 2. For each sample s_i ∈ S_D_M, apply C_r′(s_i) = r_i, i.e. apply the model to each synthetic datapoint to produce a supervised learning target output r_i. 3. Transform S_D_M → S_R. For each row s_i ∈ S_D_M, s_i = [s_i, r_i] s.t. ∀s_i, s_i ∈ dom(D), i.e. append r_i to each row s_i so that S_R is now in the same domain as D, the original dataset.\nOutput: Return S_R, a synthetic dataset with n samples, where each sample in S_R has target dimension r_i produced by the supervised learner C_r′.\nTheorem 3.1 (QUAIL follows the standard composition theorem for (ε, δ)-differential privacy). The QUAIL method preserves the differential privacy guarantees of C(R, ε_C, r′) and M(R_M, ε_M) by the standard composition rules of differential privacy (Dwork et al., 2014).\nProof. Let the first (ε, δ)-differentially private mechanism M_1 : N^|X| → R_1 be C(R, ε_C, r′). Let the second (ε, δ)-differentially private mechanism M_2 : N^|X| → R_2 be M(R_M, ε_M). 
Fix 0 < p < 1, ε_M = p · ε and ε_C = (1 − p) · ε; then by construction, Pr[M_1,2(x) = (r_1, r_2)] / Pr[M_1,2(y) = (r_1, r_2)] ≥ exp(−(ε_M + ε_C)), which satisfies the differential privacy constraints for a privacy budget of ε_M + ε_C = ε_total. For more details, see the appendix.\nDifferentially Private GANs for Tabular Data In this paper, we focus on tabular synthetic data, and explored state-of-the-art methods for generating tabular data with GANs. CTGAN is a state-of-the-art GAN for generating tabular data presented by Xu et al. (2019). We made CTGAN differentially private using the aforementioned techniques, DP-SGD and PATE. CTGAN addresses specific challenges that a vanilla GAN faces when generating tabular data, such as mode collapse and continuous data following a non-Gaussian distribution (Xu et al., 2019). To model continuous data with multi-modal distributions, it leverages mode-specific normalization. In addition, CTGAN introduces a conditional generator, which can generate synthetic rows conditioned on specific discrete columns. CTGAN further trains by sampling, which explores discrete values more evenly.\nDP-CTGAN Inspired by Xie et al. (2018)'s DPGAN work, we applied DP-SGD to the CTGAN architecture (details can be found in Figure 1 in the Appendix). Similarly to DPGAN, in applying DP-SGD to CTGAN we add random noise to the discriminator and clip the norm to make it differentially private. Based on the post-processing property (Dwork et al., 2014) that any randomized mapping of a differentially private output is also differentially private, the generator is guaranteed to be differentially private when it is trained to maximize the probability of D(G(z)). In CTGAN, the authors add the cross-entropy loss between the conditional vector and the produced set of one-hot discrete vectors into the generator loss. To guarantee differential privacy with the generator, we removed the cross-entropy loss when calculating the generator loss. Thus, the generator is differentially private as well. See Figure 1 in the Appendix for a diagram.\nPATE-CTGAN Drawing from work on PATE-GAN, we applied the PATE framework to CTGAN (Jordon et al., 2018b). Similarly to PATE-GAN, we partitioned our original dataset into k subsets and trained k differentially private teacher discriminators to distinguish real and fake data. In order to apply the PATE framework, we further modified CTGAN's teacher discriminator training: instead of using one generator to generate samples, we initialize k conditional generators, one for each subset of data (shown in Figure 2 in the appendix).\nFigure 1: Block diagram of DP-CTGAN model.\nFigure 2: Teacher discriminator of PATE-CTGAN model." }, { "heading": "4 EVALUATION: METRICS, INFRASTRUCTURE AND PUBLIC BENCHMARKS", "text": "We focus on two sets of metrics in our benchmarks: one for comparing the distributional similarity of two datasets and another for comparing the utility of synthetic datasets given a specific predictive task. These two dimensions should be viewed as complementary, and in tandem they capture the overall quality of the synthetic data.\nDistributional similarity To provide a quantitative measure for comparison of synthetically generated datasets, we use a relatively new metric for assessing synthetic data quality: the propensity score mean-squared error (pMSE) ratio score. Proposed by Snoke & Slavković (2018), pMSE provides a statistic to capture the distributional similarity between two datasets. 
Given two datasets, we combine the two together with an indicator to label which set a specific observation comes from. A discriminator is then trained to predict these indicator labels. To calculate pMSE, we simply compute the mean-squared error of the predicted probabilities for this classification task. If our model is unable to discern between these classes, then the two datasets are said to have high distributional similarity. To help limit the sensitivity of this metric to outliers, Snoke & Slavković (2018) propose transforming pMSE to a ratio by leveraging an approximation to the null distribution. For the ratio, we simply divide the pMSE by the expectation of the null distribution. A ratio score of 0 implies the two datasets are identical.\nMachine Learning Utility Given the context of this paper, we aim to provide quantitative measures for approximating the utility of differentially private synthetic data with regard to machine learning tasks. Specifically, we used three metrics: AUC-ROC and F1-score, two traditional utility measures, and the synthetic ranking agreement (SRA), a more recent measure. SRA can be thought of as the probability that a comparison between any two algorithms on the synthetic data will be similar to comparisons of the same two algorithms on the real data (Jordon et al., 2018a). Descriptions of each metric can be found in the Appendix.\nEvaluation Infrastructure The design of our pipeline addressed scalability concerns, allowing us to benchmark four computationally expensive GANs on five high dimensional datasets across the privacy budgets ε = [0.01, 0.1, 0.5, 1.0, 3.0, 6.0, 9.0], averaged across 12 runs. We used varying compute, including CPU nodes (24 cores, 224 GB RAM, 1440 GB disk) and GPU nodes (4 × NVIDIA Tesla K80). Despite extensive computational resources, we could not adequately address the problem of hyperparameter tuning for differentially private algorithms on machine learning tasks, which is an open research problem (Liu & Talwar, 2019). In our case, a grid search was computationally intractable: for each run of the public datasets on all synthesizers, Car averaged 1.27 hours, Mushroom averaged 8.33 hours, Bank averaged 13.30 hours, Adult averaged 14.47 hours and Shopping averaged 27.37 hours. We trained our GANs using the experimentally determined hyperparameters, and were informed by prior work around each algorithm. We include a description of the parameters used for each synthesizer in the appendix.\nRegarding F1-score and AUC-ROC: We averaged across the maximum performance of five classification models: an AdaBoost classifier, a Bagging classifier, a Logistic Regression classifier, a Multilayer Perceptron classifier, and a Random Forest classifier. We decided to focus on one classification scenario specifically: train-synthetic test-real or TSTR, which was far more representative of applied scenarios than train-synthetic test-synthetic. We compare these values to train-real test-real (TRTR).\nExperimental Results: Public Datasets We ran experiments on five public real datasets, which helped inform the applied scenario discussed in Section 5. Full details of the experiments can be found in the Appendix, Figures 25-30. We will refer to the datasets as Adult, Car, Mushroom, Bank and Shopping, and will discuss a handful of the results here in terms of their machine learning utility and their statistical similarity to the real data. 
The individual synthesizer’s are color coded consistently across plots, and their performance is tracked according to dataset (so “dpctgan car” tracks the graphed metric for DPCTGAN on the Car dataset).\nIn our Car evaluations in Figure 3, we see strong performance from the QUAIL variants on very low values. However, we note that for ≥ 3.0, DPCTGAN and PATECTGAN outperform even the QUAIL enhanced models. We further note that PATECTGAN performs remarkably well on the pMSE metric across values in Figure 28b. In our Mushroom evaluations in Figure 28a, QUAIL variants also outperformed other synthesizers. However, PATECTGAN’s exhibits the best statistical similarity (pMSE score) with larger . In our evaluations on the Adult dataset in Figure 30a, while PATECTGAN performs well, DPCTGAN performs best when ≥ 3.0. Our findings suggest that generally, with larger budgets ( ≥ 3.0), PATECTGAN improves on other synthesizers, both in terms of utility and statistical similarity. With smaller budgets ( ≤ 1.0), DPCTGAN may perform better. Synthesizers are not able to achieve reasonable utility under low budgets ( ≤ 0.1), but DPCTGAN was able to achieve statistical similarity in this setting." }, { "heading": "5 EVALUATION: APPLIED SCENARIO", "text": "Supported by learnings from experiments on the public datasets, we evaluated our benchmark DP synthesizers on several private internal datasets, for different scenarios such as classification and regression. We show that DP synthetic data models can perform on real-world data, despite a noisy supervised learning problem and skewed distributions when compared to the more standardized public datasets.\nClassification The data used in this set of experiment include ∼100,000 samples and 30 features. The data includes only categorical columns each containing between 2 to 24 categories. One of our tasks with this dataset was to train a classification task with three classes. We faced significant challenges when managing the long-tail distribution of each feature. Figure 26, which can be found in the appendix, shows an example of data distributions for different attributes in this data.\nWe ran our evaluation suite on the applied internal data scenarios to generate the synthetic data from each DP synthesizer and benchmark standard ML models. We also applied a Logistic Regression classifier with differential privacy from IBM. (Chaudhuri et al., 2011; diffprivlib) to the real data as a baseline. Figure 5a shows the ML results from our evaluation suite. As expected, as the privacy budget increases, performance generally improves. DP-CTGAN had the highest performance without the QUAIL enhancement. QUAIL, however, improved the performance of all synthesizers. In particular, a QUAIL enhanced DPCTGAN synthesizer had the highest performance across epsilons in this experiment. In particular, these experiments demonstrated the advantages of QUAIL, combining DP synthesizers with a DP classifier for a classification model.\nRegression In this experiment, we used another internal data for the task of regression. Our dataset included 27466 and 6867 training and testing samples, respectively. The domain comprised eight categorical and 40 continuous features. After generating the DP synthetic data from each model, we used Linear Regression to predict the target variable. Figure 5b shows the results from the evaluation suite. We used RMSE as the evaluation metric. For QUAIL boosting, we used a Linear Regression model with differential privacy from IBM (Sheffet, 2015; diffprivlib). 
We also compared the DP synthesizers with a "vanilla" DP Linear Regression (DPLR) using real data.\nIn this experiment, PATECTGAN outperformed other models and even improved on the RMSE (root-mean-squared error) when compared to the real data for budget ε > 1.0. For QUAIL-enhanced models, the RMSE is considerably larger than on the real data and other DP synthetic data. We attribute this to a weakness of the embedded regression model (DP Linear Regression) in QUAIL for this data scenario. Based on our observations, a small privacy budget (ε < 10.0) for DP Linear Regression significantly affects its performance. However, as shown in Figure 5b, we still see some boost from the QUAIL variant synthesizers when compared to the "vanilla" DP Linear Regression. For a distributional similarity comparison, please refer to Figure 27 in the appendix.\nQUAIL Evaluations QUAIL's hyperparameter, the split factor p where 0 < p < 1, determines the distribution of budget between classifier and synthesizer. We generated classification task datasets with 10000-50000 samples, 7 feature columns and 10 output classes using the make_classification utility from Scikit-learn (Pedregosa et al., 2011). We experimented with the values p = [0.1, 0.3, 0.5, 0.7, 0.9], and report on results, varying budget ε = [1.0, 3.0, 10.0]. See the appendix for complete results and a list of DP classifiers we experimented with embedding in QUAIL.\nOur figures represent the delta δ in F1 score between training the classifier C(R, ε_C, r′) on the original dataset (the "vanilla" scenario) (F1v), and training a Random Forest classifier on the differentially private synthetic dataset produced by applying QUAIL to a hybrid of C(R, ε_C, r′) and one of our benchmark synthesizers M(D, ε_M) (F1q). We plot δ = F1v − F1q across epsilon splits and data sizes. Positive to highly positive deltas are grey→red, indicating the "vanilla" scenario outperformed the QUAIL scenario. Small or negative deltas are blue, indicating the QUAIL scenario matched, or even outperformed, the "vanilla" scenario. Each cell contains δ for some p on datasets of 10000-50000 samples. In our results we use DP Gaussian Naive Bayes (DP-GNB) as C(R, ε_C, r′) (F1v), and trained a Random Forest classifier on data generated by QUAIL (F1q) (recall QUAIL combines C(R, ε_C, r′) and a DP synthesizer) (Vaidya et al., 2013; diffprivlib). We average across 75 runs.\nNote the correlation between epsilon split, data size and classification performance when embedding PATECTGAN in QUAIL, shown in Figure 6, suggesting that a higher p split value increases the likelihood of outperforming C(R, ε_C, r′). For an embedded MWEM synthesizer, seen in Figure 7, the relationship between split, scale and performance was more ambiguous. In general, a higher split factor p, which assigns more budget to the differentially private classifier C(D, ε_M, t), could improve the utility of the overall synthetic dataset. However, any perceived improvements were highly dependent on the differentially private synthesizer used. Our QUAIL results are agnostic to the embedded supervised learning algorithm C(R, ε_C, r′), as they depict relative performance, though different methods of supervised learning are more suitable to certain domains. 
Future work might explore alternative classifiers or regression models, and how purposefully overfitting the model C(R, ε_C, r′) could contribute to improved synthetic data.\nPeeling Back QUAIL: Analysis of ε via clustering and direct comparison By first assessing the t-SNE clustering in Figures 8 and 9, we see that not only is the synthetic data produced by QUAIL very similar to the real data, but the accuracy of the labeling for the embedded model (in this case, DPLR) is also very similar. Further investigation into data scale revealed that the QUAIL method takes advantage of allocating excess epsilon when datasets are large. As data scale increases, the sensitivity of the differentially private model decreases, and so less epsilon can be used more efficiently. Thus, we see an exaggerated difference between DPLR embedded in QUAIL (with an epsilon of 2.4) and DPLR with an epsilon of 3.0 for a dataset of 20,000 samples. In this case, the embedded DPLR model accuracy suffers, and so does the learning utility of the produced synthetic data. Conversely, as we increase the data size to 50,000 and 100,000 samples, we see that the internal model (with epsilon 2.4) can match the performance of the vanilla model (with epsilon 3.0). Then, the synthetic dataset serves only to augment the performance by a small but significant margin (in Figure 10, we see a three percent bump in F1 score).\nTime Performance Analysis of QUAIL: Making supervised learning more efficient QUAIL benefits the efficiency of training-intensive GANs. In Table 1, the time performance of QUAIL is compared with non-QUAIL methods. Specifically, we select two epsilons (ε = 3.0 and ε = 6.0) and two QUAIL split factors (p = 0.9 and p = 0.5). From this table, it can be seen that in all GAN-based models, QUAIL can improve time efficiency considerably. This is more noticeable as the epsilon increases, where training time for models such as DPCTGAN and DPGAN skyrockets." }, { "heading": "6 PUNCHLINES", "text": "We summarize our findings in the following punchlines, concise takeaways from our work for researchers and applied practitioners exploring DP synthetic data.\n1. Holistic Performance. No single model always performed best (but PATECTGAN performed well often). Model performance was domain dependent, with continuous/categorical features, dataset scale and distributional complexity all affecting benchmark results. However, in general, we found that PATECTGAN had better utility and statistical similarity in scenarios with a high privacy budget (ε ≥ 3.0) when compared to the other synthesizers we benchmarked. Conversely, with a low privacy budget (ε ≤ 1.0) we found that DPCTGAN had better utility, but PATECTGAN may still be better in terms of statistical similarity.\n2. Computational tradeoff. Our highest-performing GANs were slow, and MWEM is fast. PATECTGAN and DPCTGAN, while being our most performant synthesizers, were also the slowest to train. With GANs, more computation often correlates with higher performance (Lucic et al., 2018). On categorical data, MWEM performed competitively, and is significantly faster to train in any domain.\n3. Using AUC-ROC and F1 Score. One should calculate both, especially to best understand QUAIL's tradeoffs. Our highest performing models by F1 Score often had QUAIL enhancements, which sometimes, but not always, detrimentally affected AUC-ROC. Without both metrics, one risks using a QUAIL enhancement for a model with high training accuracy that struggles to generalize.\n4. Using pMSE. 
pMSE can be used alongside ML utility metrics to balance experiments. pMSE concisely captures statistical similarity, and allows practitioners to easily balance utility against the distributional quality of their synthetic data.\n5. Enhancing with QUAIL. QUAIL's effectiveness depends far more on the quality of the embedded differentially private classifier than on the synthesizer. QUAIL showed promising results in almost all the scenarios we evaluated. Given confidence in the embedded "vanilla" differentially private classifier, QUAIL can be used regularly to improve the utility of DP synthetic data.\n6. Reservations for use in applied scenarios. Applied DP for ML is hard, thanks to scale and dimensionality. The applied scenarios we presented assessed large datasets, leading to high computational costs that make performance tuning difficult. Dimensionality is tricky to deal with in large, sparse, imbalanced private applied scenarios (like we faced with internal datasets). Practitioners may want to investigate differentially private feature selection or dimensionality reduction before training. We are aware of work being done to embed autoencoders into differentially private synthesizers, and view this as a promising approach (Nguyen et al., 2020)." }, { "heading": "7 CONCLUSION", "text": "With this paper, we set out to assess the efficacy of differentially private synthetic data for use on machine learning tasks. We surveyed a histogram-based approach (MWEM) and four differentially private GANs for data synthesis (DPGAN, PATE-GAN, DPCTGAN and PATECTGAN). We evaluated each approach using an extensive benchmarking pipeline. We proposed and evaluated QUAIL, a straightforward method to enhance synthetic data utility in ML tasks. We reported on results from two applied internal machine learning scenarios. Our experiments favored PATECTGAN when the privacy budget ε ≥ 3.0, and DPCTGAN when the privacy budget ε ≤ 1.0. We discussed nuances of domain-based tradeoffs and offered takeaways across current methods of model selection, training and benchmarking. As of writing, our experiments represent one of the largest efforts at benchmarking differentially private synthetic data, and demonstrate the promise of this approach when tackling private real-world machine learning problems." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "B METHODS", "text": "" }, { "heading": "B.1 DP-SGD DETAILED STEPS", "text": "The detailed training steps are as follows:\n1. A batch of random samples is taken and the gradient for each sample is computed.\n2. Each computed gradient g is clipped to g / max(1, ‖g‖_2 / C), where C is a clipping bound hyperparameter.\n3. Gaussian noise N(0, σ²C²I) (where σ is the noise scale) is added to the clipped gradients and the model parameters are updated.\n4. Finally, the overall privacy cost (ε, δ) is computed using a privacy accountant method." }, { "heading": "B.2 DESCRIPTIONS OF METRICS: F1-SCORE, AUC-ROC AND SRA", "text": "F1-score measures the accuracy of a classifier, essentially calculating the harmonic mean of precision and recall, which favors the lower of the two. It varies between 0 and 1, where 1 is perfect performance.\nAUC-ROC: Area Under the Receiver Operating Characteristic (AUC-ROC) represents the Receiver Operating Characteristic curve in a single number between 0 and 1. This provides insight into the true positive vs. 
false positive rate of the classifier.\nSRA: SRA can be thought of as the probability that a comparison between any two algorithms on the synthetic data will be similar to a comparison of the same two algorithms on the real data. SRA compares train-synthetic test-real (i.e. TSTR, which uses differentially private synthetic data to train the classifier, and real data to test) with train-real test-real (TRTR, which uses real data to train and test the classifier).\nFurther Motivation Machine learning practitioners often need a deep understanding of data in order to train predictive models. That can be incredibly difficult when data is private. One-off, black-box "vanilla" DP classifiers cannot be retrained, as this risks individual privacy, making parameter tuning and feature selection incredibly difficult with these models. Differentially private synthetic data allows practitioners to treat data normally, without further privacy considerations, giving them an opportunity to fine-tune their models." }, { "heading": "C QUAIL", "text": "" }, { "heading": "C.1 QUAIL FURTHER DETAILS", "text": "We evaluated with a few vanilla differentially private classifiers C(R, ε_C, r′):\n1. Logistic Regression classifier with differential privacy. (Chaudhuri et al., 2011; diffprivlib)\n2. Gaussian Naive Bayes with differential privacy. (Vaidya et al., 2013; diffprivlib)\n3. Multi-layer Perceptron (Neural Network) with differential privacy. (Abadi et al., 2016)\nTheorem C.1. Standard Composition Theorem (Dwork et al., 2014) Let M_1 : N^|X| → R_1 be an ε_1-differentially private algorithm, and let M_2 : N^|X| → R_2 be an ε_2-differentially private algorithm. Then their combination, defined to be M_1,2 : N^|X| → R_1 × R_2 by the mapping M_1,2(x) = (M_1(x), M_2(x)), is (ε_1 + ε_2)-differentially private.\nProof. Let x, y ∈ N^|X| be such that ||x − y||_1 ≤ 1. Fix any (r_1, r_2) ∈ R_1 × R_2. Then:\nPr[M_1,2(x) = (r_1, r_2)] / Pr[M_1,2(y) = (r_1, r_2)] = (Pr[M_1(x) = r_1] · Pr[M_2(x) = r_2]) / (Pr[M_1(y) = r_1] · Pr[M_2(y) = r_2]) (2)\n= (Pr[M_1(x) = r_1] / Pr[M_1(y) = r_1]) · (Pr[M_2(x) = r_2] / Pr[M_2(y) = r_2]) (3)\n≤ exp(ε_1) · exp(ε_2) (4) = exp(ε_1 + ε_2) (5)\nBy symmetry, Pr[M_1,2(x) = (r_1, r_2)] / Pr[M_1,2(y) = (r_1, r_2)] ≥ exp(−(ε_1 + ε_2)).\nProof. QUAIL: full proof of differential privacy. Let the first (ε, δ)-differentially private mechanism M_1 : N^|X| → R_1 be C(R, ε_C, r′). Let the second (ε, δ)-differentially private mechanism M_2 : N^|X| → R_2 be M(R_M, ε_M). Fix 0 < p < 1; then by construction, with original budget B = ε,\nB = ε = p · ε + (1 − p) · ε (6)\nε_M = p · ε (7)\nε_C = (1 − p) · ε (8)\nBy the Standard Composition Theorem, (9)\nPr[M_C,M(x) = (r_1, r_2)] / Pr[M_C,M(y) = (r_1, r_2)] ≥ exp(−(ε_M + ε_C)) (10)\nPr[M_C,M(x) = (r_1, r_2)] / Pr[M_C,M(y) = (r_1, r_2)] ≥ exp(−B) (11)" }, { "heading": "C.2 QUAIL FULL RESULTS", "text": "" }, { "heading": "D EVALUATIONS", "text": "" }, { "heading": "D.1 DESCRIPTION OF INFRASTRUCTURE", "text": "Our experimental pipeline provides an extensible interface for loading datasets from remote hosts, specifically from the UCI ML Dataset repository (Dua & Graff, 2017). For each evaluation dataset,
Our compute used CPU nodes STANDARD NC24r (24 Cores, 224 GB RAM, 1440 GB Disk) and GPU nodes GPU (4 x NVIDIA Tesla K80). We highly encourage future work into hyperparameter tuning for differentially private machine learning tasks, and believe our evaluation pipeline could be of some use in that effort." }, { "heading": "D.2 DETAILS ON DATA", "text": "Results presented on the Public Datasets are averaged across 12 runs. SRA results were moved to the appendix after difficulty interpreting their significance, although there are potential trends that warrant further exploration.\nBelow is a list of parameters used in training:" }, { "heading": "DPCTGAN", "text": "embedding dim =128 , gen dim =(256 , 2 5 6 ) , d i s d i m =(256 , 2 5 6 ) , l 2 s c a l e =1e −6 , b a t c h s i z e =500 , epochs =300 ,\n(a) Real Data AUC-ROC: 0.09\n(a) SRA results for MWEM\nbank car shopping mushroom adult epsilons\n0.01 0.4 0.1 0.3 0.9 0.3 0.10 0.4 0.3 0.6 0.9 0.5 0.50 0.7 0.2 0.6 0.7 0.6 1.00 0.7 0.1 0.6 0.8 0.8 3.00 0.3 0.3 0.5 0.9 0.2 6.00 0.5 0.3 0.6 0.9 0.2 9.00 0.6 0.3 0.6 0.7 0.4\n(b) SRA results for PATEGAN\nbank car shopping mushroom adult epsilons\n0.01 0.8 0.5 0.8 0.6 0.4 0.10 0.7 0.4 0.7 0.3 0.8 0.50 0.5 0.7 0.4 0.9 0.5 1.00 0.6 0.4 0.8 0.9 0.8 3.00 0.8 0.7 0.7 1.0 0.7 6.00 0.6 0.9 0.6 0.3 0.8 9.00 0.5 0.6 0.5 0.7 0.6\npack =1 , l o g f r e q u e n c y =True , d i s a b l e d d p = F a l s e , t a r g e t d e l t a =None , s igma = 5 , m a x p e r s a m p l e g r a d n o r m = 1 . 0 , v e r b o s e =True , l o s s = ’ w a s s e r s t e i n ’" }, { "heading": "PATECTGAN", "text": "embedding dim =128 , gen dim =(256 , 2 5 6 ) , d i s d i m =(256 , 2 5 6 ) , l 2 s c a l e =1e −6 , epochs =300 , pack =1 ,\n(a) SRA results for DPGAN\nbank car shopping mushroom adult epsilons\n0.01 0.6 0.2 0.7 0.7 0.4 0.10 0.4 0.1 0.7 0.7 0.7 0.50 0.4 0.4 0.9 0.6 0.3 1.00 0.8 0.2 0.7 1.0 0.5 3.00 0.9 0.4 0.2 1.0 0.9 6.00 0.9 0.6 0.5 0.6 0.8 9.00 0.4 0.2 0.9 0.9 0.7\n(b) SRA results for PATECTGAN\nbank car shopping mushroom adult epsilons\n0.01 0.4 0.4 0.7 1.0 0.5 0.10 0.7 0.3 0.5 0.9 0.5 0.50 0.5 0.8 0.7 0.9 0.4 1.00 0.5 0.3 0.8 0.6 0.5 3.00 0.7 0.4 0.5 0.8 0.3 6.00 0.5 0.4 0.5 1.0 0.4 9.00 0.4 0.3 0.5 0.8 0.5\n(a) SRA results for DPCTGAN\nbank car shopping mushroom adult epsilons\n0.01 0.5 0.1 0.8 0.9 0.0 0.10 0.3 0.1 0.6 0.9 0.2 0.50 0.9 0.2 0.5 0.7 0.3 1.00 0.4 0.2 0.5 0.9 0.0 3.00 0.1 0.1 0.0 0.8 0.8 6.00 0.1 0.5 0.1 0.8 0.8 9.00 0.0 0.5 0.2 1.0 0.7\n(b) SRA results for QUAIL (MWEM)\nbank car shopping mushroom adult epsilons\n0.01 0.6 0.7 1.0 1.0 0.6 0.10 0.8 0.5 1.0 0.4 0.4 0.50 0.2 0.6 0.9 0.4 0.2 1.00 0.5 0.5 0.6 0.5 0.6 3.00 0.5 0.5 0.3 0.4 0.6 6.00 0.7 0.5 0.8 0.3 0.7 9.00 0.7 0.4 0.2 0.4 0.4\n(a) SRA results for QUAIL (PATEGAN)\nbank car shopping mushroom adult epsilons\n0.01 0.6 0.3 0.7 0.3 0.8 0.10 0.4 0.5 0.7 0.4 0.6 0.50 0.2 0.5 0.7 0.3 0.6 1.00 0.7 0.5 0.8 0.3 0.2 3.00 0.4 0.4 0.2 0.3 0.5 6.00 0.2 0.5 0.3 0.4 0.0 9.00 0.6 0.5 0.6 0.4 0.4\n(b) SRA results for QUAIL (DPGAN)\nbank car shopping mushroom adult epsilons\n0.01 0.5 0.7 0.9 1.0 0.2 0.10 0.6 0.6 0.5 0.9 0.6 0.50 0.2 0.5 0.5 0.5 0.3 1.00 0.5 0.6 0.9 0.3 0.6 3.00 0.6 0.5 0.2 0.5 0.5 6.00 0.6 0.5 0.3 0.5 0.8 9.00 0.6 0.5 0.0 0.5 0.2\nl o g f r e q u e n c y =True , d i s a b l e d d p = F a l s e , t a r g e t d e l t a =None , s igma = 5 , m a x p e r s a m p l e g r a d n o r m = 1 . 
0 , v e r b o s e =True , l o s s = ’ c r o s s e n t r o p y ’ , b i n a r y = F a l s e , b a t c h s i z e = 500 , t e a c h e r i t e r s = 5 , s t u d e n t i t e r s = 5 , d e l t a = 1e −5" }, { "heading": "DPGAN", "text": "b i n a r y = F a l s e , l a t e n t d i m =64 , b a t c h s i z e =64 , epochs =1000 , d e l t a =1e −5\n(a) SRA results for QUAIL (PATECTGAN)\nbank car shopping mushroom adult epsilons\n0.01 0.9 0.3 0.6 0.3 0.5 0.10 0.4 0.6 0.9 0.3 0.9 0.50 0.7 0.5 0.5 0.3 0.6 1.00 0.0 0.5 0.9 0.3 0.9 3.00 0.6 0.4 0.1 0.3 0.7 6.00 0.7 0.4 0.8 0.4 0.9 9.00 0.7 0.4 0.9 0.3 0.8\n(b) SRA results for QUAIL (DPCTGAN)\nbank car shopping mushroom adult epsilons\n0.01 0.8 0.2 0.9 0.5 0.2 0.10 0.6 0.5 0.8 0.3 0.3 0.50 0.1 0.4 0.7 0.3 0.4 1.00 0.7 0.5 0.5 0.3 0.4 3.00 0.1 0.4 0.8 0.4 0.5 6.00 0.0 0.4 0.7 0.3 0.1 9.00 0.3 0.4 0.1 0.5 0.1" }, { "heading": "PATEGAN", "text": "b i n a r y = F a l s e , l a t e n t d i m =64 , b a t c h s i z e =64 , t e a c h e r i t e r s =5 , s t u d e n t i t e r s =5 , d e l t a =1e −5" } ]
2,020
null
SP:959ed37c07a831c71c5dd586a5940313e62b7018
[ "A novel recall loss (RecallCE) that considers dynamically-changing class recalls is proposed in this paper to mitigate class imbalance in long-tailed recognition problems. The class recalls are estimated using either the current batch statistics or an exponential moving average, depending on the number of class (or class diversity) present in training batches. Relationships between RecallCE and existing widely-used loss functions are mathematically shown. RecallCE performs competitively with existing loss functions on semantic segmentation tasks and outperform them on image classification tasks." ]
Class imbalance is a fundamental problem in computer vision applications such as semantic segmentation and image classification. Specifically, uneven class distributions in a training dataset often result in unsatisfactory performance on underrepresented classes. Many works have proposed to weigh the standard cross entropy loss function with pre-computed weights based on class statistics such as the number of samples and class margins. There are two major drawbacks to these methods: 1) constantly up-weighing minority classes can introduce excessive false positives, especially in semantic segmentation; 2) many recent works discovered that pre-computed weights have adverse effects on representation learning. In this regard, we propose a hard-class mining loss by reshaping the vanilla cross entropy loss such that it weights the loss for each class dynamically based on changing recall performance. We show mathematically that the novel recall loss changes gradually between the standard cross entropy loss and the well-known inverse frequency cross entropy loss and balances precision and accuracy. We first demonstrate that the proposed loss effectively balances precision and accuracy on semantic segmentation datasets, and leads to significant performance improvement over other state-of-the-art loss functions used in semantic segmentation, especially on shallow networks. On image classification, we design a simple two-head training strategy to show that the novel loss function improves representation learning on imbalanced datasets. We outperform the previously best performing method by 5.7% on Place365-LT and by 1.1% on iNaturalist.
[]
[ { "authors": [ "Vijay Badrinarayanan", "Alex Kendall", "Roberto Cipolla" ], "title": "Segnet: A deep convolutional encoderdecoder architecture for image segmentation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Maxim Berman", "Amal Rannen Triki", "Matthew B Blaschko" ], "title": "The lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Robin Chan", "Matthias Rottmann", "Fabian Hüger", "Peter Schlicht", "Hanno Gottschalk" ], "title": "Application of decision rules for handling class imbalance in semantic segmentation", "venue": null, "year": 1901 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": "arXiv preprint arXiv:1706.05587,", "year": 2017 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Qi Dong", "Shaogang Gong", "Xiatian Zhu" ], "title": "Imbalanced deep learning by minority class incremental rectification", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Tom Eelbode", "Jeroen Bertels", "Maxim Berman", "Dirk Vandermeulen", "Frederik Maes", "Raf Bisschops", "Matthew B Blaschko" ], "title": "Optimization for medical image segmentation: Theory and practice when evaluating with dice score or jaccard index", "venue": "IEEE Transactions on Medical Imaging,", "year": 2020 }, { "authors": [ "David Eigen", "Rob Fergus" ], "title": "Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Munawar Hayat", "Salman Khan", "Syed Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Gaussian affinity for max-margin class imbalanced learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "arXiv preprint arXiv:1910.09217,", "year": 2020 }, { 
"authors": [ "Salman Khan", "Munawar Hayat", "Syed Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Striking the right balance with uncertainty", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Salman H Khan", "Munawar Hayat", "Mohammed Bennamoun", "Ferdous A Sohel", "Roberto Togneri" ], "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Md Atiqur Rahman", "Yang Wang" ], "title": "Optimizing intersection-over-union in deep neural networks for image segmentation", "venue": "In International symposium on visual computing,", "year": 2016 }, { "authors": [ "German Ros", "Laura Sellart", "Joanna Materzynska", "David Vazquez", "Antonio M Lopez" ], "title": "The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Seyed Sadegh Mohseni Salehi", "Deniz Erdogmus", "Ali Gholipour" ], "title": "Tversky loss function for image segmentation using 3d fully convolutional deep networks", "venue": "In International Workshop on Machine Learning in Medical Imaging,", "year": 2017 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Carole H Sudre", "Wenqi Li", "Tom Vercauteren", "Sebastien Ourselin", "M Jorge Cardoso" ], "title": "Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. 
", "venue": "Deep learning in medical image analysis and multimodal learning for clinical decision support", "year": 2017 }, { "authors": [ "Saeid Asgari Taghanaki", "Yefeng Zheng", "S Kevin Zhou", "Bogdan Georgescu", "Puneet Sharma", "Daguang Xu", "Dorin Comaniciu", "Ghassan Hamarneh" ], "title": "Combo loss: Handling input and output imbalance in multi-organ segmentation", "venue": "Computerized Medical Imaging and Graphics,", "year": 2019 }, { "authors": [ "Grant Van Horn", "Oisin Mac Aodha", "Yang Song", "Yin Cui", "Chen Sun", "Alex Shepard", "Hartwig Adam", "Pietro Perona", "Serge Belongie" ], "title": "The inaturalist species classification and detection dataset", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Jiaqi Wang", "Wenwei Zhang", "Yuhang Zang", "Yuhang Cao", "Jiangmiao Pang", "Tao Gong", "Kai Chen", "Ziwei Liu", "Chen Change Loy", "Dahua Lin" ], "title": "Seesaw loss for long-tailed instance segmentation", "venue": null, "year": 2020 }, { "authors": [ "Xiao Zhang", "Zhiyuan Fang", "Yandong Wen", "Zhifeng Li", "Yu Qiao" ], "title": "Range loss for deep face recognition with long-tailed training data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Boyan Zhou", "Quan Cui", "Xiu-Shen Wei", "Zhao-Min Chen" ], "title": "Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Dataset imbalance is an important problem for many computer vision tasks such as semantic segmentation and image classification. In semantic segmentation, imbalance occurs as a result of natural occurrence and varying sizes of different classes. For example, in an outdoor driving segmentation dataset, light poles and pedestrians are considered minority classes compared to large classes such as building, sky, and road. These minority classes are often more important than large classes for safety reasons. In image classification, imbalance can occur as a result of data collection. Some classes are more difficult to obtain data for than others. For example, the inaturalist dataset (Van Horn et al., 2018) has collected images of over 8000 natural species. Since some species are rare, the dataset exhibits the notorious long-tail distribution. When presented with imbalanced datasets, the standard cross entropy loss often yields unsatisfactory results as the training process naturally biases towards large classes resulting in low accuracy and precision on small classes.\nResearchers have studied the imbalance problem for classification, detection, and segmentation extensively. Most prior research has been on designing balanced loss functions. We classify existing loss functions under three categories: region-based losses, statistics-balanced losses and performance-balanced losses. Region-based losses directly optimize region metrics (e.g., Jaccard index (Rahman & Wang, 2016)) and are mainly popular in medical segmentation applications; Statistics-balanced losses (e.g., LDAM (Cao et al., 2019), Class-Balanced (CB) loss (Cui et al., 2019)) up/down weighs the contribution of a class based on its class margin or class size; however, they tend to encourage excessive false positives in minority classes to improve mean accuracy especially in segmentation. A recent study in Zhou et al. (2020) also shows that the weighting undermines the generic representation learning capability of the feature extractors; Performancebalanced losses (e.g., focal loss (Lin et al., 2017)) use a certain performance indicator to weigh\nthe loss of each class. As an example, focal loss assumes that the difficulty of a class is correlated with imbalance and can be reflected by the predicted confidence. However, it has not been very successful in other applications for dealing with imbalance as reported by Cui et al. (2019). We investigate the reasons of failure in Appendix A.1. Besides various losses, another thread focuses on training strategies to decouple classifier and representation learning in image classification such as the two-stage (Kang et al., 2020) and two-branch (Zhou et al., 2020) approaches. The decoupling approaches have shown state-of-the-art performance compared to other carefully designed losses. As studied by Zhou et al. (2020), statistics-balanced losses might even negatively affect representation learning because they always upweigh a minority class and ignores many more examples from the large classes.\nWe propose a novel performance-balanced loss using the recall metric to address the imbalance problem. The recall loss down/up weighs a class based on the training recall performance of that class. It is an example of hard class mining as supposed to the hard example mining strategy in the focal loss. Unlike the statistics-balanced losses, the recall loss dynamically changes its weights with training based on per-class recall performance (see fig. 1(a)). 
The dynamism is the key to overcoming many drawbacks of the statistics-balanced losses. In our experiments, the CB loss improves accuracy at the expense of Intersection over Union (IOU), which considers false positives in semantic segmentation. However, our recall loss can effectively balance precision and recall for each class, and hence it improves accuracy while maintaining a competitive IOU. Experiments on two benchmark semantic segmentation datasets demonstrate that the proposed recall loss shows significantly better performance than state-of-the-art loss functions used in prior works. We also show that while statistics-balanced losses negatively affect representation learning, the recall loss improves representation learning for imbalanced image classification and achieves state-of-the-art results with our simple decoupled network (fig. 1(b),(c)) on two common benchmarks. Specifically, we outperform previous state-of-the-art methods on Place-LT by 5.7% and iNaturalist2018 by 1.1%.

Our main contributions are summarized below.

• We introduce a novel loss function based on the metric recall. Recall loss weighs the standard cross entropy loss for each class with its instantaneous training recall performance.

• The proposed recall loss learns a better semantic segmentation model that provides improved and balanced performance of accuracy and IOU. We demonstrate the loss on both synthetic and real semantic segmentation datasets.

• The proposed loss also improves feature learning in image classification. We show state-of-the-art results on two common classification benchmarks with a simple decoupled network." }, { "heading": "2 RELATED WORK", "text": "Imbalance in Image Classification. Various losses have been proposed to deal with imbalance or long-tail distributions in image classification. Cost-sensitive loss (Khan et al., 2017) proposes to iteratively optimize both the model parameters and a cost-sensitive layer that is integrated into the cost function (more in Appendix B). Lifted Loss (Oh Song et al., 2016) considers all positive and negative pairs in a mini-batch. Range loss (Zhang et al., 2017) pushes examples in the same class together while forcing different class centers away from each other. More complicated margin-based approaches (Dong et al., 2018; Khan et al., 2019; Hayat et al., 2019) are discussed in Appendix B. Class-Balanced Loss (Cui et al., 2019) motivates a weighted cross entropy loss with the concept of the effective number of samples in each class. LDAM (Cao et al., 2019) also derives a weighted cross entropy loss based on margins between classes. However, DecoupleRC (Kang et al., 2020) pointed out that balanced losses might negatively affect the representation learning process; hence, classifier learning and representation learning should be separated. OLTR (Liu et al., 2019) first learns a good representation and uses an attention mechanism to learn a balanced classifier. In the same spirit, DRW (Cao et al., 2019) uses two-stage training, and BBN (Zhou et al., 2020) proposes a two-branch network with a custom training schedule. Both methods emphasize generic representation learning in the beginning and rebalancing the small classes at a later stage. However, both methods require extensive experiments for finding a good learning schedule. 
Drawing from the same idea, we design a Simple Decoupled Network (SDN) that uses two classification heads, where one head is responsible for feature extractor training and the other for classifier training.
Imbalance in Image Segmentation. In image segmentation, the Dice and Jaccard indices (Intersection over Union) are commonly used as evaluation metrics. However, the most common training criterion, cross entropy, does not directly optimize these metrics. In medical imaging, researchers have proposed to optimize soft/surrogate versions of these indices. SoftIOU (Rahman & Wang, 2016) optimizes a soft version of the Jaccard index; Lovasz Softmax (Berman et al., 2018) also optimizes the Jaccard index based on the Lovasz convex extension; SoftDice (Sudre et al., 2017) optimizes a soft version of the Dice index, and similarly softTversky (Salehi et al., 2017) optimizes a soft Tversky index. Table 1 in Section 3.4 provides an overview of the different indices. However, concerns have been raised in Taghanaki et al. (2019) on whether these losses consider the trade-off between false positives and false negatives. We show that they also tend to yield high mean accuracy at the expense of lower mean IOU, whereas our loss improves accuracy while maintaining a competitive IOU.
Imbalance in Object Detection. Imbalance is also a problem in object detection, where the foreground-background imbalance is extreme and undermines learning. Online Hard Example Mining (OHEM) (Shrivastava et al., 2016) proposes to find hard examples by ranking the losses and only keeping those with the highest losses. Seesaw Loss (Wang et al., 2020) proposes to dynamically weight the cross entropy loss with cumulative class ratios. Focal Loss (FL) (Lin et al., 2017) chooses to down-weigh easy samples and emphasize hard samples by weighting each sample by (1 − p)^γ, where p is the predicted probability for the sample. The weight for each sample changes dynamically with training, and the method never completely discards any samples. Focal loss is especially successful because it is easy to implement and proves effective in the binary foreground-background imbalance setting. We compare the proposed method with these losses on image classification and semantic segmentation." }, { "heading": "3 RECALL LOSS", "text": "" }, { "heading": "3.1 MOTIVATION: FROM INVERSE FREQUENCY LOSS TO RECALL LOSS", "text": "To motivate our proposed loss, we first analyze the standard cross entropy loss. Let {x_n, y_n}, ∀n ∈ {1, ..., N}, where x_n ∈ R^d and y_n ∈ {1, ..., C}, denote the set of training data and corresponding labels. Let P_n denote the predictive softmax distribution over all classes for input x_n, and let P_n^i denote the probability of the i-th class. The cross entropy loss used in multiclass classification is defined as:

CE = -\sum_{n=1}^{N} \log(P_n^{y_n}) = -\sum_{c=1}^{C} \sum_{n: y_n = c} \log(P_n^{y_n}) = -\sum_{c=1}^{C} N_c \log(\bar{P}^c)   (1)

where \bar{P}^c = (\prod_{n: y_n = c} P_n^{y_n})^{1/N_c} denotes the geometric mean confidence of class c and N_c denotes the number of samples in class c. As shown in Eq. 1, the conventional cross entropy optimizes the geometric mean confidence of each class weighted by the number of pixels in each class. When there is a significant class imbalance in the dataset, the loss function biases towards large classes as a result of larger N_c.
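To make the decomposition in Eq. 1 concrete, the following check (our own illustration in PyTorch, not code from the paper) verifies numerically that the summed cross entropy equals the per-class form on the right-hand side:

```python
# Numerical check of Eq. 1: total cross entropy equals the sum over classes
# of N_c times the negative log of the class-wise geometric-mean confidence.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, C = 32, 5
logits = torch.randn(N, C)
labels = torch.randint(0, C, (N,))

ce_total = F.cross_entropy(logits, labels, reduction="sum")

p_true = logits.softmax(dim=1)[torch.arange(N), labels]   # P_n^{y_n}
ce_per_class = 0.0
for c in range(C):
    mask = labels == c
    if mask.any():
        log_geo_mean = p_true[mask].log().mean()    # log of the geometric mean
        ce_per_class += -mask.sum() * log_geo_mean  # -N_c * log of class mean

assert torch.allclose(ce_total, ce_per_class)
```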
One commonly used loss for imbalanced datasets is the inverse frequency cross entropy loss (Eigen & Fergus, 2015; Badrinarayanan et al., 2017), which assigns more weight to the loss of minority classes. Let N denote the total number of pixels in the training set and N_c the number of pixels belonging to class c ∈ {1, ..., C}. The frequency of a class is calculated as freq(c) = N_c/N. We show that while the unweighted cross entropy loss optimizes the overall confidence, the loss weighted by inverse frequency optimizes the mean confidence: with inverse frequency weighting, the loss is rebalanced. Note that we leave out the N in freq(c) as it is shared by all classes.

InvCE = -\sum_{c=1}^{C} \frac{1}{freq(c)} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{1}{N_c} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \log(\bar{P}^c)   (2)

As shown in Eq. 2, the weighted loss optimizes the geometric mean of accuracy directly. However, the inverse frequency loss might not be optimal in practice because it over-weighs the minority classes and introduces excessive false positives, i.e., it sacrifices precision for recall. This problem is especially severe in semantic segmentation (Chan et al., 2019). Applying the inverse frequency loss to segmentation increases recall for each class, but the improvement comes at the cost of excessive false positives, especially for small classes.

While the inverse frequency loss solves the problem of imbalance, it focuses only on improving one aspect of the problem in classification, i.e., the recall of each class. To solve this issue, we propose to weigh the inverse frequency loss in Eq. 2 with the false negative (FN_c) count for each class. The first insight is that FN_c is bounded by the total number of samples in a class and zero, i.e.,

N_c ≥ FN_c ≥ 0   (3)

By weighting the inverse frequency cross entropy loss in Eq. 2 by the false negative count for each class, we obtain a moderate loss function that sits between the regular cross entropy loss and the inverse frequency loss. We want to note that the idea of finding a middle ground between these two loss functions has been explored in different forms. For example, the BBN (Zhou et al., 2020) method explicitly uses an adaptor function that controls the contribution of the two losses. However, an obvious drawback is that the adaptor function needs to be extensively searched based on empirical evidence and intuition.

RecallCE = -\sum_{c=1}^{C} FN_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{FN_c}{N_c} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{FN_c}{FN_c + TP_c} N_c \log(\bar{P}^c)   (4)

As Eq. 4 shows, the loss can be implemented as the regular cross entropy loss weighted by the class-wise false negative rate (FNR). The second insight is that minority classes are most likely more difficult to classify, with higher FNR, while large classes have smaller FNR. Therefore, similar to the inverse frequency loss, gradients of minority classes will be boosted and gradients of majority classes will be suppressed. However, unlike frequency weighting, the weighting will not be as extreme, as motivated in Eq. 3. In the next section, we will derive the final dynamic form and compare it to the other performance-balanced loss: the focal loss (Lin et al., 2017)." }, { "heading": "3.2 MOTIVATION: FROM FOCAL LOSS TO RECALL LOSS", "text": "The previous section proposed to weigh cross entropy with the false negative rate of each class. Unlike frequency and decision margins (Cao et al., 2019), which are characteristics of the dataset, FNR is a metric of a model's performance. As we continue to update the model's parameters, FNR changes. Therefore, the weights for each class change dynamically to reflect a model's instantaneous performance.
We rewrite Eq. 4 and introduce a subscript t to denote the time dependency:

RecallCE = -\sum_{c=1}^{C} \left(1 - \frac{TP_{c,t}}{FN_{c,t} + TP_{c,t}}\right) N_c \log(\bar{P}^{c,t}) = -\sum_{c=1}^{C} \sum_{n: y_n = c} (1 - R_{c,t}) \log(p_{n,t})   (5)

where R_{c,t} is the recall for class c at optimization step t, and n : y_n = c denotes all samples whose ground-truth label y_n is class c.

The other performance-balanced loss is the focal loss (Lin et al., 2017). It was developed originally for background-foreground imbalance in object detection. The loss function weighs the cross entropy loss of each sample by (1 − p)^γ, where p is the predicted probability/confidence. Intuitively, hard samples will have low confidence and therefore a high weight. It can be thought of as a hard-example mining loss. To see recall loss's resemblance to focal loss, we rewrite it slightly:

FocalCE = -\sum_{n=1}^{N} (1 - p_{n,t}^{y_n})^{\gamma} \log p_{n,t}^{y_n} = -\sum_{c=1}^{C} \sum_{n: y_n = c} (1 - p_{n,t})^{\gamma} \log p_{n,t}   (6)

where p_{n,t}^{y_n} is the predicted probability of class y_n for sample n at time t, and γ is a scalar hyperparameter.

Focal loss has been a very popular loss function for imbalance in detection. It is appealing because it dynamically adjusts the weight for each sample depending on the difficulty of the sample and model performance. However, the focal loss is not specifically effective against imbalanced classification problems, as reflected by the poor performance reported by many papers (Cao et al., 2019; Cui et al., 2019). By examining the similarity between Eq. 5 and Eq. 6, we argue that the proposed recall loss can be seen as a class-wise focal loss with γ = 1 and the per-class metric R_{c,t} replacing the per-sample probability p_{n,t}^{y_n}. The next section will discuss how to estimate the recall for each class." }, { "heading": "3.3 RECALL ESTIMATION", "text": "The recall loss is designed to reflect the instantaneous training performance of a model on the current input data. A straightforward way is to estimate the recall based on the current batch statistics, i.e., count true positives and false negatives for each class over an entire batch. This method provides a reliable estimation of the model's current performance if there is a sufficient number of samples for each class in the batch. Intuitively, for classification, batch recall is a good estimation if the number of classes is not much larger than the batch size. For semantic segmentation, batch recall is almost always reliable since each image contains hundreds of pixels for each class. For the subsequent segmentation experiments, we use the batch recall loss, where the batch recall is calculated as follows:

R_{c,t} = \frac{TP_{c,t}}{TP_{c,t} + FN_{c,t}}   (7)

For classification, estimating recall is problematic for a large number of classes. For example, the iNaturalist2018 dataset has 8,142 classes. For a batch size of 128, it is difficult to sample sufficient data for any class. To mitigate the problem, we use an Exponential Moving Average (EMA) to estimate the recall and calculate the EMA recall loss:

\tilde{R}_{c,t} = \alpha R_{c,t} + (1 - \alpha) \tilde{R}_{c,t-1}   (8)" }
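Putting Eqs. 5, 7, and 8 together, the loss admits a short implementation. The sketch below is our own illustration, not the authors' released code; the EMA buffer handling and the mean reduction are assumptions.

```python
# Sketch of the recall loss: cross entropy weighted per class by (1 - recall),
# where recall is tracked with an exponential moving average (Eqs. 5, 7, 8).
import torch
import torch.nn.functional as F

class RecallLoss(torch.nn.Module):
    def __init__(self, num_classes, alpha=0.1):
        super().__init__()
        self.alpha = alpha
        # EMA estimate of per-class recall, initialized optimistically to 1
        self.register_buffer("recall_ema", torch.ones(num_classes))

    def forward(self, logits, labels):
        # logits: (N, C); labels: (N,) -- flatten pixels first for segmentation
        preds = logits.argmax(dim=1)
        with torch.no_grad():
            C = logits.shape[1]
            tp = torch.zeros(C, device=logits.device)
            gt = torch.zeros(C, device=logits.device)
            for c in range(C):
                mask = labels == c
                gt[c] = mask.sum()
                tp[c] = (preds[mask] == c).sum()
            batch_recall = tp / gt.clamp(min=1)          # Eq. 7
            seen = gt > 0                                # update observed classes only
            self.recall_ema[seen] = (self.alpha * batch_recall[seen]
                                     + (1 - self.alpha) * self.recall_ema[seen])  # Eq. 8
        weights = 1.0 - self.recall_ema                  # (1 - R_{c,t}) in Eq. 5
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        return (weights[labels] * per_sample).mean()
```

For segmentation, logits of shape (N, C, H, W) can be permuted and flattened to (N·H·W, C) before calling the loss; with that many pixels per class, the batch recall of Eq. 7 can be used directly by setting α = 1.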
, { "heading": "3.4 ANALYSIS: RECALL, PRECISION, DICE, JACCARD AND TVERSKY", "text": "Why do we not use other metrics, such as F1, Dice, Jaccard, and Tversky, as the weights? Following previous convention, let G_c and P_c denote the sets of ground truth (positive) samples and predicted samples for class c. Let FP_c and TN_c denote the sets of false positive and true negative samples for class c, respectively; other terms are defined similarly. Recall is different from the other metrics in that it does not have false positives in its denominator, and this distinction makes it ideal for weighting the cross entropy loss (and the others not), as shown in Table 1. Referring back to Eq. 5, where the recall loss is defined as cross entropy weighted by 1 − R_c, replacing recall by any of the other metrics above would result in FP appearing in the numerator of the weights. For example, a hypothetical precision loss can be defined as follows:

PrecisionLoss = -\sum_{c=1}^{C} \sum_{n: y_n = c} \left( \frac{FP_{c,t}}{FP_{c,t} + TP_{c,t}} \right) \log(p_{n,t})   (9)

This formulation introduces unexpected behavior: a large false positive count in a class results in a large weight, which further encourages false detections for that class. This will cause the number of false positives to explode. From a different perspective, because the cross entropy loss always penalizes the ground truth samples i ∈ G_c = {i : y_i = c} for a class c, a proper weighting should be proportional to FN_c ⊆ G_c but not to FP_c ⊄ G_c, which does not belong to the set of ground truth samples. The same analysis applies to the other metrics involving false positives." }, { "heading": "3.5 RECALL LOSS AS A FEATURE LEARNING LOSS FOR IMBALANCED CLASSIFICATION", "text": "Recent works (Kang et al., 2020) on the imbalanced classification problem proposed to separate representation learning and classifier learning. Intuitively, the final classifier layer is negatively affected by highly imbalanced data distributions, while the convolutional neural network backbone is not as affected. In other words, representation learning benefits from all the data regardless of class membership. It has been shown in Kang et al. (2020) that weighted losses do not produce large improvements because they can negatively affect representation learning. While we need to be careful when introducing weighted losses to train the feature extractor, some previous works (Cao et al., 2019; Zhou et al., 2020) showed that it can still be beneficial to carefully “fine-tune” CNNs with balancing techniques towards the end of a training cycle, when the learning rate has been annealed. We propose to use recall loss as a feature learning loss to replace the standard cross entropy. We experimentally show that recall loss is better suited for representation learning because it accounts for imbalance in datasets while dynamically adjusting the weights so as not to bias towards any class. To apply recall loss to classification, we introduce a Simple Decoupled Network (SDN) to decouple representation and classifier learning (fig. 1(b),(c)).

Let f_θ denote the feature extractor and {g_θ, h_θ} denote two classifier heads on top of the feature extractor. Generally speaking, f_θ is parameterized by a CNN, and {g_θ, h_θ} are two separate fully connected networks. Similar to previous works (Kang et al., 2020; Zhou et al., 2020), we design a simple decoupled network with two classification heads and a shared feature backbone, as shown in fig. 1(b). More specifically, the loss from the head g_θ backpropagates to the feature extractor f_θ, while the loss from the head h_θ is stopped. The g_θ head is trained with the recall loss, and the h_θ head is trained with the CB loss (Cui et al., 2019). In other words, the recall loss trains the feature extractor while the CB loss does not. We only use the h_θ head in inference. Therefore, this simple modification during training does not introduce any additional change to inference. The proposed method simplifies BBN (Zhou et al., 2020) in two ways: 1) only one loss function affects the backbone, so there is no need for hand-tuning an adaptor function to control two losses; 2) we only use one head for inference and discard the other. This simple design proves to be effective in our experiments." }
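The decoupling in fig. 1(b) amounts to a detach on the features that feed the second head. A minimal sketch (our illustration; the layer shapes are placeholders, and the CB weights are assumed to be precomputed from class counts):

```python
# Sketch of the Simple Decoupled Network: head g trains the backbone with the
# recall loss; head h sees detached features, so its class-balanced loss
# never sends gradients into the feature extractor.
import torch.nn as nn
import torch.nn.functional as F

class SDN(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                 # e.g., a ResNet without its fc layer
        self.head_g = nn.Linear(feat_dim, num_classes)
        self.head_h = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        logits_g = self.head_g(feats)            # gradients flow into the backbone
        logits_h = self.head_h(feats.detach())   # gradients stop at the features
        return logits_g, logits_h

def sdn_loss(logits_g, logits_h, labels, recall_loss, cb_weights):
    loss_g = recall_loss(logits_g, labels)                         # backbone + g
    loss_h = F.cross_entropy(logits_h, labels, weight=cb_weights)  # h only
    return loss_g + loss_h

# Inference uses only head h:  _, logits_h = model(x); pred = logits_h.argmax(1)
```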
, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETTING", "text": "Datasets. We evaluate our recall loss on two popular large-scale outdoor semantic segmentation datasets, Synthia (Ros et al., 2016) and Cityscapes (Cordts et al., 2016). Synthia is a photorealistic synthetic dataset with different seasons, weather, and lighting conditions. Specifically, we use the Synthia-sequence Summer split for training (4400), validation (624), and testing (1272). Cityscapes consists of real photos of urban street scenes in several cities in Europe. It has 5000 annotated images for training and another 5000 for evaluation. We further show that recall loss is beneficial for feature learning in image classification on two large-scale long-tailed datasets: Place-LT (Liu et al., 2019) and iNaturalist2018 (Van Horn et al., 2018). Place-LT has 365 classes and a long-tailed class distribution. It is created by sampling from the original balanced dataset (Zhou et al., 2017) following a Pareto distribution. iNaturalist2018 is a long-tailed image collection of natural species with 8,142 classes. Please refer to Appendix A.2 for details on implementation.

Evaluation Metrics. We report mean accuracy and mean IOU for semantic segmentation experiments and overall accuracy for image classification, following previous works (Cao et al., 2019; Zhou et al., 2020; Liu et al., 2019; Kang et al., 2020) on these datasets. We note that both mean accuracy and mean IOU are important metrics for semantic segmentation. While a good mean IOU indicates a balanced performance of precision and recall, mean accuracy is an indicator of the detection rate of each class, which is important for safety-critical applications such as self-driving.

[Figure 3: qualitative comparison — panels: Input, Ground Truth, Cross-Entropy, Focal, Recall]" }, { "heading": "4.2 SEMANTIC SEGMENTATION EXPERIMENTS", "text": "For semantic segmentation experiments, we compare our method with two region-based losses, SoftIOU (Rahman & Wang, 2016) and Lovasz softmax (Berman et al., 2018). Both of these losses aim to minimize a soft/surrogate version of the IOU metric. As analyzed by Eelbode et al. (2020), the softDice loss (Sudre et al., 2017) and softTversky loss (Salehi et al., 2017) are similar to the two chosen losses, so comparing against these two is representative. We also compare with state-of-the-art losses for imbalanced image classification and detection, such as CB-CE (Cui et al., 2019), Focal loss (Lin et al., 2017), and the Online Hard Example Mining (OHEM) loss (Shrivastava et al., 2016). We keep the top 70% of samples in OHEM. While there are other techniques for imbalanced image classification, they require changes to learning schedules and architectures, and a direct adaptation of them to semantic segmentation is not trivial; we compare with them directly on the image classification datasets.

Synthia. We first present results on the synthetic Synthia segmentation dataset in table 2. SoftIOU, Lovasz, and CB-CE all improve mean accuracy compared to the baseline cross entropy loss. However, the improvement comes at the cost of lower mean IOU. Focal loss performs similarly to the standard cross entropy loss. 
This is consistent with our statement that the hard-example weighting in focal loss is not effective against multi-class imbalance. OHEM performs worse on both metrics. We think this is because OHEM completely discards 30% of the training samples at each iteration, which negatively affects feature learning. On the other hand, the proposed recall loss improves the mean accuracy metric and maintains a good mean IOU. This validates our analysis that the recall loss can balance between precision and recall. The same trend is observed for both the shallow backbone, resnet18, and the deep backbone, resnet101. However, we note that the effectiveness of the recall loss is more obvious on a less powerful feature backbone. We hypothesize that less powerful backbones are more likely to spend their limited representation capability on large classes and thus bias towards them. This is important because in many applications hardware limits the deployment of computation-heavy backbones, and we need to attend to small classes for safety while relying on shallow feature extractors. In fig. 2, we show per-class accuracy and IOU performance for three losses, including the recall loss. We observe that both CB-CE and recall loss can improve accuracy on small classes significantly. However, CB-CE deteriorates IOU for those classes significantly, while recall loss maintains competitive performance, because CB-CE uses a fixed weighting that always emphasizes small classes. This observation supports our claim that recall loss balances recall and precision because of its dynamic adaptability to performance. In fig. 2(c), we show the (1 − R_{c,t}) weights for the resnet18 Synthia experiment. We observe that the weight for small classes such as bike decreases over time, indicating that the performance of the bike class has increased. We further provide a visual comparison between a model trained with the cross entropy loss and the proposed recall loss in fig. 3. Our method provides more fine details on small classes, which are often suppressed in traditional cross entropy training.

Cityscapes. We further present results on the Cityscapes segmentation dataset with real images. As shown in table 3, softIOU and focal loss perform similarly to the standard cross entropy loss, while Lovasz, OHEM, and CB-CE are consistently worse on both resnet backbones. Compared to the performance on Synthia, this shows that some of the losses are not robust to the label noise present in a real dataset. Recall loss again outperforms the other losses by improving mean accuracy while maintaining a good mean IOU. In other words, recall loss improves the detection rate of small classes, such as pedestrians and light poles on the road, and maintains a good precision. This demonstrates the effectiveness of recall loss on both synthetic and real outdoor driving segmentation datasets." }, { "heading": "4.3 IMAGE CLASSIFICATION EXPERIMENTS", "text": "For classification experiments, we compare on two popular imbalanced datasets, Place-LT (Liu et al., 2019) and iNaturalist2018 (Van Horn et al., 2018). We compare to the methods that have achieved state-of-the-art performance on either dataset. We specifically introduce a baseline model, SDN-CE, which replaces recall loss with a standard cross entropy loss to train the feature backbone.

Place-LT. 
Following previous works (Kang et al., 2020; Liu et al., 2019), we compare to the Lifted loss (Oh Song et al., 2016), Focal loss (Lin et al., 2017), Range loss (Zhang et al., 2017), OLTR (Liu et al., 2019), tau-norm (Kang et al., 2020), and LWS (Kang et al., 2020). We note that Kang et al. (2020) experimented with many variants, and tau-norm is the best-performing one on this dataset. Table 5 shows that the recall loss outperforms the other losses, including SDN-CE. The result is twofold. First, the strong performance of the baseline SDN-CE agrees with the finding in Kang et al. (2020) that imbalance affects the classifier more than the backbone, and that a simple decoupling trick can outperform carefully designed losses. Second, the result validates our claim that the recall loss (SDN-recall) is a more suitable feature loss for imbalanced datasets when compared to SDN-CE. Note that we use the EMA version of the recall loss. Table 4(a) shows the results of SDN-recall with different α on a validation set. We can conclude that the sensitivity to α is low on the Place-LT dataset; specifically, the ratio of the number of classes to the batch size is 365:128 in this case.

iNaturalist2018. On iNaturalist we compare to LDAM-DRW (Cao et al., 2019), BBN (Zhou et al., 2020), tau-norm (Kang et al., 2020), and LWS (Kang et al., 2020). This dataset is the most challenging due to its extremely large number of classes. This presents a specific challenge to recall loss, since its effectiveness depends on a reliable estimation of the training recall for each class. Consequently, this motivated us to propose the exponential moving average update rule. Table 6 shows the results of SDN-recall, SDN-CE, and all compared methods. SDN-recall outperforms all other methods, including SDN-CE. It is worth mentioning that both LDAM-DRW and BBN proposed to fine-tune the feature extractor with a balanced loss, using a two-stage and a two-branch strategy, respectively. Recall loss trains a feature backbone in an end-to-end manner and outperforms other techniques that require careful hyperparameter tuning or modifications to the architecture. Table 4(b) shows the sensitivity to α on this dataset. As the number of classes (8,142) is much larger than the batch size (128), a small α is critical to provide a reliable recall estimation. We also present the training curves with different α in Appendix A.3. We observe that smaller α yields lower training loss." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a novel loss function based on the metric recall. The loss function uses a hard-class mining strategy to improve model performance on imbalanced datasets. Specifically, the recall loss weighs the examples in a class based on the class's instantaneous recall performance during training, and the weights change dynamically to reflect relative changes in performance among classes. Experimentally, we demonstrated several advantages of the loss: 1) Recall loss improves accuracy while maintaining a competitive IOU performance in semantic segmentation. Most notably, recall loss improves both accuracy and precision significantly in small networks, which possess limited representation power and are more prone to biased performance due to data imbalance. 2) Recall loss works on both synthetic and real data and is robust to the label noise present in real datasets. 3) The EMA version of recall loss is able to handle an extremely large number of classes and provides a stable improvement in representation learning. 
4) Recall loss facilitates representation learning in image classification. Using a simple decoupled training strategy and the recall loss, we outperform more complicated methods on common imbalance learning benchmarks." } ]
2,020
null
SP:1ccd6cfc6dce5a3f4b0c65dd1625f71ac3225c2d
[ "The paper studies the effect of padding on artefacts in CNN feature maps and performance on image classification and object detection. It convincingly makes the case that these artefacts have a significant detrimental effect on task performance, e.g. leading to blind spots / missed detections of small objects near the image border. It also studies the effect of uneven padding in downsampling layers, where the padding may only affect some sides of the image and not others, depending on the image size. A condition is presented for when this does / does not occur. The effect of different padding methods is also studied from the perspective of foveation by computing the number of paths from an input pixel to the output. A number of practical recommendations are given." ]
We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, this mechanism can apply the padding unevenly, leading to asymmetries in the learned weights. We demonstrate how such bias can be detrimental to certain tasks such as small object detection: the activation is suppressed if the stimulus lies in the impacted area, leading to blind spots and misdetection. We propose solutions to mitigate spatial bias and demonstrate how they can improve model accuracy.

1 MOTIVATION

Convolutional neural networks (CNNs) serve as feature extractors for a wide variety of machine-learning tasks. Little attention has been paid to the spatial distribution of activation in the feature maps a CNN computes. Our interest in analyzing this distribution is triggered by mysterious failure cases of a traffic light detector: the detector successfully detects a small but visible traffic light in a road scene. However, it fails completely in detecting the same traffic light in the next frame captured by the ego-vehicle. The major difference between both frames is a limited shift along the vertical dimension as the vehicle moves forward. Therefore, the drastic difference in object detection is surprising given that CNNs are often assumed to have a high degree of translation invariance [8; 17].

The spatial distribution of activation in feature maps varies with the input. Nevertheless, by closely examining this distribution for a large number of samples, we found consistent patterns among them, often in the form of artifacts that do not resemble any input features. This work aims to analyze the root cause of such artifacts and their impact on CNNs. We show that these artifacts are responsible for the mysterious failure cases mentioned earlier, as they can induce ‘blind spots’ for the object detection head. Our contributions are:

• Demonstrating how the padding mechanism can induce spatial bias in CNNs (Section 2).
• Demonstrating how spatial bias can impair downstream tasks (Section 3).
• Identifying uneven application of 0-padding as a resolvable source of bias (Section 5).
• Relating the padding mechanism with the foveation behavior of CNNs (Section 6).
• Providing recommendations to mitigate spatial bias and demonstrating how this can prevent blind spots and boost model accuracy.

2 THE EMERGENCE OF SPATIAL BIAS IN CNNS

Our aim is to determine to which extent activation magnitude in CNN feature maps is influenced by location. We demonstrate our analysis on a publicly-available traffic-light detection model [36]. This model implements the SSD architecture [26] in TensorFlow [1], using MobileNet-v1 [13] as a feature extractor. The model is trained on the BSTLD dataset [4], which annotates traffic lights in road scenes. Figure 1 shows two example scenes from the dataset. For each scene, we show two feature maps computed by two filters in the 11th convolutional layer. This layer contains 512 filters whose feature maps are used directly by the first box predictor in the SSD to detect small objects.

Figure 1: Averaging feature maps per input (column marginal) and per filter (row marginal) in the last convolutional layer of a traffic light detector. 
Color indicates activation strength (the brighter, the higher), revealing line artifacts in the maps. These artifacts are the manifestation of spatial bias.

The bottom row in Figure 1 shows the average response of each of the two aforementioned filters, computed over the test set in BSTLD. The first filter seems to respond mainly to features in the top half of the input, while the second filter responds mainly to street areas. There are visible lines in the two average maps that do not seem to resemble any scene features and are consistently present in the individual feature maps. We analyzed the prevalence of these line artifacts in the feature maps of all 512 filters. The right column in Figure 1 shows the average of these maps per scene, as well as over the entire test set (see supplemental for all 512 maps). The artifacts are largely visible in the average maps, with variations per scene depending on which individual maps are dominant.

A useful way to make the artifacts stand out is to neutralize scene features by computing the feature maps for a zero-valued input. Figure 2 depicts the resulting average map for each convolutional layer after applying ReLU units. The first average map is constant, as we expect with a 0-valued input. The second map is also constant except for a 1-pixel boundary where the value is lower at the left border and higher at the other three borders. We magnify the corners to make these deviations visible. The border deviations increase in thickness and in variance at subsequent layers, creating multiple line artifacts at each border. These artifacts become quite pronounced at ReLU 8, where they start to propagate inwards, resembling the ones in Figure 1.

Figure 2: Activation maps for a 0 input, averaged over each layer's filters (title format: H×W×C).

It is evident that the 1-pixel border variations in the second map are caused by the padding mechanism in use. This mechanism pads the output of the previous layer with a 1-pixel 0-valued border in order to maintain the size of the feature map after applying 3×3 convolutions. The maps in the first layer are not impacted because the input we feed is zero-valued. Subsequent layers, however, are increasingly impacted by the padding, as preceding bias terms do not warrant 0-valued input.

It is noticeable in Figure 2 that the artifacts caused by the padding differ across the four borders. To investigate this asymmetry, we analyze the convolutional kernels (often called filters) that produce the feature maps. Figure 3 depicts a per-layer mean of these 3×3 kernels. These mean kernels exhibit different degrees of asymmetry in the spatial distribution of their weights. For example, the kernels in L1 assign (on average) a negative weight at the left border and a positive weight at the bottom. This directly impacts the padding-induced variation at each border. Such asymmetries are related to uneven application of padding, as we explain in Section 5.

Figure 3: Mean kernel per convolutional layer. All kernels are 3×3; the titles show their counts.
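The zero-input probe behind Figure 2 takes only a few lines. The sketch below is our own illustration; it uses a pretrained VGG-16 from torchvision as a stand-in for the detector's MobileNet-v1 backbone (pretrained weights matter here, since zero-initialized biases would make the probe trivially blank):

```python
# Zero-input probe: feed an all-zero image and average each post-ReLU feature
# map over channels; any non-constant structure in the averages is induced by
# padding and bias terms rather than by image content.
import torch
import torchvision

model = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()

activations = {}
def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.mean(dim=1)[0]   # average over channels
    return fn

for i, layer in enumerate(model):
    if isinstance(layer, torch.nn.ReLU):
        layer.register_forward_hook(hook(f"relu_{i}"))

with torch.no_grad():
    model(torch.zeros(1, 3, 224, 224))

for name, fmap in activations.items():
    border = (fmap[0, :].mean() + fmap[-1, :].mean()) / 2   # top/bottom rows
    interior = fmap[1:-1, 1:-1].mean()
    print(f"{name}: border {border:.4f} vs interior {interior:.4f}")
```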
3 IMPLICATIONS OF SPATIAL BIAS

We demonstrate how feature-map artifacts can cause blind spots for the SSD model. Similar issues arise in several small-object detectors, e.g., for faces and masks, as well as in pixel-oriented tasks such as semantic segmentation and image inpainting (see supplemental for examples). Figure 4 illustrates how the SSD predicts small objects based on the feature maps of the 11th convolutional layer. The SSD uses the pixel positions in these maps as anchors of object proposals. Each proposal is scored by the SSD to represent a target category, with “background” being an implicit category that is crucial to exclude irrelevant parts of the input. In addition to these scores, the SSD computes a bounding box to localize the predicted object at each anchor. We examine object proposals computed at 1:2 aspect ratio, as they resemble the shape of most traffic lights in the dataset.

Figure 4: The formation of blind spots in SSD, illustrated via its box predictor internals with a zero-valued input. The predictor uses spatial anchors to detect and localize the target object at 45 × 80 possible locations based on 512 feature maps. Certain anchors are predisposed to predict background due to feature-map artifacts, as evident in the logit maps. Traffic lights at the corresponding location cannot be detected, as demonstrated with a real scene (middle one in the bottom).

Figure 5: (a) A map showing via color the detection score the SSD computes for a traffic light when present at various locations. The detection is muted when the stimulus lies in the area impacted by the artifacts. (b) The same map after changing the padding method to SYMMETRIC. The detection scores are rather constant except for periodic variations due to the SSD's reliance on anchors.

We visualize the resulting score maps both for the background category and for traffic lights when feeding a 0-valued input to the SSD. We also visualize the bounding boxes of these proposals in the image space. The SSD predicts the image content to be of background category at all anchor locations, as evident from the value range in both score maps. Such predictions are expected with an input that contains no traffic lights. However, the line artifacts in the feature maps have a strong impact on the score maps. These artifacts elevate the likelihood of anchors closer to the top to be classified as background (see the yellow band in the background score map). Conversely, these anchors have significantly lower scores for the traffic light category compared with other anchors in the feature map. Such difference in the impact on the target categories is due to the different weights the SSD assigns to the feature maps for each target. As a result, the artifacts lead to potential blind spots in which the scores for certain categories are artificially muted.

To validate whether or not the blind spots hinder object detection, we examine road scenes that contain highly-visible traffic light instances in the impacted area. Figure 4-bottom shows an example of such a scene. The SSD computes a low detection score of 7% when the traffic light lies in the blind spot (see middle image), far below the detection false-positive cutoff. Shifting the scene image upwards or downwards makes the instance detectable with a high score as long as it lies outside the blind spot. This explains the failure cases mentioned in Section 1. To further validate this effect, we run the SSD on baseline images, each of which contains one traffic light instance at a specific location in the input. We store the detection score for each instance. Figure 5a depicts the computed scores in a 2D map. It is evident that the model fails to detect the traffic light instance exactly when it is located within the “blind spot” band.
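The probe behind Figure 5a can be reproduced generically: paste a fixed stimulus at each grid location on a neutral background and record the model's detection score there. A minimal sketch, assuming a `detect` callable that returns the target-class score (the function names are placeholders, not the SSD pipeline's API):

```python
# Sliding-stimulus probe: paste a small stimulus (e.g., a traffic-light crop)
# at every grid position and record the detection score; location-dependent
# blind spots then show up as low-score bands in the resulting map.
import numpy as np

def probe_blind_spots(detect, stimulus, background, stride=8):
    H, W, _ = background.shape
    h, w, _ = stimulus.shape
    ys = range(0, H - h, stride)
    xs = range(0, W - w, stride)
    score_map = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            img = background.copy()
            img[y:y + h, x:x + w] = stimulus     # place stimulus at (y, x)
            score_map[i, j] = detect(img)        # target-class detection score
    return score_map                             # visualize as in Figure 5a
```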
The artifacts further disrupt the localization of the objects, as evident in the top-right plot in Figure 4, which shows per-anchor object proposals computed for a 0 input.

4 REMINDER: WHY IS PADDING NEEDED IN CNNS?

Padding is applied at most convolutional layers in CNNs to serve two fundamental purposes:

Maintaining feature map size. A padding that satisfies this property is often described as SAME or HALF padding. FULL padding expands the maps by kernel size − 1 along each dimension. VALID padding performs no padding, eroding the maps by the same amount. SAME padding is important to (1) design deep networks that can handle arbitrary input size (a challenge in the presence of gradual erosion), (2) maintain the aspect ratio of non-square input, and (3) concatenate feature maps from different layers as in Inception [39] and ResNet [12] models.

Reducing information bias against the boundary. Consider a 3×3 kernel applied to a 2D input. An input location at least 2 pixels away from the boundary contributes to nine local convolution operations when computing the feature map. The corner, on the other hand, is involved only once under VALID padding, four times under a 1-pixel SAME 0-padding, and nine times under a 2-pixel FULL 0-padding. With SAME 0-padding, the cumulative contribution differences among the input pixels grow exponentially over the CNN layers. We refer to such uneven treatment of input pixels as the foveation behavior of the padding mechanism and elaborate on this in Section 6. We next explore solutions to the issues that cause padding to induce spatial bias.

Figure 6: (a) Illustrating the problem of uneven padding when down-sampling at a stride of 2. The padding along the x-axis is consumed only at the left side. (b) Mean 3×3 filters in three ResNet models, trained on ImageNet with two input sizes. Color encodes average weight (green is positive). A size that induces uneven padding (top row) can lead to asymmetries, especially around down-sampling layers. These asymmetries are mitigated when the input size induces no uneven padding (bottom row).

5 ELIMINATING UNEVEN APPLICATION OF PADDING

While useful to reduce bias against the boundary, applying padding at down-sampling layers can lead to asymmetry in CNN internals. Figure 6a illustrates the source of this asymmetry when strided convolution is used for downsampling: at one side of the feature map, the padding is consumed by the kernel, while at the other side it is not. To warrant even application of padding throughout the CNN, the following must hold at all d down-sampling layers, where (h_i, w_i) is the output shape at the i-th layer, with k_i^h × k_i^w as kernel size, (s_i^h, s_i^w) as strides, and (p_i^h, p_i^w) as padding amount (refer to appendix A for a proof):

∀i ∈ {1, ..., d}: h_{i−1} = s_i^h · (h_i − 1) + k_i^h − 2·p_i^h  ∧  w_{i−1} = s_i^w · (w_i − 1) + k_i^w − 2·p_i^w   (1)

The values h_0 and w_0 represent the CNN input dimensions. The above constraints are not always satisfied during training or inference with arbitrary input dimensions. For example, ImageNet classifiers based on ResNet [12] and MobileNet [13] contain five down-sampling layers (d = 5) that apply 1-pixel 0-padding before performing 2-strided convolution. To avoid uneven application of padding, the input to these CNNs must satisfy the following, as explained in appendix A:

h_0 = a_1·2^5 + 1 = 32·a_1 + 1  and  w_0 = a_2·2^5 + 1 = 32·a_2 + 1, where a_1, a_2 ∈ ℕ   (2)
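Eq. 1 is easy to check programmatically. The sketch below (our own illustration) verifies, for a list of down-sampling stages, whether a given input extent keeps the padding evenly applied:

```python
# Check of Eq. 1: a size is "even" if, at every down-sampling stage, the
# input extent satisfies h_{i-1} = s*(h_i - 1) + k - 2p exactly, i.e., no
# leftover rows/columns whose padding is consumed on one side only.
def padding_applied_evenly(size, stages):
    for k, s, p in stages:                       # (kernel, stride, padding)
        out = (size + 2 * p - k) // s + 1        # standard conv arithmetic
        if size != s * (out - 1) + k - 2 * p:    # Eq. 1 violated at this stage
            return False
        size = out
    return True

# Five stride-2 stages with 3x3 kernels and 1-pixel padding (d = 5), which
# follow the same arithmetic as the down-sampling layers in ResNet/MobileNet:
stages = [(3, 2, 1)] * 5
print(padding_applied_evenly(224, stages))   # False -> uneven padding
print(padding_applied_evenly(225, stages))   # True: 225 = 32*7 + 1 (Eq. 2)
```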
The traditional¹ and prevalent input size for training ImageNet models is 224×224. This size violates Eq. 2, leading to uneven padding at every down-sampling layer in ResNet and MobileNet models, where 0-padding is effectively applied only at the left and top sides of the layer input. This over-represents zeros at the top and left sides of the 3×3 feature-map patches the filters are convolved with during training. The top row of Figure 6b shows per-layer mean filters in three ResNet models in PyTorch [33], pre-trained on ImageNet with 224×224 images. In all of these models, a few of the mean filters, adjacent to down-sampling layers, exhibit stark asymmetry about their centers. We increase the image size to 225×225 without introducing additional image information². This size satisfies Eq. 2, warranting even application of padding at every downsampling layer in the above models. Retraining the models with this size strongly reduces the asymmetry, as evident in the bottom row of Figure 6b. This, in turn, visibly boosts the accuracy of all models we experimented with, as we report in Table 1. The accuracy did not improve further when we retrained two of the models, ResNet-18 and ResNet-34, on 226×226 images. This provides evidence that the boost is due to eliminating uneven padding and not merely due to increasing the input size.

¹ This size has been used to facilitate model comparison on ImageNet since the inception of AlexNet.
² This is done via constant padding. The side to pad with one pixel is chosen at random to balance out the application of padding at both sides over the training set. No additional padding is applied at further layers.

Replacing 0-padding with a padding method that reuses feature-map values can alleviate the asymmetry in the learned filters in the presence of unevenly applied padding. Another possibility is to use a rigid downsampling kernel, such as max-pooling, instead of a learned one. Appendix C demonstrates both possibilities. Finally, antialiasing before downsampling [43] can strongly reduce the asymmetry, as we elaborate in Section 8 and in Appendix E.

Table 1: Top-1 (and top-5) accuracy of five ImageNet classifiers trained with different input sizes.

Input Size | MobileNet     | ResNet-18     | ResNet-34     | ResNet-50     | ResNet-101
224×224    | 68.19 (88.44) | 69.93 (89.22) | 73.30 (91.42) | 75.65 (92.47) | 77.37 (93.56)
225×225    | 68.80 (88.78) | 70.27 (89.52) | 73.72 (91.58) | 76.01 (92.90) | 77.67 (93.81)

Even when no padding is applied (p_i^h = 0 or p_i^w = 0), an input size that does not satisfy Eq. 1 can lead to uneven erosion of the feature maps, in turn reducing the contribution of pixels from the impacted sides (Fig. 7e). Satisfying Eq. 1 imposes a restriction on the input size, e.g., to values in increments of 2^d = 32 with the above models (193×193, 225×225, 257×257, ...). Depending on the application domain, this can be guaranteed either by resizing an input to the closest increment or by padding it accordingly with suited values.

6 PADDING MECHANISM AND FOVEATION

By foveation we mean the unequal involvement of input pixels in convolutional operations throughout the CNN. Padding plays a fundamental role in the foveation behavior of CNNs. We visualize this behavior by means of a foveation map that counts, for each input pixel, the number of convolutional paths through which it can propagate information to the CNN output. We obtain these counts by computing the effective receptive field [28] for the sum of the final convolutional layer after assigning all weights in the network to 1 (code in supplemental). Neutralizing the weights is essential to obtain per-pixel counts of input-output paths that reflect the foveation behavior.
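Below is a minimal sketch of this computation. The toy stack of 3×3 convolutions is a simplified stand-in for the actual VGG/ResNet feature extractors (the supplemental scripts are the authoritative version); double precision keeps the large path counts exact.

```python
import torch
import torch.nn as nn

def foveation_map(num_layers=16, size=128, padding_mode='zeros'):
    """Per-pixel count of input-output convolutional paths for a toy stack
    of 3x3 convolutions. With all weights set to 1, the gradient of the
    summed output w.r.t. the input equals exactly this path count."""
    net = nn.Sequential(*[
        nn.Conv2d(1, 1, kernel_size=3, padding=1,
                  padding_mode=padding_mode, bias=False)
        for _ in range(num_layers)]).double()
    for conv in net:
        nn.init.constant_(conv.weight, 1.0)   # neutralize learned weights

    x = torch.ones(1, 1, size, size, dtype=torch.float64, requires_grad=True)
    net(x).sum().backward()
    return x.grad[0, 0]                        # 2D map; plot with imshow

zero_pad = foveation_map(padding_mode='zeros')     # non-uniform periphery
circular = foveation_map(padding_mode='circular')  # uniform map
```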
Figure 7: Foveation behavior of different padding methods applied to VGG-19 [37], illustrated in a 512×512 input space (unless otherwise stated). Color represents the number of paths to the output for each input pixel. (a) The difference between VALID, FULL, and SAME 0-padding. (b) SAME alternatives to 0-padding. (c) Dilation amplifies the foveation of SAME 0-padding. (d) Strides can lead to checkerboard patterns. (e) Foveation effects are more extensive in smaller inputs (relative to the input size) and are sensitive to uneven padding.

Figure 7a shows the extensive foveation effect when no padding is applied. The diminishing contribution of vast areas of the input explains the drastic drop in accuracy recently observed under VALID padding [16]. In contrast, FULL 0-padding does not incur foveation, albeit at the cost of increasing the output size after each layer, which makes it impractical as explained in Section 4. SAME 0-padding incurs moderate foveation at the periphery, whose absolute extent depends on the number of convolutional layers and their filter sizes. Its relative extent depends on the input size: the larger the input, the larger the ratio of the constant area in yellow (refer to Appendix B for a detailed example).

Figure 7b shows the foveation behavior of alternatives to SAME 0-padding that have roots in wavelet analysis [19] and image processing [27]. Mirror padding mirrors pixels at the boundary to fill the padding area. When the border is included (SYMMETRIC mode in TensorFlow), all input pixels have an equal number of input-output paths³, resulting in a uniform foveation map. When the border is not included (REFLECT mode both in PyTorch and in TensorFlow), the map exhibits bias against the border and towards a contour in its proximity. This bias is amplified over multiple layers. Replication padding exhibits the opposite bias when the padding area is wider than 1 pixel, because it replicates the outer 1-pixel border multiple times to fill this area³. The method is equivalent to SYMMETRIC if the padding area is 1 pixel wide. Circular padding wraps opposing borders, enabling the kernels to seamlessly operate on the boundary and resulting in a uniform map. Partial Convolution [22] has been proposed as a padding method that treats pixels outside the original image as missing values and rescales the computed convolutions accordingly [23]. Its foveation behavior resembles reflective padding³. Distribution padding [30] resizes the input to fill the padding area around the original feature map, aiming at preserving the distribution of the map. Its foveation map is largely uniform, except for the corners and edges.
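Most of these schemes are available as built-in padding modes: PyTorch's `F.pad` offers constant, reflect, replicate, and circular modes, while TensorFlow's `tf.pad` additionally provides SYMMETRIC. A tiny sketch contrasting them, and illustrating that 1-pixel replication padding coincides with SYMMETRIC:

```python
import torch
import torch.nn.functional as F

x = torch.arange(1., 10.).reshape(1, 1, 3, 3)  # a tiny 3x3 "feature map"

# 1-pixel padding under the schemes discussed above (PyTorch naming):
zeros     = F.pad(x, (1, 1, 1, 1), mode='constant', value=0.)
reflect   = F.pad(x, (1, 1, 1, 1), mode='reflect')    # border not included
replicate = F.pad(x, (1, 1, 1, 1), mode='replicate')  # = SYMMETRIC for 1 pixel
circular  = F.pad(x, (1, 1, 1, 1), mode='circular')   # wraps opposing borders

print(replicate[0, 0])
# tensor([[1., 1., 2., 3., 3.],
#         [1., 1., 2., 3., 3.],
#         [4., 4., 5., 6., 6.],
#         [7., 7., 8., 9., 9.],
#         [7., 7., 8., 9., 9.]])
```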
Impact of input size
Besides influencing the relative extent of the foveation effects, the input size also determines the presence of uneven padding (or uneven feature-map erosion), as we discussed in Section 5. Figure 7e shows the foveation map for VGG-19 with a 127×127 input. This input violates Eq. 1 at every downsampling layer (Appendix A), leading to successive feature-map erosion at the bottom and right sides, which is reflected in the foveation map (see Appendix B for a detailed example). The bottom-right part of the input is hence less involved in the CNN computations.

Impact of dilation
We assign a dilation factor of 2 to all VGG-19 convolutional layers. While this exponentially increases the receptive field of the neurons at deeper layers [42], dilation doubles the extent of the non-uniform peripheral areas that emerge with SAME 0-padding, as evident in Figure 7c. SYMMETRIC and circular padding maintain uniform foveation maps regardless of dilation³. In contrast, dilation increases the complexity of these maps for REFLECT and replication padding.

Impact of strides
Whether learned or based on pooling, downsampling layers can amplify the impact of succeeding convolutional layers on the foveation behaviour. Furthermore, these layers can cause input pixels to vary in the count of their input-output paths. This can happen when the kernel size is not divisible by the stride, leading to a checkerboard pattern in the foveation maps. This manifests in ResNet models, as we illustrate in Appendix B. In VGG-19, all max-pooling layers use a stride of 2 and a kernel size of 2. Changing the kernel size to 3 leads to a checkerboard pattern, as evident in Figure 7d. Such effects were shown to impact pixel-oriented tasks [32].

The padding technique and its foveation behaviour have a direct impact on feature-map artifacts (Section 7) and on the ability of CNNs to encode spatial information (Section 8). Understanding the foveation behavior is key to determining how suited a padding method is for a given task. For example, small object detection is known to be challenging close to the boundary [26], in part due to the foveation behavior of SAME 0-padding. In Figure 5b, we change the padding method in the SSD to SYMMETRIC. The stimulus is noticeably more detectable at the boundary, compared with 0-padding⁴. In contrast, ImageNet classification is less sensitive to foveation effects because the target objects are mostly located away from the periphery. Nevertheless, the padding method was shown to impact classification accuracy [23] because it still affects feature-map artifacts.

³ Refer to Appendix F or to http://mind-the-pad.github.io for visual illustration and further theoretical analysis of the foveation behavior.
⁴ Since the input size causes uneven application of padding, the right and bottom borders are still challenging.

7 PADDING METHODS AND FEATURE MAP ARTIFACTS

It is also noticeable that the score map in Figure 5b is more uniform than in Figure 5a. In particular, under SYMMETRIC padding the model is able to detect traffic lights placed in the blind spots of the original 0-padded model. To verify whether the line artifacts in Figure 2 are mitigated, we inspect the mean feature maps of the adapted model. With a constant input, SYMMETRIC padding warrants constant maps throughout the CNN because it reuses the border to fill the padding area. Instead, we average these maps over 30 samples generated uniformly at random. Figure 8 depicts the mean maps, which are largely uniform, unlike the case with 0-padding.

Figure 8: The same feature maps as in Figure 2, generated under mirror padding and averaged over 30 randomly-generated input samples. The line artifacts induced by 0-padding are largely mitigated.
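This averaging is straightforward to reproduce with a forward hook. Below is a minimal sketch; the model, the probed layer, and the input size are placeholders to be filled in for the network under inspection.

```python
import torch

def mean_feature_maps(model, layer, num_samples=30, size=(3, 224, 224)):
    """Average a layer's feature maps over uniformly random inputs to expose
    input-independent artifacts (cf. Figure 8). `model`, `layer`, and `size`
    are placeholders for the network under inspection."""
    maps = []
    handle = layer.register_forward_hook(
        lambda module, inp, out: maps.append(out.detach()))
    model.eval()
    with torch.no_grad():
        for _ in range(num_samples):
            model(torch.rand(1, *size))   # uniform random input in [0, 1)
    handle.remove()
    # one 2D map: mean over samples and channels
    return torch.cat(maps).mean(dim=(0, 1))

# e.g., for a torchvision VGG-19: mean_feature_maps(vgg, vgg.features[10])
```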
To further analyze the impact of SYMMETRIC padding, we retrain the adapted model following the original training protocol. This significantly improves the average precision (AP), as reported in Table 2 under different overlap thresholds (matching IoU), confirming that small object detection is particularly sensitive to feature-map artifacts.

Table 2: Performance of the SSD traffic-light detector, trained under two different padding schemes.

Average Precision (AP) | AP@.20IOU | AP@.50IOU | AP@.75IOU | AP@.90IOU
Zero Padding           | 80.24%    | 49.58%    | 3.7%      | 0.007%
Mirror Padding         | 83.20%    | 57%       | 8.44%     | 0.02%

Of the padding methods listed in Section 6, mirror padding in both SYMMETRIC and REFLECT modes, PartialConv, and circular padding are generally effective at reducing the feature-map artifacts that emerge under zero padding, in particular the salient line patterns. In contrast, distribution padding can induce significant artifacts. Refer to Appendix D for comparative examples of artifacts under the aforementioned padding schemes.

Artifact magnitude and propagation
While feature-map artifacts are induced by the padding mechanism at the boundary, their magnitude and inward propagation in the maps are impacted by several architectural aspects of CNNs. In particular, certain normalization schemes such as batch normalization [15] tend to limit the range of variation within a feature map and to relatively harmonize this range across different maps. This, in turn, impacts how possible artifacts in these maps accumulate when they are processed by the next convolutional layer. Similarly, artifacts that manifest after applying ReLU units are of a positive sign. These factors were instrumental in the formation of the potential blind spots described in Section 3. We hence recommend involving non-convolutional layers when inspecting the feature maps. Besides their possible impact on artifact magnitude, several aspects of convolution arithmetic, such as filter size and dilation factors, can also impact the spatial propagation of these artifacts.

8 RELATED FINDINGS AND TAKEAWAYS

Handling the boundary is an inherent challenge when dealing with spatial data [9]. Mean padding is known to cause visual artifacts in traditional image processing, with alternative methods proposed to mitigate them [24]. CNNs have often been assumed to deal with such effects implicitly. Innamorati et al. [14] propose learning separate sets of filters dedicated to the boundaries to avoid impacting the weights learned by regular filters. A grouped padding strategy, proposed to support 2×2 filters [41], offers avenues to mitigate uneven padding and the corresponding skewness in foveation maps without restrictions on the input size (see our note in Appendix B for an explanation). Finally, insights from signal and image processing [10; 11] could inspire further CNN padding schemes.

Zero padding has recently been linked to CNNs' ability to encode position information [7; 16; 18; 29]. In contrast, circular padding was shown to limit this ability [7] and to boost shift invariance [35]. The input sizes in those studies do induce uneven padding, which can be, in part, the underlying mechanism behind the aforementioned ability. Whether or not this ability is desirable depends on the task, with several methods proposed to explicitly encode spatial information [5; 6; 20; 25; 29; 31].

Downsampling using max-pooling or strided convolution has been shown to impact shift invariance in CNNs by incurring aliasing effects [3; 38; 43]. These effects can manifest in the same symptoms we reported in Section 1, albeit for a different reason. Zhang [43] demonstrated how blurring the feature maps before subsampling mitigates aliasing effects and improves the ImageNet classification accuracy of various popular CNNs.
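For concreteness, here is a simplified sketch of such antialiased downsampling, in the spirit of the BlurPool operator (Zhang's reference implementation offers more kernel choices and details):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Antialiased downsampling in the spirit of Zhang [43]: low-pass filter
    the maps with a fixed binomial kernel, then subsample. A simplified
    sketch, not the reference implementation."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1., 2., 1.])
        kernel = (k[:, None] * k[None, :]) / 16.0      # normalized 3x3 blur
        self.register_buffer('kernel',
                             kernel.expand(channels, 1, 3, 3).clone())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        # depthwise blur; the stride performs the actual subsampling
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=1, groups=self.channels)

# usage: turn a stride-2 conv into a stride-1 conv followed by BlurPool2d
down = nn.Sequential(nn.Conv2d(64, 128, 3, stride=1, padding=1), BlurPool2d(128))
```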
We analyzed the mean filters in antialiased MobileNet and ResNet models pre-trained on ImageNet under 0-padding, with 224×224 as the input size (refer to Appendix E). We found that antialiasing can also mitigate the asymmetry of mean filters that exhibited high asymmetry in the baseline models, especially at deeper layers. This is remarkable given that these models are trained on 224×224 images, which incurs one-sided zero padding at every downsampling layer. It could, in part, be attributed to the ability of the BlurPool operator used in antialiased CNNs to smoothen the acuity of zero-padded borders, in turn reducing the value imbalance incurred by one-sided padding. Further analysis is needed to examine the interaction between padding and aliasing effects in CNNs and to establish possible synergy between antialiasing and eliminating uneven application of padding.

Luo et al. [28] drew connections between effective receptive fields and foveated vision. Our analysis links the foveation behavior with the padding scheme and suggests that it might occur implicitly in CNNs when using VALID or SAME 0-padding, without the need for explicit mechanisms [2; 21]. Furthermore, it explains the drastic accuracy drop noted by [16] under VALID padding, which is amplified by feature-map erosion.

Choosing a padding method
SAME 0-padding is by far the most widely used method. Compared with other methods, it can enable as much as 50% faster training and inference. Problem-specific constraints can dictate different choices [34; 35; 40]. In the absence of a universally superior padding method, we recommend considering multiple ones while paying attention to the nature of the data and the task, as well as to the following aspects (a sketch of switching the padding mode in practice follows this list):

• Feature-map statistics: 0-padding can alter the value distribution within the feature maps and can shift their mean value in the presence of ReLU units. The alternatives presented in Section 6 tend to preserve this distribution, thanks to reusing existing values in the maps.

• Foveation behavior: 0-padding might not be suited for tasks that require high precision at the periphery, unlike circular and SYMMETRIC mirror padding.

• Interference with image semantics (esp. with a padding amount > 1 pixel): For example, circular padding could introduce border discontinuities unless the input is panoramic [35].

• Potential to induce feature-map artifacts: All alternatives to 0-padding induce relatively fewer artifacts, except for distribution padding [30] (see Appendix D).

We also recommend eliminating uneven padding at downsampling layers, both at training and at inference time, as we illustrated in Section 5. This is especially important when zero padding is applied and the downsampling is learned. The scripts used to generate the visualizations in this paper are available in the supplemental as well as at http://mind-the-pad.github.io.
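As a practical note on switching schemes: PyTorch's `nn.Conv2d` exposes a `padding_mode` argument ('zeros', 'reflect', 'replicate', 'circular') that recent versions consult at forward time, so existing convolutions can be switched in place before retraining. The SSD adapted in Section 7 is a TensorFlow model, so the sketch below is an illustration rather than the exact procedure we used:

```python
import torch.nn as nn

def set_padding_mode(model, mode='replicate'):
    """Switch the padding scheme of every convolution in `model`.
    With 1-pixel padding, 'replicate' coincides with SYMMETRIC mirror
    padding; older PyTorch versions may require rebuilding the layers
    instead of mutating the attribute."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            m.padding_mode = mode
    return model
```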
Summary
We demonstrated how the padding mechanism can induce spatial bias in CNNs, in the form of skewed kernels and feature-map artifacts. These artifacts can be highly pronounced with the widely used 0-padding when applied unevenly at the four sides of the feature maps. We demonstrated how such uneven padding can inherently take place in state-of-the-art CNNs, and how the artifacts it causes can be detrimental to certain tasks such as small object detection. We provided visualization methods to expose these artifacts and to analyze the implications of various padding schemes on boundary pixels. We further proposed solutions to eliminate uneven padding and to mitigate spatial bias in CNNs. Further work is needed to closely examine the implications of spatial bias and foveation in various applications (see supplementary for examples), as well as the impact of padding on recurrent models and 1-D CNNs.

ACKNOWLEDGEMENT
We are thankful to Ross Girshick for providing useful recommendations and experiment ideas, and to Shubham Muttepawar for implementing an interactive tool out of our analysis scripts, guided by our front-end specialist Edward Wang and our AI user-experience designer Sara Zhang.

REFERENCES
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] E. Akbas and M. P. Eckstein. Object detection through search with a foveated visual system. PLoS Computational Biology, 13(10):e1005743, 2017.
[3] A. Azulay and Y. Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research (JMLR), 20(184):1–25, 2019.
[4] K. Behrendt, L. Novak, and R. Botros. A deep learning approach to traffic lights: Detection, tracking, and classification. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1370–1377. IEEE, 2017.
[5] C.-A. Brust, S. Sickert, M. Simon, E. Rodner, and J. Denzler. Convolutional patch networks with spatial prior for road detection and urban scene understanding. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), 2015.
[6] G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning (ICML), 2020.
[7] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller. Inverting gradients – how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.
[8] R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2537–2545, 2014.
[9] D. Griffith and C. Amrhein. An evaluation of correction techniques for boundary effects in spatial statistical analysis: traditional methods. Geographical Analysis, 15(4):352–360, 1983.
[10] V. Gupta and N. Ramani. A note on convolution and padding for two-dimensional data. Geophysical Prospecting, 26(1):214–217, 1978.
[11] L. Hamey. A functional approach to border handling in image processing. In International Conference on Digital Image Computing: Techniques and Applications, pp. 1–8, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
[13] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[14] C. Innamorati, T. Ritschel, T. Weyrich, and N. J. Mitra. Learning on the edge: Investigating boundary filters in CNNs. International Journal of Computer Vision (IJCV), pp. 1–10, 2019.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456, 2015.
[16] M. A. Islam, S. Jia, and N. D. Bruce. How much position information do convolutional neural networks encode? In International Conference on Learning Representations (ICLR), 2020.
[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2017–2025, 2015.
[18] O. S. Kayhan and J. C. van Gemert. On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[19] T. L. Kijewski-Correa. Full-scale measurements and system identification: A time-frequency perspective. PhD thesis, University of Notre Dame, 2003.
[20] I. Kim, W. Baek, and S. Kim. Spatially attentive output layer for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[21] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1243–1251, 2010.
[22] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In European Conference on Computer Vision, 2018.
[23] G. Liu, K. J. Shih, T.-C. Wang, F. A. Reda, K. Sapra, Z. Yu, A. Tao, and B. Catanzaro. Partial convolution based padding. arXiv preprint arXiv:1811.11718, 2018.
[24] R. Liu and J. Jia. Reducing boundary artifacts in image deconvolution. In IEEE International Conference on Image Processing (ICIP), pp. 505–508, 2008.
[25] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018.
[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pp. 21–37, 2016.
[27] S. Lou, X. Jiang, and P. J. Scott. Fast algorithm for morphological filters. Journal of Physics: Conference Series, 311(1):012001, 2011.
[28] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4898–4906, 2016.
[29] R. Murase, M. Suganuma, and T. Okatani. How can CNNs use image position for segmentation? arXiv preprint arXiv:2005.03463, 2020.
[30] A.-D. Nguyen, S. Choi, W. Kim, S. Ahn, J. Kim, and S. Lee. Distribution padding in convolutional neural networks. In IEEE International Conference on Image Processing (ICIP), pp. 4275–4279, 2019.
[31] D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi. Semi-convolutional operators for instance segmentation. In European Conference on Computer Vision (ECCV), pp. 86–102, 2018.
[32] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.
[33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8024–8035, 2019.
[34] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In European Conference on Computer Vision (ECCV), pp. 75–91, 2016.
[35] S. Schubert, P. Neubert, J. Pöschmann, and P. Pretzel. Circular convolutional neural networks for panoramic images and laser data. In IEEE Intelligent Vehicles Symposium (IV), pp. 653–660, 2019.
[36] E. Shalnov. BSTLD-demo: A sample project to train and evaluate model on BSTLD. https://github.com/e-sha/BSTLD_demo, 2019.
[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
[38] G. Sundaramoorthi and T. E. Wang. Translation insensitive CNNs. arXiv preprint arXiv:1911.11238, 2019.
[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, 2015.
[40] S. Vashishth, S. Sanyal, V. Nitin, N. Agrawal, and P. Talukdar. InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In AAAI Conference on Artificial Intelligence, 2020.
[41] S. Wu, G. Wang, P. Tang, F. Chen, and L. Shi. Convolution with even-sized kernels and symmetric padding. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1192–1203, 2019.
[42] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR), 2016.
[43] R. Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning (ICML), 2019.
[ { "affiliations": [], "name": "BLIND SPOTS" }, { "affiliations": [], "name": "Bilal Alsallakh" }, { "affiliations": [], "name": "Narine Kokhlikyan" }, { "affiliations": [], "name": "Vivek Miglani" }, { "affiliations": [], "name": "Jun Yuan" } ]
[ { "authors": [ "M. Abadi", "A. Agarwal", "P. Barham", "E. Brevdo", "Z. Chen", "C. Citro", "G.S. Corrado", "A. Davis", "J. Dean" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1603.04467,", "year": 2016 }, { "authors": [ "E. Akbas", "M.P. Eckstein" ], "title": "Object detection through search with a foveated visual system", "venue": "PLoS computational biology, 13(10):e1005743,", "year": 2017 }, { "authors": [ "A. Azulay", "Y. Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2019 }, { "authors": [ "K. Behrendt", "L. Novak", "R. Botros" ], "title": "A deep learning approach to traffic lights: Detection, tracking, and classification", "venue": "Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1370–1377. IEEE,", "year": 2017 }, { "authors": [ "C.-A. Brust", "S. Sickert", "M. Simon", "E. Rodner", "J. Denzler" ], "title": "Convolutional patch networks with spatial prior for road detection and urban scene understanding", "venue": "International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP),", "year": 2015 }, { "authors": [ "G.F. Elsayed", "P. Ramachandran", "J. Shlens", "S. Kornblith" ], "title": "Revisiting spatial invariance with low-rank local connectivity", "venue": "International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "J. Geiping", "H. Bauermeister", "H. Dröge", "M. Moeller" ], "title": "Inverting gradients–how easy is it to break privacy in federated learning", "venue": "arXiv preprint arXiv:2003.14053,", "year": 2020 }, { "authors": [ "R. Gens", "P.M. Domingos" ], "title": "Deep symmetry networks", "venue": "Advances in neural information processing systems (NeurIPS), pp. 2537–2545,", "year": 2014 }, { "authors": [ "D. Griffith", "C. Amrhein" ], "title": "An evaluation of correction techniques for boundary effects in spatial statistical analysis: traditional methods", "venue": "Geographical Analysis, 15(4):352–360,", "year": 1983 }, { "authors": [ "V. Gupta", "N. Ramani" ], "title": "A note on convolution and padding for two-dimensional data", "venue": "Geophysical Prospecting, 26(1):214–217,", "year": 1978 }, { "authors": [ "L. Hamey" ], "title": "A functional approach to border handling in image processing", "venue": "International Conference on Digital Image Computing: Techniques and Applications, pp. 1–8,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778,", "year": 2016 }, { "authors": [ "A.G. Howard", "M. Zhu", "B. Chen", "D. Kalenichenko", "W. Wang", "T. Weyand", "M. Andreetto", "H. Adam" ], "title": "MobileNets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "C. Innamorati", "T. Ritschel", "T. Weyrich", "N.J. Mitra" ], "title": "Learning on the edge: Investigating boundary filters in CNNs", "venue": "International Journal of Computer Vision (IJCV), pp. 1–10,", "year": 2019 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "International Conference on Machine Learning (ICML), pp. 
448– 456,", "year": 2015 }, { "authors": [ "M.A. Islam", "S. Jia", "N.D. Bruce" ], "title": "How much position information do convolutional neural networks encode", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "M. Jaderberg", "K. Simonyan", "A. Zisserman" ], "title": "Spatial transformer networks. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2017 }, { "authors": [ "O.S. Kayhan", "J.C. van Gemert" ], "title": "On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location", "venue": "In IEEE conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "T.L. Kijewski-Correa" ], "title": "Full-scale measurements and system identification: A time-frequency perspective", "venue": "PhD thesis, University of Notre Dame.,", "year": 2003 }, { "authors": [ "I. Kim", "W. Baek", "S. Kim" ], "title": "Spatially attentive output layer for image classification", "venue": "IEEE conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "H. Larochelle", "G.E. Hinton" ], "title": "Learning to combine foveal glimpses with a third-order boltzmann machine", "venue": "Advances in neural information processing systems (NeurIPS), pp. 1243–1251,", "year": 2010 }, { "authors": [ "G. Liu", "F.A. Reda", "K.J. Shih", "T.-C. Wang", "A. Tao", "B. Catanzaro" ], "title": "Image inpainting for irregular holes using partial convolutions", "venue": "European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "G. Liu", "K.J. Shih", "T.-C. Wang", "F.A. Reda", "K. Sapra", "Z. Yu", "A. Tao", "B. Catanzaro" ], "title": "Partial convolution based padding", "venue": "arXiv preprint arXiv:1811.11718,", "year": 2018 }, { "authors": [ "R. Liu", "J. Jia" ], "title": "Reducing boundary artifacts in image deconvolution", "venue": "IEEE International Conference on Image Processing (ICIP), pp. 505–508,", "year": 2008 }, { "authors": [ "R. Liu", "J. Lehman", "P. Molino", "F.P. Such", "E. Frank", "A. Sergeev", "J. Yosinski" ], "title": "An intriguing failing of convolutional neural networks and the CoordConv solution", "venue": "Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616,", "year": 2018 }, { "authors": [ "W. Liu", "D. Anguelov", "D. Erhan", "C. Szegedy", "S. Reed", "C.-Y. Fu", "A.C. Berg" ], "title": "SSD: Single shot multibox detector", "venue": "European Conference on Computer Vision, pp. 21–37,", "year": 2016 }, { "authors": [ "S. Lou", "X. Jiang", "P.J. Scott" ], "title": "Fast algorithm for morphological filters", "venue": "Journal of Physics: Conference Series, 311(1):012001,", "year": 2011 }, { "authors": [ "W. Luo", "Y. Li", "R. Urtasun", "R. Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS), pp. 4898–4906,", "year": 2016 }, { "authors": [ "R. Murase", "M. Suganuma", "T. Okatani" ], "title": "How can cnns use image position for segmentation", "venue": "arXiv preprint arXiv:2005.03463,", "year": 2020 }, { "authors": [ "A.-D. Nguyen", "S. Choi", "W. Kim", "S. Ahn", "J. Kim", "S. Lee" ], "title": "Distribution padding in convolutional neural networks", "venue": "IEEE International Conference on Image Processing (ICIP), pp. 4275–4279,", "year": 2019 }, { "authors": [ "D. Novotny", "S. Albanie", "D. Larlus", "A. 
Vedaldi" ], "title": "Semi-convolutional operators for instance segmentation", "venue": "European Conference on Computer Vision (ECCV), pp. 86–102,", "year": 2018 }, { "authors": [ "A. Odena", "V. Dumoulin", "C. Olah" ], "title": "Deconvolution and checkerboard artifacts", "venue": "Distill, 1 (10):e3,", "year": 2016 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin", "N. Gimelshein" ], "title": "PyTorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "P.O. Pinheiro", "T.-Y. Lin", "R. Collobert", "P. Dollár" ], "title": "Learning to refine object segments", "venue": "European Conference on Computer Vision (ECCV), pp. 75–91,", "year": 2016 }, { "authors": [ "S. Schubert", "P. Neubert", "J. Pöschmann", "P. Pretzel" ], "title": "Circular convolutional neural networks for panoramic images and laser data", "venue": "IEEE Intelligent Vehicles Symposium (IV), pp. 653– 660,", "year": 2019 }, { "authors": [ "E. Shalnov" ], "title": "BSTLD-demo: A sample project to train and evaluate model on BSTLD", "venue": "https: //github.com/e-sha/BSTLD_demo,", "year": 2019 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "G. Sundaramoorthi", "T.E. Wang" ], "title": "Translation insensitive CNNs", "venue": "arXiv preprint arXiv:1911.11238,", "year": 2019 }, { "authors": [ "C. Szegedy", "W. Liu", "Y. Jia", "P. Sermanet", "S. Reed", "D. Anguelov", "D. Erhan", "V. Vanhoucke", "A. Rabinovich" ], "title": "Going deeper with convolutions", "venue": "IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9,", "year": 2015 }, { "authors": [ "S. Vashishth", "S. Sanyal", "V. Nitin", "N. Agrawal", "P. Talukdar" ], "title": "InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions", "venue": "AAAI conference on Artifical Intelligence,", "year": 2020 }, { "authors": [ "S. Wu", "G. Wang", "P. Tang", "F. Chen", "L. Shi" ], "title": "Convolution with even-sized kernels and symmetric padding", "venue": "Advances in Neural Information Processing Systems (NeurIPS), pp. 1192–1203,", "year": 2019 }, { "authors": [ "F. Yu", "V. Koltun" ], "title": "Multi-scale context aggregation by dilated convolutions", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "R. Zhang" ], "title": "Making convolutional networks shift-invariant again", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 } ]
[ { "heading": null, "text": "1 MOTIVATION\nConvolutional neural networks (CNNs) serve as feature extractors for a wide variety of machinelearning tasks. Little attention has been paid to the spatial distribution of activation in the feature maps a CNN computes. Our interest in analyzing this distribution is triggered by mysterious failure cases of a traffic light detector: The detector successfully detects a small but visible traffic light in a road scene. However, it fails completely in detecting the same traffic light in the next frame captured by the ego-vehicle. The major difference between both frames is a limited shift along the vertical dimension as the vehicle moves forward. Therefore, the drastic difference in object detection is surprising given that CNNs are often assumed to have a high degree of translation invariance [8; 17].\nThe spatial distribution of activation in feature maps varies with the input. Nevertheless, by closely examining this distribution for a large number of samples, we found consistent patterns among them, often in the form of artifacts that do not resemble any input features. This work aims to analyze the root cause of such artifacts and their impact on CNNs. We show that these artifacts are responsible for the mysterious failure cases mentioned earlier, as they can induce ‘blind spots’ for the object detection head. Our contributions are:\n• Demonstrating how the padding mechanism can induce spatial bias in CNNs (Section 2). • Demonstrating how spatial bias can impair downstream tasks (Section 3). • Identifying uneven application of 0-padding as a resolvable source of bias (Section 5). • Relating the padding mechanism with the foveation behavior of CNNs (Section 6). • Providing recommendations to mitigate spatial bias and demonstrating how this can prevent\nblind spots and boost model accuracy.\n2 THE EMERGENCE OF SPATIAL BIAS IN CNNS\nOur aim is to determine to which extent activation magnitude in CNN feature maps is influenced by location. We demonstrate our analysis on a publicly-available traffic-light detection model [36]. This model implements the SSD architecture [26] in TensorFlow [1], using MobileNet-v1 [13] as a feature extractor. The model is trained on the BSTLD dataset [4] which annotates traffic lights in road scenes. Figure 1 shows two example scenes from the dataset. For each scene, we show two feature maps computed by two filters in the 11th convolutional layer. This layer contains 512 filters whose feature maps are used directly by the first box predictor in the SSD to detect small objects.\n1\nPublished as a conference paper at ICLR 2021\nThe bottom row in Figure 1 shows the average response of each of the two aforementioned filters, computed over the test set in BSTLD. The first filter seems to respond mainly to features in the top half of the input, while the second filter responds mainly to street areas. There are visible lines in the two average maps that do not seem to resemble any scene features and are consistently present in the individual feature maps. We analyzed the prevalence of these line artifacts in the feature maps of all 512 filters. The right column in Figure 1 shows the average of these maps per scene, as well as over the entire test set (see supplemental for all 512 maps). 
The artifacts are largely visible in the average maps, with variations per scene depending on which individual maps are dominant.\nA useful way to make the artifacts stand out is to neutralize scene features by computing the feature maps for a zero-valued input. Figure 2 depicts the resulting average map for each convolutional layer after applying ReLU units. The first average map is constant as we expect with a 0-valued input. The second map is also constant except for a 1-pixel boundary where the value is lower at the left border and higher at the other three borders. We magnify the corners to make these deviations visible. The border deviations increase in thickness and in variance at subsequent layers, creating multiple line artifacts at each border. These artifacts become quite pronounced at ReLU 8 where they start to propagate inwards, resembling the ones in Figure 1.\n2\nPublished as a conference paper at ICLR 2021\nIt is evident that the 1-pixel border variations in the second map are caused by the padding mechanism in use. This mechanism pads the output of the previous layer with a 1-pixel 0-valued border in order to maintain the size of the feature map after applying 3x3 convolutional. The maps in the first layer are not impacted because the input we feed is zero valued. Subsequent layers, however, are increasingly impacted by the padding, as preceding bias terms do not warrant 0-valued input.\nIt is noticeable in Figure 2 that the artifacts caused by the padding differ across the four borders. To investigate this asymmetry, we analyze the convolutional kernels (often called filters) that produce the feature maps. Figure 3 depicts a per-layer mean of these 3x3 kernels. These mean kernels exhibit different degrees of asymmetry in the spatial distribution of their weights. For example, the kernels in L1 assign (on average) a negative weight at the left border, and a positive weight at the bottom. This directly impacts the padding-induced variation at each border. Such asymmetries are related to uneven application of padding as we explain in Section 5.\n3 IMPLICATIONS OF SPATIAL BIAS\nWe demonstrate how feature-map artifacts can cause blind spots for the SSD model. Similar issues arise in several small-object detectors, e.g., for faces and masks, as well as in pixel-oriented tasks such as semantic segmentation and image inpainting (see supplemental for examples).\nFigure 4 illustrates how the SSD predicts small objects based on the feature maps of the 11-th convolutional layer. The SSD uses the pixel positions in these maps as anchors of object proposals. Each proposal is scored by the SSD to represent a target category, with ”background“ being an implicit category that is crucial to exclude irrelevant parts of the input. In addition to these scores, the SSD computes a bounding box to localize the predicted object at each anchor. We examine\n3\nPublished as a conference paper at ICLR 2021\nobject proposals computed at 1:2 aspect ratio, as they resemble the shape of most traffic lights in the dataset. We visualize the resulting score maps both for the background category and for traffic lights, when feeding a 0-valued input to the SSD. We also visualize the bounding boxes of these proposals in the image space. The SSD predicts the image content to be of background category at all anchor locations, as evident from the value range in both score maps. Such predictions are expected with an input that contains no traffic lights. 
However, the line artifacts in the feature maps have a strong impact on the score maps. These artifacts elevate the likelihood of anchors closer to the top to be classified as background (see the yellow band in the background score map). Conversely, these anchors have significantly lower scores for the traffic light category, compared with other anchors in the feature map. Such difference in the impact on the target categories is due to the different weights the SSD assigns to the feature maps for each target. As a result, the artifacts lead to potential blind spots in which the scores for certain categories are artificially muted.\nTo validate whether or not the blind spots hinder object detection, we examine road scenes that contain highly-visible traffic light instances in the impacted area. Figure 4-bottom shows an example of such a scene. The SSD computes a low detection score of 7% when the traffic light lies in the blind spot (see middle image), far below the detection false-positive cutoff. Shifting the scene image upwards or downwards makes the instance detectable with a high score as long as it lies outside the blind spot. This explains the failure cases mentioned in Section 1. To further validate this effect, we run the SSD on baseline images that each contains one traffic light instance at a specific location in the input. We store the detection score for each instance. Figure 5a depicts the computed scores in a 2D map. It is evident that the model fails to detect the traffic light instance exactly when it is located within the “blind spot” band. The artifacts further disrupt the localization of the objects as evident in the top-right plot in Figure 4 which shows per-anchor object proposals computed for a 0 input.\n4 REMINDER: WHY IS PADDING NEEDED IN CNNS?\nPadding is applied at most convolutional layers in CNNs to serve two fundamental purposes:\nMaintaining feature map size A padding that satisfies this property is often described as SAME or HALF padding. FULL padding expands the maps by kernel size - 1 along each dimension. VALID padding performs no padding, eroding the maps by the same amount. SAME padding is important to (1) design deep networks that can handle arbitrary input size (a challenge in the presence of gradual erosion), (2) maintain the aspect ratio of non-square input, and (3) concatenate feature maps from different layers as in Inception [39] and ResNet [12] models.\nReducing information bias against the boundary Consider a 3⇥3 kernel applied to a 2D input. An input location at least 2 pixels away from the boundary contributes to nine local convolution operations when computing the feature map. On the other hand, the corner is involved only one time under VALID padding, four times under a 1-pixel SAME 0-padding, and nine times under a 2-pixel FULL 0-padding. With SAME 0-padding, the cumulative contribution differences among the input pixels grow exponentially over the CNN layers. We refer to such uneven treatment of input pixels as the foveation behavior of the padding mechanism and elaborate on this in Section 6.\nWe next explore solutions to the issues that cause padding to induce spatial bias.\n4\nPublished as a conference paper at ICLR 2021\n5 ELIMINATING UNEVEN APPLICATION OF PADDING\nWhile useful to reduce bias against the boundary, applying padding at down-sampling layers can lead to asymmetry in CNN internals. 
Figure 6a illustrates the source of this asymmetry when strided convolution is used for downsampling: At one side of the feature map, the padding is consumed by the kernel while at the other side it is not. To warrant even application of padding throughout the CNN, the following must hold at all d down-sampling layers, where (hi, wi) is the output shape at the i-th layer with khi ⇥ kwi as kernel size, (shi , swi ) as strides, and = (phi , pwi ) as padding amount (refer to appendix A for a proof):\n8i 2 {1, . . , d} : hi 1 = shi · (hi 1) + khi 2 · phi ^ wi 1 = swi · (wi 1) + kwi 2 · pwi (1)\nThe values h0 and w0 represent the CNN input dimensions. The above constraints are not always satisfied during training or inference with arbitrary input dimensions. For example, ImageNet classifiers based on ResNet [12] and MobileNet [13] contain five down-sampling layers (d = 5) that apply 1-pixel 0-padding before performing 2-strided convolution. To avoid uneven application of padding, the input to these CNNs must satisfy the following, as explained in appendix A:\nh0 = a1⇥2d+1 = 32 ·a1+1 and w0 = a2⇥2d+1 = 32 ·a2+1 where a1, a2 2 N+ (2)\nThe traditional 1 and prevalent input size for training ImageNet models is 224⇥224. This size violates Eq. 2, leading to uneven padding at every down-sampling layer in ResNet and MobileNet models where 0-padding is effectively applied only at the left and top sides of layer input. This over-represents zeros at the top and left sides of 3⇥ 3 feature-map patches the filters are convolved with during training. The top row of Figure 6b shows per-layer mean filters in three ResNet models in PyTorch [33], pre-trained on ImageNet with 224⇥224 images. In all of these models, a few of the mean filters, adjacent to down-sampling layers, exhibit stark asymmetry about their centers. We increase the image size to 225⇥225 without introducing additional image information2. This size satisfies Eq. 2, warranting even application of padding at every downsampling layer in the above models. Retraining the models with this size strongly reduces this asymmetry as evident in the bottom row of Figure 6b. This, in turn, visibly boosts the accuracy in all models we experimented with as we report in Table 1. The accuracy did not improve further when we retrained two of the models, ResNet-18 and ResNet-34, on 226 ⇥ 226 images. This provides evidence that the boost is due to eliminating uneven padding and not merely due to increasing the input size.\n1 This size has been used to facilitate model comparison on ImageNet, since the inception of AlexNet. 2 This is done via constant padding. The side to pad with one pixel is chosen at random to balance out the\napplication of padding at both sides over the training set. No additional padding is applied at further layers.\n5\nPublished as a conference paper at ICLR 2021\nReplacing 0-padding with a padding method that reuses feature map values can alleviate the asymmetry in the learned filters in the presence of unevenly applied padding. Another possibility is to use a rigid downsampling kernel, such as max-pooling, instead of a learned one. Appendix C demonstrates both possibilities. Finally, antialiasing before downsampling [43] can strongly reduce the asymmetry as we elaborate in Section 8 and in Appendix E.\nEven when no padding is applied (phi = 0 or pwi = 0), an input size that does no satisfy Eq. 1 can lead to uneven erosion of feature maps, in turn, reducing the contribution of pixels from the impacted sides (Fig 7e. 
Satisfying Eq 1 imposes a restriction on input size, e.g., to values in increments of 2d = 32 with the above models (193⇥193, 225⇥225, 257⇥257, ...). Depending on the application domain, this can be guaranteed either by resizing an input to the closest increment, or by padding it accordingly with suited values.\n6 PADDING MECHANISM AND FOVEATION\nBy foveation we mean the unequal involvement of input pixels in convolutional operations throughout the CNN. Padding plays a fundamental role in the foveation behavior of CNNs. We visualize this behavior by means of a foveation map that counts for each input pixel the number of convolutional paths through which it can propagate information to the CNN output. We obtain these counts by computing the effective receptive field [28] for the sum of the final convolutional layer after assigning all weights in the network to 1 (code in supplemental). Neutralizing the weights is essential to obtain per-pixel counts of input-output paths that reflect the foveation behavior.\nFigure 7a shows the extensive foveation effect when no padding is applied. The diminishing contribution of vast areas of the input explains the drastic drop in accuracy recently observed under VALID padding [16]. In contrast, FULL 0-padding does not incur foveation, however, at the cost of increasing the output size after each layer, making it impractical as explained in Section 4. SAME 0-padding incurs moderate foveation at the periphery, whose absolute extent depends on the number of convolutional layers and their filter sizes. Its relative extent depends on the input size: the larger the input, the larger the ratio of the constant area in yellow (refer to appendix B for a detailed example).\n6\nPublished as a conference paper at ICLR 2021\nFigure 7b shows the foveation behavior of alternatives to SAME 0-padding that have roots in wavelet analysis [19] and image processing [27]. Mirror padding mirrors pixels at the boundary to fill the padding area. When the border is included (SYMMETRIC mode in TensorFlow) all input pixels have an equal number of input-output paths 3, resulting in a uniform foveation map. When the border is not included (REFLECT mode both in PyTorch and in TensorFlow), the map exhibits bias against the border and towards a contour in its proximity. This bias is amplified over multiple layers. Replication padding exhibits the opposite bias when the padding area is wider than 1 pixel. This is because it replicates the outer 1-pixel border multiple times to fill this area 3. The method is equivalent to SYMMETRIC if the padding area is 1-pixel wide. Circular padding wraps opposing borders, enabling the kernels to seamlessly operate on the boundary and resulting in a uniform map. Partial Convolution [22] has been proposed as a padding method that treats pixels outside the original image as missing values and rescales the computed convolutions accordingly [23]. Its foveation behavior resembles reflective padding 3. Distribution padding [30] resizes the input to fill the padding area around the original feature map, aiming at preserving the distribution of the map. Its foveation map is largely uniform, except for the corners and edges.\nImpact of input size Besides influencing the relative extent of foveation effects, the input size also determines the presence of uneven padding (or uneven feature-map erosion), as we discussed in Section 5. Figure 7e shows the foveation map for VGG-19 with a 127⇥127 input. This input violates Eq. 
1 at every downsampling layer (appendix A), leading to successive feature map erosion at the bottom and right sides which is reflected in the foveation map (see appendix B for a detailed example). The bottom-right part of the input is hence less involved in the CNN computations.\nImpact of dilation We assign a dilation factor of 2 to all VGG-19 convolutional layers. While this exponentially increases the receptive field of the neurons at deeper layers [42], dilation doubles the extent of the non-uniform peripheral areas that emerge with SAME 0-padding as evident in Figure 7c. SYMMETRIC and circular padding maintain uniform foveation maps regardless of dilation 3. In contrast, dilation increases the complexity of these maps for REFLECT and replication padding.\nImpact of strides Whether learned on based on pooling, downsampling layers can amplify the impact of succeeding convolutional layers on foveation behaviour. Furthermore, these layers can cause input pixels to vary in the count of their input-output paths. This can happen when the kernel size is not divisible by the stride, leading to a checkerboard pattern in the foveation maps. This manifests in ResNet models as we illustrate in appendix B. In VGG-19, all max-pooling layers use a stride of 2 and kernel size of 2. Changing the kernel size to 3 leads to a checkerboard pattern as evident in Figure 7d. Such effects were shown to impact pixel-oriented tasks [32].\nThe padding technique and its foveation behaviour have direct impact on feature-map artifacts (Section 7), and on the ability of CNNs to encode spatial information (Section 8). Understanding the foveation behavior is key to determine how suited a padding method is for a given task. For example, small object detection is known to be challenging close to the boundary [26], in part due to the foveation behavior of SAME 0-padding. In Figure 5b, we change the padding method in the SSD to SYMMETRIC. The stimulus is noticeably more detectable at the boundary, compared with 0-padding 4. In contrast, ImageNet classification is less sensitive to foveation effects because the target objects are mostly located away from the periphery. Nevertheless, the padding method was shown to impact classification accuracy [23] because it still affects feature map artifacts.\n7 PADDING METHODS AND FEATURE MAP ARTIFACTS\nIt is also noticeable that the score map in Figure 5b is more uniform than in Figure 5a. In particular, under SYMMETRIC padding the model is able to detect traffic lights placed in the blind spots of the original 0-padded model. To verify whether the line artifacts in Figure 2 are mitigated, we inspect the mean feature maps of the adapted model. With a constant input, SYMMETRIC padding warrants constant maps throughout the CNN because it reuses the border to fill the padding area. Instead, we average these maps over 30 samples generated uniformly at random. Figure 8 depicts the mean maps which are largely uniform, unlike the case with 0-padding.\n3 Refer to appendix F or to http://mind-the-pad.github.io for visual illustration and further theoretical analysis of the foveation behavior.\n4Since the input size causes uneven application of padding, the right and bottom borders are still challenging.\n7\nPublished as a conference paper at ICLR 2021\nTo further analyze the impact of SYMMETRIC padding, we retrain the adapted model following the original training protocol. 
This significantly improves the average precision (AP) as reported in Table 2 under different overlap thresholds (matching IoU), confirming that small object detection is particularly sensitive to feature-map artifacts.\nOf the padding methods listed in Section 6, mirror padding in both SYMMETRIC and REFLECT modes, PartialConv, and circular padding are generally effective at reducing feature map artifacts that emerge under zero padding, in particular salient line patterns. In contrast, distribution padding can induce significant artifacts. Refer to appendix D for comparative examples of artifacts under the aforementioned padding schemes.\nArtifact magnitude and propagation While feature-map artifacts are induced by the padding mechanism at the boundary, their magnitude and inward propagation in the maps are impacted by several architectural aspects of CNNs. In particular, certain normalization schemes such as batchnorm [15] tend to limit the range of variation within a feature map and to relatively harmonize this range across different maps. This, in turn, impacts how possible artifacts in these maps accumulate when they are processed by the next convolutional layer. Similarly, artifacts that manifest after applying ReLU units are of a positive sign. These factors were instrumental in the formation of potential blind spots described in Section 3. We hence recommend to involve non-convolutional layers when inspecting the feature maps. Besides having possible impact on artifact magnitude, several aspects of convolution arithmetic, such as filter size and dilation factors, can also impact the spatial propagation of these artifacts.\n8 RELATED FINDINGS AND TAKEAWAYS\nHandling the boundary is an inherent challenge when dealing with spatial data [9]. Mean padding is known to cause visual artifacts in traditional image processing, with alternative methods proposed to mitigate them [24]. CNNs have been often assumed to deal with such effects implicitly. Innamorati et al [14] propose learning separate sets of filters dedicated to the boundaries to avoid impacting the weights learned by regular filters. A grouped padding strategy, proposed to support 2⇥2 filters [41], offers avenues to mitigate uneven padding and corresponding skewness in foveation maps without restrictions on input size (see our note in appendix B for explanation). Finally, insights from signal and image processing [10; 11] could inspire further CNN padding schemes.\nZero padding has been recently linked to CNNs’ ability to encode position information [7; 16; 18; 29]. In contrast, circular padding was shown to limit this ability [7] and to boost shift invariance [35]. The input sizes in those studies do induce uneven padding. This can be, in part, the underlying mechanism behind the aforementioned ability. Whether or not this ability is desirable depends on the task, with several methods proposed to explicitly encode spatial information [5; 6; 20; 25; 29; 31].\n8\nPublished as a conference paper at ICLR 2021\nDownsampling using max-pooling or strided convolution has been shown to impact shift invariance in CNNs by incurring aliasing effects [3; 38; 43]. These effects can manifest in the same symptoms we reported in Section 1, albeit for a different reason. Zhang [43] demonstrated how blurring the feature maps before subsampling mitigates aliasing effects and improves ImageNet classification accuracy of various popular CNNs. 
8 RELATED FINDINGS AND TAKEAWAYS\nHandling the boundary is an inherent challenge when dealing with spatial data [9]. Mean padding is known to cause visual artifacts in traditional image processing, with alternative methods proposed to mitigate them [24]. CNNs have often been assumed to deal with such effects implicitly. Innamorati et al. [14] propose learning separate sets of filters dedicated to the boundaries to avoid impacting the weights learned by regular filters. A grouped padding strategy, proposed to support 2×2 filters [41], offers avenues to mitigate uneven padding and the corresponding skewness in foveation maps without restrictions on input size (see our note in appendix B for explanation). Finally, insights from signal and image processing [10; 11] could inspire further CNN padding schemes.\nZero padding has been recently linked to CNNs' ability to encode position information [7; 16; 18; 29]. In contrast, circular padding was shown to limit this ability [7] and to boost shift invariance [35]. The input sizes in those studies do induce uneven padding. This can be, in part, the underlying mechanism behind the aforementioned ability. Whether or not this ability is desirable depends on the task, with several methods proposed to explicitly encode spatial information [5; 6; 20; 25; 29; 31].\nDownsampling using max-pooling or strided convolution has been shown to impact shift invariance in CNNs by incurring aliasing effects [3; 38; 43]. These effects can manifest in the same symptoms we reported in Section 1, albeit for a different reason. Zhang [43] demonstrated how blurring the feature maps before subsampling mitigates aliasing effects and improves the ImageNet classification accuracy of various popular CNNs. We analyzed the mean filters in antialiased MobileNet and ResNet models pre-trained on ImageNet under 0-padding, with 224×224 as input size (refer to appendix E). We found that antialiasing can also mitigate the asymmetry of mean filters that exhibited high asymmetry in the baseline models, especially at deeper layers. This is remarkable given that these models are trained on 224×224 images, which incurs one-sided zero padding at every downsampling layer. This could, in part, be attributed to the ability of the BlurPool operator used in antialiased CNNs to smoothen the acuity of zero-padded borders, in turn reducing the value imbalance incurred by one-sided padding. Further analysis is needed to examine the interaction between padding and aliasing effects in CNNs and to establish possible synergy between antialiasing and eliminating uneven application of padding.\nLuo et al. [28] drew connections between effective receptive fields and foveated vision. Our analysis links foveation behavior with the padding scheme and suggests that it might occur implicitly in CNNs when using VALID or SAME 0-padding, without the need for explicit mechanisms [2; 21]. Furthermore, it explains the drastic accuracy drop noted by [16] under VALID padding, which is amplified by feature-map erosion.\nChoosing a padding method SAME 0-padding is by far the most widely-used method. Compared with other methods, it can enable as much as 50% faster training and inference. Problem-specific constraints can dictate different choices [34; 35; 40]. In the absence of a universally superior padding method, we recommend considering multiple ones while paying attention to the nature of the data and the task, as well as to the following aspects:\n• Feature-map statistics: 0-padding can alter the value distribution within the feature maps and can shift their mean value in the presence of ReLU units. The alternatives presented in Section 6 tend to preserve this distribution, thanks to reusing existing values in the maps.\n• Foveation behavior: 0-padding might not be suited for tasks that require high precision at the periphery, unlike circular and SYMMETRIC mirror padding.\n• Interference with image semantics (esp. with a padding amount > 1 pixel): For example, circular padding could introduce border discontinuities unless the input is panoramic [35].\n• Potential to induce feature map artifacts: All alternatives to 0-padding induce relatively fewer artifacts, except for Distribution padding [30] (see appendix D).\nWe also recommend eliminating uneven padding at downsampling layers both at training and at inference time, as we illustrated in Section 5. This is especially important when zero padding is applied and the downsampling is learned. A short sketch comparing the built-in padding modes of a modern framework follows below. The scripts used to generate the visualizations in this paper are available in the supplemental as well as at http://mind-the-pad.github.io.
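As a concrete aid for such comparisons, the following sketch (ours; PyTorch is assumed, and note that SYMMETRIC mirror padding is not among its built-in Conv2d modes, though NumPy offers it via np.pad(mode='symmetric')) measures how many convolution products involve a corner pixel under each built-in mode:

import torch
import torch.nn as nn

x = torch.zeros(1, 1, 8, 8)
x[0, 0, 0, 0] = 1.0  # unit impulse at the top-left corner
for mode in ["zeros", "circular", "reflect", "replicate"]:
    conv = nn.Conv2d(1, 1, 3, padding=1, padding_mode=mode, bias=False)
    nn.init.constant_(conv.weight, 1.0)
    with torch.no_grad():
        n_ops = conv(x).sum().item()  # conv products that see the corner pixel
    print(mode, int(n_ops))
# zeros: 4, circular: 9, reflect: 4, replicate: 9. For a 3x3 kernel,
# replicate coincides with SYMMETRIC and treats the corner like an
# interior pixel (9 operations), as does circular padding.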
Summary We demonstrated how the padding mechanism can induce spatial bias in CNNs, in the form of skewed kernels and feature-map artifacts. These artifacts can be highly pronounced with the widely-used 0-padding when applied unevenly at the four sides of the feature maps. We demonstrated how such uneven padding can inherently take place in state-of-the-art CNNs, and how the artifacts it causes can be detrimental to certain tasks such as small object detection. We provided visualization methods to expose these artifacts and to analyze the implication of various padding schemes on boundary pixels. We further proposed solutions to eliminate uneven padding and to mitigate spatial bias in CNNs. Further work is needed to closely examine the implications of spatial bias and foveation in various applications (see supplementary for examples), as well as padding impact on recurrent models and 1-D CNNs.\nACKNOWLEDGEMENT\nWe are thankful to Ross Girshick for providing useful recommendations and experiment ideas, and to Shubham Muttepawar for implementing an interactive tool out of our analysis scripts, guided by our front-end specialist Edward Wang and our AI user-experience designer Sara Zhang.\nREFERENCES\n[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.\n[2] E. Akbas and M. P. Eckstein. Object detection through search with a foveated visual system. PLoS computational biology, 13(10):e1005743, 2017.\n[3] A. Azulay and Y. Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research (JMLR), 20(184):1–25, 2019.\n[4] K. Behrendt, L. Novak, and R. Botros. A deep learning approach to traffic lights: Detection, tracking, and classification. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1370–1377. IEEE, 2017.\n[5] C.-A. Brust, S. Sickert, M. Simon, E. Rodner, and J. Denzler. Convolutional patch networks with spatial prior for road detection and urban scene understanding. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), 2015.\n[6] G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning (ICML), 2020.\n[7] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller. Inverting gradients–how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.\n[8] R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in neural information processing systems (NeurIPS), pp. 2537–2545, 2014.\n[9] D. Griffith and C. Amrhein. An evaluation of correction techniques for boundary effects in spatial statistical analysis: traditional methods. Geographical Analysis, 15(4):352–360, 1983.\n[10] V. Gupta and N. Ramani. A note on convolution and padding for two-dimensional data. Geophysical Prospecting, 26(1):214–217, 1978.\n[11] L. Hamey. A functional approach to border handling in image processing. In International Conference on Digital Image Computing: Techniques and Applications, pp. 1–8, 2015.\n[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.\n[13] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.\n[14] C. Innamorati, T. Ritschel, T. Weyrich, and N. J. Mitra. Learning on the edge: Investigating boundary filters in CNNs. International Journal of Computer Vision (IJCV), pp. 1–10, 2019.\n[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456, 2015.
[16] M. A. Islam, S. Jia, and N. D. Bruce. How much position information do convolutional neural networks encode? In International Conference on Learning Representations (ICLR), 2020.\n[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems (NeurIPS), pp. 2017–2025, 2015.\n[18] O. S. Kayhan and J. C. van Gemert. On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2020.\n[19] T. L. Kijewski-Correa. Full-scale measurements and system identification: A time-frequency perspective. PhD thesis, University of Notre Dame, 2003.\n[20] I. Kim, W. Baek, and S. Kim. Spatially attentive output layer for image classification. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2020.\n[21] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order boltzmann machine. In Advances in neural information processing systems (NeurIPS), pp. 1243–1251, 2010.\n[22] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In European Conference on Computer Vision, 2018.\n[23] G. Liu, K. J. Shih, T.-C. Wang, F. A. Reda, K. Sapra, Z. Yu, A. Tao, and B. Catanzaro. Partial convolution based padding. arXiv preprint arXiv:1811.11718, 2018.\n[24] R. Liu and J. Jia. Reducing boundary artifacts in image deconvolution. In IEEE International Conference on Image Processing (ICIP), pp. 505–508, 2008.\n[25] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018.\n[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pp. 21–37, 2016.\n[27] S. Lou, X. Jiang, and P. J. Scott. Fast algorithm for morphological filters. Journal of Physics: Conference Series, 311(1):012001, 2011.\n[28] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4898–4906, 2016.\n[29] R. Murase, M. Suganuma, and T. Okatani. How can CNNs use image position for segmentation? arXiv preprint arXiv:2005.03463, 2020.\n[30] A.-D. Nguyen, S. Choi, W. Kim, S. Ahn, J. Kim, and S. Lee. Distribution padding in convolutional neural networks. In IEEE International Conference on Image Processing (ICIP), pp. 4275–4279, 2019.\n[31] D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi. Semi-convolutional operators for instance segmentation. In European Conference on Computer Vision (ECCV), pp. 86–102, 2018.\n[32] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.\n[33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8024–8035, 2019.\n[34] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In European Conference on Computer Vision (ECCV), pp. 75–91, 2016.\n[35] S. Schubert, P. Neubert, J. Pöschmann, and P. Protzel. Circular convolutional neural networks for panoramic images and laser data. In IEEE Intelligent Vehicles Symposium (IV), pp. 653–660, 2019.
[36] E. Shalnov. BSTLD-demo: A sample project to train and evaluate model on BSTLD. https://github.com/e-sha/BSTLD_demo, 2019.\n[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.\n[38] G. Sundaramoorthi and T. E. Wang. Translation insensitive CNNs. arXiv preprint arXiv:1911.11238, 2019.\n[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, 2015.\n[40] S. Vashishth, S. Sanyal, V. Nitin, N. Agrawal, and P. Talukdar. InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In AAAI Conference on Artificial Intelligence, 2020.\n[41] S. Wu, G. Wang, P. Tang, F. Chen, and L. Shi. Convolution with even-sized kernels and symmetric padding. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1192–1203, 2019.\n[42] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR), 2016.\n[43] R. Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning (ICML), 2019.\nA ELIMINATING UNEVEN APPLICATION OF PADDING\nConsider a CNN with d downsampling layers, L_1, L_2, ..., L_d. To simplify the analysis, and without loss of generality, we assume that the kernels in these layers are of square shape and that all other layers maintain their input size. We denote by s_i and k_i the stride and kernel size of layer L_i, by h_i and w_i the dimensions of the feature maps computed by L_i, and by h_0 and w_0 the size of the CNN input. We examine the conditions that warrant no uneven application of padding along the height dimension; parallel conditions apply to the width dimension.\nWe denote by h̄_i the height of the padded input to L_i. The effective portion ĥ_i ≤ h̄_i of this input processed by the convolutional filters in L_i is equal to:\nĥ_i = s_i · (h_i − 1) + k_i\nOur goal is to warrant that ĥ_i = h̄_i, to prevent information loss and to avoid uneven padding along the vertical dimension when the unconsumed part h̄_i − ĥ_i < s_i is an odd number. Since the non-downsampling layers maintain their input size, we can formulate the height of the padded input as h̄_i = h_{i−1} + 2 · p_i, where p_i is the amount of padding applied at the top and at the bottom of the input in L_i. Accordingly, we can warrant no uneven padding if the following holds:\n∀i ∈ [1..d] : h_{i−1} = s_i · (h_i − 1) + k_i − 2 · p_i (3)\nExample 1: ResNet-18 This network contains five downsampling layers (d = 5), all of which use a stride of 2. Despite performing downsampling, all of these layers apply a padding amount entailed by SAME padding to avoid information bias against the boundary. In the four layers having 3×3 kernels (k_i = 3), the amount used is p_i = 1. For the first layer, having 7×7 kernels, this amount is equal to 3. In both cases, the term k_i − 2 · p_i in Eq. 3 is equal to 1. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at downsampling layers should hence satisfy:\n∀i ∈ [1..d] : h_{i−1} = 2 · (h_i − 1) + 1 = 2 · h_i − 1\nAccordingly, the input height should satisfy:\nh_0 = 2^d · h_d − (2^d − 1) = 2^d · (h_d − 1) + 1\nwhere h_d is the height of the final feature map, and can be any natural number larger than 1 to avoid the degenerate case of a 1×1 input. The same holds for the input width:\nw_0 = 2^d · (w_d − 1) + 1\nA 225×225 input satisfies these constraints since 225 = 2^5 · 7 + 1, yielding even padding in all five downsampling layers and output feature maps of size 8×8.\nExample 2: VGG-16 This network contains five max-pooling layers (d = 5), all of which use a stride of 2 and a kernel size of 2 and apply no padding. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at all of these layers should hence satisfy:\n∀i ∈ [1..d] : h_{i−1} = 2 · (h_i − 1) + 2 = 2 · h_i\nAccordingly, the input dimensions should satisfy:\nh_0 = 2^d · h_d and w_0 = 2^d · w_d (4)\nA 224×224 input satisfies these constraints since 224 = 2^5 · 7, causing no feature-map erosion at any downsampling layer and resulting in output feature maps of size 7×7.
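These conditions are easy to check programmatically; the sketch below (our own, with the ResNet-18 downsampling configuration from Example 1 hard-coded) flags the layers whose padded input is not fully and evenly consumed:

def uneven_padding_layers(h0, layers):
    # layers: (stride, kernel, padding) per downsampling layer.
    # A layer violates Eq. 3 when its padded input is not fully used.
    bad, h = [], h0
    for i, (s, k, p) in enumerate(layers, start=1):
        h_next = (h + 2 * p - k) // s + 1
        if h != s * (h_next - 1) + k - 2 * p:
            bad.append(i)
        h = h_next
    return bad

resnet18 = [(2, 7, 3)] + [(2, 3, 1)] * 4     # five downsampling layers
print(uneven_padding_layers(224, resnet18))  # [1, 2, 3, 4, 5]
print(uneven_padding_layers(225, resnet18))  # [] -> no uneven padding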
B THE EXTENT OF FOVEATION UNDER SAME 0-PADDING\nWe illustrate how the absolute extent of foveation under SAME 0-padding depends on the number of convolutional layers, and how its relative extent depends on the input size.\nIn the following maps, color represents the number of paths to the CNN output for each input pixel. Note: The checkerboard pattern is caused by downsampling layers in ResNet that use 3×3 kernels and a stride of 2.\nIn the next figure, we illustrate how uneven application of padding impacts the foveation maps. Note: It is possible to rectify the skewness in the second foveation map by alternating the side where one-sided padding is applied between successive downsampling layers. This, however, does not mitigate the skewness in the learned filters (see the next section).\nC THE IMPACT OF THE PADDING METHOD ON LEARNED WEIGHTS\nIn the presence of uneven application of padding, 0-padding causes skewness in the learned weights because the filters are exposed more frequently to feature-map patches with zeros at their top and left sides. Redundancy methods such as circular or mirror padding mitigate such skewness because they fill the padding areas with values taken from the feature maps. PartialConv also mitigates such skewness because it assumes the pixels in the padding area are missing, and rescales the partial convolutional sum to account for them. Below we show the effectiveness of these alternatives in mitigating the skewness in three ResNet architectures.\nWhat if no padding is applied during downsampling? VGG models perform downsampling using 2×2 pooling layers that do not apply any padding. Accordingly, the mean filters do not exhibit significant skewness, even if the input size does not satisfy Eq. 4.
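One simple way to quantify this skew (our own diagnostic, not part of the paper's released scripts) is to average each layer's kernels over channels and compare the weight mass in opposite halves of the mean filter:

import torch.nn as nn

def mean_filter_skew(model):
    # Report left-vs-right and top-vs-bottom imbalance of each layer's
    # mean filter; values near zero indicate little padding-induced skew.
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d) and min(m.kernel_size) > 1:
            w = m.weight.detach().mean(dim=(0, 1))  # mean k x k filter
            k = w.shape[-1]
            lr = (w[:, :k // 2].sum() - w[:, k - k // 2:].sum()).item()
            tb = (w[:k // 2, :].sum() - w[k - k // 2:, :].sum()).item()
            print(f"{name}: left-right {lr:+.4f}, top-bottom {tb:+.4f}")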
D THE IMPACT OF PADDING METHODS ON FEATURE-MAP ARTIFACTS\nWe show per-layer mean feature maps in ResNet-18 under different padding methods. The mean maps are averaged over 20 input samples generated at random.\nE THE IMPACT OF ANTIALIASING ON THE LEARNED WEIGHTS\nWe demonstrate how antialiasing [43] significantly reduces the asymmetry of mean filters around downsampling layers, even in the presence of unevenly-applied zero padding.\nF FOVEATION ANALYSIS OF PADDING ALGORITHMS\nRefer to http://mind-the-pad.github.io for an interactive and animated visual illustration of padding algorithms and their foveation behavior. This appendix serves as a print version.\nAmong the SAME padding algorithms we discussed in the manuscript, two algorithms warrant that each input pixel is involved in an equal number of convolutional operations, leading to uniform foveation maps: circular padding and SYMMETRIC mirror padding. In contrast, this number varies under zero padding, REFLECT mirror padding, replication padding, and partial convolution.\nWe illustrate in detail how each padding algorithm treats the input pixels. For this purpose we illustrate step by step how each pixel is processed by the convolutional kernel. We choose a set of pixels that are sufficient to expose the behavior of the respective algorithm. This set spans an area within two or three pixels from the boundary that encompasses all relevant cases for the analysis and is situated at the top-left corner. The behavior at the other corners is analogous.\nAll illustrations use a stride of 1. Except for VALID, all configurations warrant SAME padding.\n• VALID Padding: This algorithm is illustrated on a 3×3 kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.\n• Zero Padding: This algorithm is illustrated on a 3×3 kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.\n• Circular Padding: This algorithm is illustrated on a 3×3 kernel without dilation. It is straightforward to prove that the algorithm warrants equal treatment of the pixels irrespective of the kernel size or dilation factor. This is because it effectively applies circular convolution: Once the kernel hits one side, it can seamlessly operate on the pixels of the other side. Circular convolution hence renders the feature map as infinite to the kernel, warranting that edge pixels are treated in the same manner as interior pixels.\n• Mirror Padding (SYMMETRIC): This algorithm warrants that each pixel is involved in the same number of convolutional operations. It is important to notice that, unlike under circular convolution, these operations do not utilize the kernel pixels uniformly, as we demonstrate in detail. We illustrate the algorithm behavior under the following settings:\n– 3×3 kernel and dilation factor of 1.\n– 5×5 kernel and dilation factor of 1.\n– 3×3 kernel and dilation factor of 2.\n– 2×2 kernel and dilation factor of 1, along with a grouped padding strategy to compensate for uneven padding [41].\n– 4×4 kernel size and dilation factor of 1, along with a grouped padding strategy.\n• Mirror Padding (REFLECT): This algorithm is illustrated on a 3×3 kernel without dilation.\n• Replication Padding: This algorithm is illustrated on a 5×5 kernel without dilation. We choose this kernel size since a 3×3 kernel under SAME padding would render the algorithm equivalent to SYMMETRIC mirror padding.\n• Partial Convolution: This algorithm is illustrated on a 3×3 kernel without dilation. Its foveation behavior is analogous to REFLECT mirror padding.
[Appendix F print version: the step-by-step grid illustrations that followed, tabulating the per-pixel convolution-operation counts and kernel-cell usage for VALID, zero, circular, mirror (SYMMETRIC, including dilated and grouped variants), mirror (REFLECT), replication, and partial-convolution padding, are figure material that did not survive text extraction; refer to http://mind-the-pad.github.io for the full illustrations.]
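The per-pixel operation counts that those tables report can be reproduced in one dimension with NumPy (mode names follow np.pad: 'constant' corresponds to zero padding, 'wrap' to circular, 'edge' to replication); the 2-D counts are the outer products of the 1-D ones. A minimal sketch:

import numpy as np

def conv_op_counts(n, k, mode):
    # Count how many SAME-padded convolution windows each position of a
    # length-n signal contributes to, under a given np.pad mode.
    idx = np.arange(n)
    p = (k - 1) // 2
    if mode == "constant":
        padded = np.pad(idx, p, mode="constant", constant_values=-1)
    else:
        padded = np.pad(idx, p, mode=mode)
    counts = np.zeros(n, dtype=int)
    for start in range(n):             # one window per output position
        for j in padded[start:start + k]:
            if j >= 0:
                counts[j] += 1
    return counts

for mode in ["constant", "wrap", "symmetric", "reflect", "edge"]:
    print(f"{mode:9s}", conv_op_counts(8, 3, mode))
# 'wrap' (circular) and 'symmetric' are uniform; 'constant' (zero) and
# 'reflect' under-use the border pixels.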
2021
null
SP:4a4c6ede9645c5b814a84fbd9e91472f0888621e
[ "The paper proposes a modification for the adjoint method, such that to improve the training efficiency of neural ODEs. The proposed idea is that the solution of some terms in the adjoint method can be less accurate, because these are not ODEs but simple integrals, and hence, the error does not propagate. Thus, the solver can utilize bigger steps, and in total to perform less steps. In the experiments the efficiency is demonstrated under different scenarios where neural ODEs are used." ]
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, relative to some norm, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as L2) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks—including time series, generative modeling, and physical control—demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved.
[]
[ { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "David K Duvenaud" ], "title": "Neural networks with cheap differential operators", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "J.R. Dormand", "P.J. Prince" ], "title": "A family of embedded Runge–Kutta formulae", "venue": "J. Comp. Appl. Math,", "year": 1980 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "E. Weinan" ], "title": "A Proposal on Machine Learning via Dynamical Systems", "venue": "Commun. Math. Stat.,", "year": 2017 }, { "authors": [ "Chris Finlay", "Jörn-Henrik Jacobsen", "Levon Nurbekyan", "Adam M. Oberman" ], "title": "How to train your neural ODE: the world of Jacobian and kinetic regularization", "venue": null, "year": 2020 }, { "authors": [ "Arnab Ghosh", "Harkirat Singh Behl", "Emilien Dupont", "Philip H.S. Torr", "Vinay Namboodiri" ], "title": "STEER: Simple Temporal Regularization For Neural ODEs", "venue": "Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Will Grathwohl", "Ricky T.Q. Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Michael F Hutchinson" ], "title": "A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines", "venue": "Communications in Statistics-Simulation and Computation,", "year": 1989 }, { "authors": [ "Jacob Kelly", "Jesse Bettencourt", "Matthew James Johnson", "David Duvenaud" ], "title": "Learning Differential Equations that are Easy to Solve", "venue": null, "year": 2007 }, { "authors": [ "Patrick Kidger", "James Morrill", "James Foster", "Terry Lyons" ], "title": "Neural Controlled Differential Equations for Irregular Time Series", "venue": null, "year": 2005 }, { "authors": [ "D Kingma", "J Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Derek Onken", "Samy Wu Fung", "Xingjian Li", "Lars Ruthotto" ], "title": "Ot-flow: Fast and accurate continuous normalizing flows via optimal transport", "venue": "arXiv preprint arXiv:2006.00104,", "year": 2020 }, { "authors": [ "Alessio Quaglino", "Marco Gallieri", "Jonathan Masci", "Jan" ], "title": "Koutnı́k. 
SNODE: Spectral discretization of neural odes for system identification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chris Rackauckas", "Mike Innes", "Yingbo Ma", "Jesse Bettencourt", "Lyndon White", "Vaibhav Dixit" ], "title": "Diffeqflux.jl-a julia library for neural differential equations", "venue": null, "year": 2019 }, { "authors": [ "Christopher Rackauckas", "Qing Nie" ], "title": "Differentialequations.jl–a performant and feature-rich ecosystem for solving differential equations in julia", "venue": "Journal of Open Research Software,", "year": 2017 }, { "authors": [ "Yulia Rubanova", "Tian Qi Chen", "David K Duvenaud" ], "title": "Latent Ordinary Differential Equations for Irregularly-Sampled Time Series", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Pete Warden" ], "title": "Speech commands: A dataset for limited-vocabulary speech recognition", "venue": null, "year": 2020 }, { "authors": [ "Han Zhang", "Xi Gao", "Jacob Unterman", "Tom Arodz" ], "title": "Approximation capabilities of neural odes and invertible residual networks", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Symplectic ode-net: Learning hamiltonian dynamics with control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Juntang Zhuang", "Nicha Dvornek", "Xiaoxiao Li", "Sekhar Tatikonda", "Xenophon Papademetris", "James Duncan" ], "title": "Adaptive checkpoint adjoint method for gradient estimation in neural ode", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "We use the same setup", "hyperparameters as in Kidger" ], "title": "The loss function is cross entropy", "venue": "The optimiser used was Adam (Kingma & Ba, 2015), with learning rate 1.6 × 10⁻³, batch size of 1024, and 0.01-weighted L2 weight regularisation, trained for 200 epochs. The number of", "year": 2020 }, { "authors": [ "Grathwohl" ], "title": "The loss function is the negative log likelihood − log(p(z(T) = x)) of equation (6). The optimiser used was Adam, with learning rate 10⁻³ and batch size 256, trained for 100 epochs. Relative and absolute tolerance of the solver are both taken to be 10⁻⁵", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "We begin by recalling the usual set-up for neural differential equations." }, { "heading": "1.1 NEURAL ORDINARY DIFFERENTIAL EQUATIONS", "text": "The general approach of neural ordinary differential equations (E, 2017; Chen et al., 2018) is to use ODEs as a learnable component of a differentiable framework. Typically the goal is to approximate a map x 7→ y by learning functions `1( · , φ), `2( · , ψ) and f( · , · , θ), which are composed such that\nz(τ) = `1(x, φ), z(t) = z(τ) + ∫ t τ f(s, z(s), θ) ds and y ≈ `2(z(T ), ψ). (1)\nThe variables φ, θ, ψ denote learnable parameters and the ODE is solved over the interval [τ, T ]. We include the (often linear) maps `1( · , φ), `2( · , ψ) for generality, as in many contexts they are important for the expressiveness of the model (Dupont et al., 2019; Zhang et al., 2020), though our contributions will be focused around the ODE component and will not depend on these maps.\nHere we will consider neural differential equations that may be interpreted as a neural ODE." }, { "heading": "1.2 APPLICATIONS", "text": "Neural differential equations have to the best our knowledge three main applications:\n1. Time series modeling. Rubanova et al. (2019) interleave Neural ODEs with RNNs to produce ODEs with jumps. Kidger et al. (2020) take f(t, z, θ) = g(z, θ)dXdt (t), dependent on some time-varying input X , to produce a neural controlled differential equation.\n2. Continuous Normalising Flows as in Chen et al. (2018); Grathwohl et al. (2019), in which the overall model acts as coupling or transformation between probability distributions,\n3. Modeling or controlling physical environments, for which a differential equation based model may be explicitly desired, see for example Zhong et al. (2020)." }, { "heading": "1.3 ADJOINT EQUATIONS", "text": "The integral in equation (1) may be backpropagated through either by backpropagating through the internal operations of a numerical solver, or by solving the backwards-in-time adjoint equations with respect to some (scalar) loss L.\naz(T ) = dL\ndz(T ) , az(t) = az(T )− ∫ t T az(s) · ∂f ∂z (s, z(s), θ) ds and\ndL\ndz(τ) = az(τ), aθ(T ) = 0, aθ(t) = aθ(T )− ∫ t T az(s) · ∂f ∂θ (s, z(s), θ) ds and dL dθ = aθ(τ),\nat(T ) = dL\ndT , at(t) = at(T )− ∫ t T az(s) · ∂f ∂s (s, z(s), θ) ds and\ndL dτ = at(τ),\n(2)\nThese equations are typically solved together as a joint system a(t) = [az(t), aθ(t), at(t)]. (They are already coupled; the latter two equations depend on az .) As additionally their integrands require z(s), and as the results of the forward computation of equation (1) are usually not stored, then the adjoint equations are typically additionally augmented by recovering z by solving backwards-intime\nz(t) = z(T ) + ∫ t T f(s, z(s), θ)ds. (3)" }, { "heading": "1.4 CONTRIBUTIONS", "text": "We demonstrate that the particular structure of the adjoint equations implies that numerical equation solvers will typically take too many steps, that are too small, wasting time during backpropagation. Specifically, the accept/reject step of adaptive-step-size solvers is too stringent.\nBy applying a correction to account for this, we demonstrate that the number of steps needed to solve the adjoint equations may be reduced by typically about 40%. We observe improvements on some problems by as much as 62%. Factoring in the forward pass (which is unchanged), the overall training time is roughly halved. 
Our method is hyperparameter-free and requires no tuning.\nWe do not observe any change in model performance, and at least with the torchdiffeq package (our chosen differential equation package), this correction may be applied with only 12 lines of code." }, { "heading": "2 METHOD", "text": "" }, { "heading": "2.1 NUMERICAL SOLVERS", "text": "Both the forward pass given by equation (1), and the backward pass given by equations (2) and (3), are solved by invoking a numerical differential equation solver. Our interest here is in adaptive-stepsize solvers. Indeed the default choice for solving many equations is the adaptive-step-size Runge– Kutta 5(4) scheme of Dormand–Prince (Dormand & Prince, 1980), for example as implemented by dopri5 in the torchdiffeq package or ode45 in MATLAB.\nA full discussion of the internal operations of these solvers is beyond our scope here; the part of interest to us is the accept/reject scheme. Consider the case of solving the general ODE\ny(t) = y(τ) + ∫ t τ f(s, y(s)) ds,\nwith y(t) ∈ Rd. Suppose for some fixed t the solver has computed some estimate ŷ(t) ≈ y(t), and it now seeks to take a step ∆ > 0 to compute ŷ(t + ∆) ≈ y(t + ∆). A step is made, and some candidate ŷcandidate(t + ∆) is generated. The solver additionally produces yerr ∈ Rd representing an estimate of the numerical error made in each channel during that step.\nGiven some prespecified absolute tolerance ATOL (for example 10−9), relative tolerance RTOL (for example 10−6), and (semi)norm ‖ · ‖ : Rd → [0,∞) (for example ‖y‖ = √\n1 d ∑d i=1 y 2 i the\nRMS norm), then an estimate of the size of the equation is given by\nSCALE = ATOL+RTOL ·max(ŷ(t), ŷcandidate(t+ ∆)) ∈ Rd, (4)\nwhere the maximum is taken channel-wise, and the error ratio r = ∥∥∥ yerr SCALE ∥∥∥ ∈ R (5) is then computed. If r ≤ 1 then the error is deemed acceptable, the step is accepted and we take ŷ(t+∆) = ŷcandidate(t+∆). If r > 1 then the error is deemed too large, the candidate ŷcandidate(t+∆) is rejected, and the procedure is repeated with a smaller ∆.\nNote the dependence on the choice of norm ‖ · ‖: in particular this determines the relative importance of each channel towards the accept/reject criterion." }, { "heading": "2.2 ADJOINT SEMINORMS", "text": "Not an ODE The key observation is that aθ (and in fact also at) does not appear anywhere in the vector fields of equation (2).\nThis means that (conditioned on knowing z and az), the integral corresponding to aθ is just an integral—not an ODE. As such, it is arguably inappropriate to solve it with an ODE solver, which makes the implicit assumption that small errors now may propagate to create large errors later.\nAccept/reject This is made manifest in the accept/reject step of equation (5). Typical choices of norm ‖ · ‖, such as L2, will usually weight each channel equally. But we have just established that to solve the adjoint equations accurately, it is far more important that z and az be accurate than it is that aθ be accurate.\nSeminorms Thus, when solving the adjoint equations equation (2), we propose to use a ‖ · ‖ that scales down the effect in those channels corresponding to aθ.\nIn practice, in our experiments, we scale ‖ · ‖ all the way down by applying zero weight to the offending channels, so that ‖ · ‖ is in fact a seminorm. This means that the integration steps are chosen solely for the computation of az and z, and the values of aθ are computed just by integrating with respect to those steps.\nExample As an explicit example, note that aθ(T ) = 0. 
When solving the adjoint equation numerically, this means for t close to T that the second term in equation (4) is small. As ATOL is typically also small, then SCALE is additionally small, and the error ratio r in equation (5) is large.\nThis implies that it becomes easy for the error ratio r to violate r ≤ 1, and it is easy for the step to be rejected. Now there is nothing intrinsically bad about a step being rejected—we would like to solve the ODE accurately, after all—the problem is that this is a spurious rejection, as the rejection occurred to ensure the accuracy of aθ, which is as already established unnecessary.\nIn practice, we observe that spurious rejections may occur for any t, not just those near T .\nOther channels In fact, essentially the same argument applies to at as well: this does not affect the value of the vector field either. In a continuous normalising flow, the log-probability channel is also only an integral, rather than an ODE, and again the same argument may be applied.\nDoes this reduce the accuracy of parameter gradients? One obvious concern is that we are typically ultimately interested in the parameter gradients aθ, in order to train a model; with respect to this our approach seems counter-intuitive.\nHowever, we verify empirically that models still train without a reduction in performance. We explain this by noting that the z, az channels truly are ODEs, so that small errors now do propagate to create larger errors later. Thus these are likely the dominant source of error overall." }, { "heading": "2.3 CODE", "text": "Depending on the software package, the code for making this change can be trivial. For example, using PyTorch (Paszke et al., 2019) and torchdiffeq (Chen et al., 2018), the standard set-up requires only a few additional lines of code. The additional 12 lines are marked.\n1 import t o r c h d i f f e q 2 3 def rms norm ( t e n s o r ) : # 4 re turn t e n s o r . pow ( 2 ) . mean ( ) . s q r t ( ) # 5 # 6 def make norm ( s t a t e ) : # 7 s t a t e s i z e = s t a t e . numel ( ) # 8 def norm ( a u g s t a t e ) : # 9 y = a u g s t a t e [ 1 : 1 + s t a t e s i z e ] # 10 a d j y = a u g s t a t e [1 + s t a t e s i z e : 1 + 2 ∗ s t a t e s i z e ] # 11 re turn max ( rms norm ( y ) , rms norm ( a d j y ) ) # 12 re turn norm # 13 # 14 t o r c h d i f f e q . o d e i n t a d j o i n t ( func = . . . , y0 = . . . , t = . . . , 15 a d j o i n t o p t i o n s = d i c t ( norm=make norm ( y0 ) ) ) #\nThis amounts to the extra 12 lines of code stated in the title—a number that even includes the additional whitespace and visual indents.\nTo keep the remainder of this discussion software-agnostic, we defer further explanation of this specific code to Appendix A." }, { "heading": "3 EXPERIMENTS", "text": "We compare our proposed technique against conventionally-trained neural differential equations, across multiple tasks—time series, generative, and physics-informed. These are each drawn from the main applications of neural differential equations, discussed in Section 1.2. In every case, the differential equation solver used is the Dormand–Prince 5(4) solver “dopri5”. The default norm is a mixed L∞/L2 norm used in torchdiffeq; see Appendix A. For the sake of an interesting presentation, we investigate different aspects of each problem.\nIn each case see Appendix B for details on hyperparameters, optimisers and so on.\nThe code for these experiments can be found at [redacted; see supplementary material]." 
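As an illustrative aside to Sections 2.1 and 2.2, the accept/reject criterion of equations (4) and (5) can be sketched in a few lines. This is a hedged toy illustration and not the torchdiffeq internals: the tolerances, the channel layout [z, a_z, a_theta] (omitting the time channel for simplicity), and the fabricated error vector are assumptions made for the demonstration, and absolute values are used in the SCALE estimate, as practical implementations do.

import torch

ATOL, RTOL = 1e-9, 1e-6  # example tolerances from Section 2.1

def rms_norm(x):
    return x.pow(2).mean().sqrt()

def error_ratio(y, y_candidate, y_err, norm):
    # Equation (4): channel-wise estimate of the size of the solution.
    scale = ATOL + RTOL * torch.max(y.abs(), y_candidate.abs())
    # Equation (5): the step is accepted iff this ratio is <= 1.
    return norm(y_err / scale).item()

# Toy augmented adjoint state: d state channels, d adjoint channels, p parameter channels.
d, p = 2, 3
y = torch.randn(2 * d + p)
y_cand = y + 0.01 * torch.randn_like(y)
y_err = 1e-7 * torch.ones(2 * d + p)
y_err[2 * d:] = 1e-3  # large estimated error, but only in the a_theta channels

default_norm = rms_norm                           # weights every channel equally
seminorm = lambda x: rms_norm(x[:2 * d])          # zero weight on the a_theta channels

print(error_ratio(y, y_cand, y_err, default_norm))  # large: a spurious rejection
print(error_ratio(y, y_cand, y_err, seminorm))      # small: the step is accepted

The point of the example is only that the same candidate step can be rejected under the default norm yet accepted under the seminorm, purely because of error in channels that never feed back into the dynamics.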
}, { "heading": "3.1 NEURAL CONTROLLED DIFFERENTIAL EQUATIONS", "text": "Consider the Neural Controlled Differential Equation (Neural CDE) model of Kidger et al. (2020).\nTo recap, given some (potentially irregularly sampled) time series x = ((t0, x0), . . . , (tn, xn)), with each ti ∈ R the timestamp of the observation xi ∈ Rv , let X : [t0, tn] → R1+v be an interpolation such that X(ti) = (ti, xi). For example X could be a natural cubic spline.\nThen take f(t, z, θ) = g(z, θ)dXdt (t) in a Neural ODE model, so that changes in x provoke changes in the vector field, and the model incorporates the incoming information x. This may be thought of as a continuous-time RNN; indeed Kidger et al. (2020) use this to learn functions of (irregular) time series.\nWe apply a Neural CDE to the Speech Commands dataset (Warden, 2020). This is a dataset of one-second audio recordings of spoken words such as ‘left’, ‘right’ and so on. We take 34975 time series corresponding to 10 words, to produce a balanced classification problem. We preprocess the dataset by computing mel-frequency cepstrum coefficients so that each time series is then regularly spaced with length 161 and 20 channels. The data was then normalised to have zero mean and unit variance. We used the torchcde package (Kidger, 2020), which wraps torchdiffeq.\nThe initial map `1 (of equation (1)) is taken to be linear on (t0, x0). The terminal map `2 is taken to be linear on z(tn).\nWe investigate how the effect changes for varying tolerances by varying the pair (RTOL,ATOL) over (10−3, 10−6), (10−4, 10−7), and (10−5, 10−8). For each such pair we run five repeated experiments.\nSee Table 1 for results on accuracy and number of function evaluations. We see that the accuracy of the model is unaffected by our proposed change, whilst the backward pass uses 40%–62% fewer steps, depending on tolerance.\nNext, we investigate how accuracy and function evaluations change during training. See Figure 1. We see that accuracy quickly gets close to its maximum value during training, with only incremental improvements for most of the training procedure. In particular, this statement is true of both approaches, and we do not see any discrepancy between them. Additionally, we see that the number of function evaluations is much lower for the seminorm throughout training." }, { "heading": "3.2 CONTINUOUS NORMALISING FLOWS", "text": "Continuous Normalising Flows (CNF) (Chen et al., 2018) are a class of generative models that define a probability distribution as the transformation of a simpler distribution by following the vector field parameterised by a Neural ODE. Let p(z0) be an arbitrary base distribution that we can efficiently sample from, and compute its density. Then let z(t) be the solution of the initial value problem\nz(0) ∼ p(z0), dz(t)\nzt = f(t, z(t), θ),\nd log p(z(t))\ndt = −tr\n( ∂f\n∂z (t, z(t), θ)\n) ,\nfor which the initial point is randomly sampled from p(z0), and for which the change in log probability density is also tracked as the sample is transformed through the vector field.\nThe distribution at an arbitrary time value T can be trained with maximum likelihood to match data samples, resulting in a generative model of data with the distribution p(z(T )). Furthermore, Grathwohl et al. (2019) combines this with a stochastic trace estimator (Hutchinson, 1989) to create an efficient and unbiased estimator of the log probability for data samples x.\nlog p(z(T ) = x) = log p(z(0)) + Ev∼N (0,1) [∫ 0\nT\nvT ( ∂f\n∂z (t, z(t), θ)\n) v dt ] . 
In Table 2 we show the final test performance and the total number of function evaluations (NFEs) used in the adjoint method over 100 epochs. We see substantially fewer NFEs in experiments on both MNIST and CIFAR-10. Next, we investigate changing model size, by varying the complexity of the vector field f, which is a CNN with d_h hidden channels. We find that, using the seminorm, the backward NFE does not increase as much as when using the default norm. In particular, we can use roughly the same NFEs as the smaller (d_h = 32) model to train a larger (d_h = 128) model, and in doing so achieve a substantial gain in performance (1.00 → 0.95)." }, { "heading": "3.3 HAMILTONIAN DYNAMICS", "text": "Finally we consider the problem of learning Hamiltonian dynamics, using the Symplectic ODE-Net model of Zhong et al. (2020). Under Hamiltonian mechanics, the vector field is parameterised as

f(t, z, θ) = ( ∂H/∂p (q, p, θ), −∂H/∂q (q, p, θ) + g(q, θ) u, 0 ),

where z = (q, p, u) is the state decomposed into (generalised) positions q, momenta p, and control u. H is a Hamiltonian and g is an input term, both parameterised as neural networks. The input g offers a way to control the system, which can be understood via energy shaping.

This parameterisation lends itself to being interpretable. H learns to encode the physics of the system in the form of Hamiltonian mechanics, whilst g learns to encode the way in which inputs affect the system. This can then be used to construct controls u driving the system to a desired state. In this context (unlike the other problems we consider here), z is not a hidden state but instead the output of the model. The evolution of z is matched against the observed state at multiple times z(t_1), z(t_2), ..., and trained with L2 loss.

We consider the fully-actuated double pendulum (“acrobot”) problem. Training data involves small oscillations under constant forcing. We investigate several quantities of interest.

First we investigate the number of function evaluations. See Table 3. We see that the model successfully learns the dynamics, with very small loss (order O(10^−4)) in both cases. However, under our proposed change the model is trained using 43% fewer function evaluations on the backward pass.

Next, we verify that the end goal of controlling the system is achievable. See Figures 2 and 3, in which the double pendulum is successfully controlled from the full-down to the full-upright position, using the seminorm-trained model.

After that, we investigate the locations of the function evaluations. See Figure 4. We see that the default norm makes consistently more evaluations for every time t. This is interesting, as our initial hypothesis was that there would be more spurious rejections for t near the terminal time T, where a_θ(t) is small, as a_θ(T) = 0. In fact we observe consistent improvements for all t.

To conclude, we investigate the locations of the rejected steps. See Figure 5. Note that the Dormand–Prince solver makes 6 function evaluations per step, which is the reason for the difference in scale between Figures 4 and 5. We see that the seminorm produces almost no rejected steps. This accounts for the tight grouping seen in Figure 4, as for each batch the evaluation times are typically at the same locations. Moreover, we observe an interesting structure to the location of the rejected steps with the default norm. We suspect this is due to the time-varying physics of the problem." }, { "heading": "3.4 WHERE DO IMPROVEMENTS COME FROM?", "text": "Parameter-state ratio. We hypothesised that the ratio between the size of the state and the number of parameters may explain the differences between the smaller improvements for continuous normalising flows and the substantial improvements for Hamiltonian neural networks and neural CDEs (corresponding to the proportion of channels that can no longer cause rejections). Given a model with p parameters and state z(t) ∈ R^d, seminorms reduce the number of rejection-causing channels from 1 + 2d + p to just 1 + 2d.

We plot the (log) ratio (1 + 2d + p)/(1 + 2d) against the percentage improvement in backward steps. See Figure 6. We see that this hypothesis partly explains things: broadly speaking, continuous normalising flows (which have O(10^3) state variables) see small improvements, whilst the neural CDEs and Hamiltonian neural networks (with O(10^1) state variables) see larger improvements.

There are still many differences between problems (tolerances, complexity of vector field, and so on); see for example the variation amongst the continuous normalising flows. Thus for any given problem this serves more as a rule-of-thumb.

Number of accepted/rejected steps. Next we investigate the improvements for each problem, broken down by accepted and rejected steps (over a single backward pass). See Table 4. We begin by noting that the absolute numbers of accepted and rejected steps nearly always decreased. (In the case of modelling Hamiltonian dynamics, the seminorm actually produced zero rejected steps.) Beyond this, the proportion of rejected steps nearly always decreased as well.

This highlights that the benefit of seminorms is two-fold. First, that the number of accepted steps is reduced indicates that use of seminorms means systems are treated as smaller and so easier to solve, allowing larger step sizes. Second, the reduced proportion of rejected steps means that the differential equation solve is more efficient, with fewer wasted steps." }, { "heading": "4 RELATED WORK", "text": "Several authors have sought techniques for speeding up training of neural differential equations.

Ghosh et al. (2020) (much like this work) make a conceptually simple change. They regularise the neural ODE by randomly selecting the terminal integration time. We note that this does not seem applicable to Neural CDEs as well as Neural ODEs.

Finlay et al. (2020) and Kelly et al. (2020) investigate regularising the higher-order derivatives of the model; the idea is that this encourages simpler trajectories that are easier to integrate. However, this improvement must be weighed against the extra cost of computing the regularisation term. Thus whilst Finlay et al. (2020) describe speed improvements for CNFs, Kelly et al. (2020) describe slower training as a result of this extra cost.

Quaglino et al. (2020) describe speeding up training via spectral approximation. Unfortunately, this method is rather involved, and to our knowledge does not have a reference implementation.

When backpropagating through the internal operations of the solver (so not the adjoint method used here), Zhuang et al. (2020) note that backpropagating through rejected steps is unnecessary.

Massaroli et al. (2020b) discuss hypersolvers, which are hybrids of neural networks and numerical differential equation solvers, trained to efficiently solve a desired Neural ODE. However the Neural ODE changes during training, making these most useful after training, to speed up inference.

Dupont et al. (2019) and Massaroli et al. (2020a) note that adding extra dimensions to the Neural ODE improves expressivity and training. Indeed this has arguably now become a standard part of the model; we include this via the linear ℓ1 in equation (1).

In the context of Continuous Normalizing Flows, Grathwohl et al. (2019), Chen & Duvenaud (2019), and Onken et al. (2020) have proposed specific architectural choices to reduce training cost.

Chen et al. (2018) mention regularizing the differential equation using weight decay; we include this throughout our experiments.

Independent of modifications to the model or training procedure, Rackauckas & Nie (2017) and Rackauckas et al. (2019) claim speed-ups simply through an improved implementation." }, { "heading": "5 CONCLUSION", "text": "We have introduced a method for reducing the number of function evaluations required to train a neural differential equation, by reducing the number of rejected steps during the adjoint (backward) pass. The method is simple to implement, straightforward to integrate into existing codebases, and offers substantial speed-ups across a variety of applications with no observed downsides." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 NEURAL CONTROLLED DIFFERENTIAL EQUATIONS

We use the same setup and hyperparameters as in Kidger et al. (2020). The loss function is cross entropy. The optimiser used was Adam (Kingma & Ba, 2015), with learning rate 1.6 × 10^−3, batch size of 1024, and 0.01-weighted L2 weight regularisation, trained for 200 epochs. The number of hidden channels (the size of z) is 90, and f is parameterised as a feedforward network, of width 40 with 4 hidden layers, ReLU activation, and tanh final activation.

B.2 CONTINUOUS NORMALISING FLOWS

We follow Grathwohl et al. (2019) and Finlay et al. (2020). The loss function is the negative log likelihood −log(p(z(T) = x)) of equation (6). The optimiser used was Adam, with learning rate 10^−3 and batch size 256, trained for 100 epochs. Relative and absolute tolerances of the solver are both taken to be 10^−5. We used a multi-scale architecture as in Grathwohl et al. (2019) and Finlay et al. (2020), with 4 blocks of CNFs at 3 different scales.

B.3 SYMPLECTIC ODE-NET

The optimiser used was Adam, with learning rate 10^−3, batch size of 256, and 0.01-weighted L2 weight regularisation. Relative and absolute tolerances of the solver are both taken to be 10^−4. We use the same architecture as in Zhong et al. (2020): this involves parameterising H as a sum of kinetic and potential energy terms. The details of Symplectic ODE-Net are a little involved, and we refer the reader to Zhong et al. (2020) for full details." } ]
2020
null
SP:43b0b8d8e0c30180cb627ef62898028f5e7dfec8
[ "By integrating Adaboosting and a fully connected layer, this paper provides a new graph neural network structure. The objective of this paper is to design a deeper graph models in an efficient way for better performance. The computational efficiency and performance of the proposed algorithm are evaluated using the task of node property prediction on several public datasets. This is a new variant of GNN, but the quality this paper is lower than the expectation regarding to the clarity and organisation. " ]
The design of deep graph models remains to be investigated, and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of the network; the proposed graph convolutional network, called AdaGCN (Adaboosting Graph Convolutional Network), has the ability to efficiently extract knowledge from high-order neighbors of the current nodes and then integrate knowledge from different hops of neighbors into the network in an AdaBoost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all “layers” and is recursively optimized, which is similar to an RNN. Besides, we also theoretically establish the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN.
[ { "affiliations": [], "name": "Ke Sun" }, { "affiliations": [], "name": "Zhanxing Zhu" }, { "affiliations": [], "name": "Zhouchen Lin" } ]
[ { "authors": [ "Sami Abu-El-Haija", "Amol Kapoor", "Bryan Perozzi", "Joonseok Lee" ], "title": "N-gcn: Multi-scale graph convolution for semi-supervised node classification", "venue": "International Workshop on Mining and Learning with Graphs (MLG),", "year": 2018 }, { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Hrayr Harutyunyan", "Nazanin Alipourfard", "Kristina Lerman", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolution architectures via sparsified neighborhood mixing", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ], "title": "Greedy layerwise learning can scale to imagenet", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Eliav Buchnik", "Edith Cohen" ], "title": "Bootstrapped graph diffusions: Exposing the power of nonlinearity", "venue": "In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems,", "year": 2018 }, { "authors": [ "Peter Bühlmann", "Bin Yu" ], "title": "Boosting with the l 2 loss: regression and classification", "venue": "Journal of the American Statistical Association,", "year": 2003 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Santo Fortunato" ], "title": "Community detection in graphs", "venue": "Physics reports,", "year": 2010 }, { "authors": [ "Yoav Freund", "Robert Schapire", "Naoki Abe" ], "title": "A short introduction to boosting", "venue": "Journal-Japanese Society For Artificial Intelligence,", "year": 1999 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets. 
In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2014 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Trevor Hastie", "Saharon Rosset", "Ji Zhu", "Hui Zou" ], "title": "Multi-class adaboost", "venue": "Statistics and Its Interface,", "year": 2009 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Furong Huang", "Jordan Ash", "John Langford", "Robert Schapire" ], "title": "Learning deep resnet blocks sequentially using boosting theory", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Wenxin Jiang" ], "title": "Process consistency for adaboost", "venue": "The Annals of Statistics,", "year": 2004 }, { "authors": [ "Ming Jin", "Heng Chang", "Wenwu Zhu", "Somayeh Sojoudi" ], "title": "Power up! robust graph convolutional network against evasion attacks based on graph powering", "venue": null, "year": 1905 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Guohao Li", "Matthias Müller", "Ali Thabet", "Bernard Ghanem" ], "title": "Can gcns go as deep as cnns", "venue": null, "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning. 
Association for the Advancement of Artificial Intelligence (AAAI), 2018", "venue": null, "year": 2018 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard S Zemel" ], "title": "Lanczosnet: Multi-scale deep graph convolutional networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Meng Liu", "Hongyang Gao", "Shuiwang Ji" ], "title": "Towards deeper graph neural networks", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Aurelie C Lozano", "Sanjeev R Kulkarni", "Robert E Schapire" ], "title": "Convergence and consistency of regularized boosting with weakly dependent observations", "venue": "IEEE Transactions on Information Theory,", "year": 2013 }, { "authors": [ "Sitao Luan", "Mingde Zhao", "Xiao-Wen Chang", "Doina Precup" ], "title": "Break the ceiling: Stronger multiscale deep graph convolutional networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Gábor Lugosi", "Nicolas Vayatis" ], "title": "On the bayes-risk consistency of boosting methods", "venue": null, "year": 2001 }, { "authors": [ "Shie Mannor", "Ron Meir", "Tong Zhang" ], "title": "Greedy algorithms for classification–consistency, convergence rates, and adaptivity", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Andrew Kachites McCallum", "Kamal Nigam", "Jason Rennie", "Kristie Seymore" ], "title": "Automating the construction of internet portals with machine learning", "venue": "Information Retrieval,", "year": 2000 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Optimization and generalization analysis of transduction through gradient boosting and application to multi-scale graph neural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Omri Puny", "Heli Ben-Hamu", "Yaron Lipman" ], "title": "From graph low-rank global attention to 2-fwl approximation", "venue": "ICML Workshop Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "In Relational Representation Learning Workshop", "year": 2018 }, { "authors": [ "Ke Sun", "Zhanxing Zhu", "Zhouchen Lin" ], "title": "Multi-stage self-supervised learning for graph convolutional networks. 
Association for the Advancement of Artificial Intelligence (AAAI), 2019", "venue": null, "year": 2019 }, { "authors": [ "Ilya O Tolstikhin", "Sylvain Gelly", "Olivier Bousquet", "Carl-Johann Simon-Gabriel", "Bernhard Schölkopf" ], "title": "Adagan: Boosting generative models", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning method", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Tong Zhang", "Bin Yu" ], "title": "Boosting with early stopping: Convergence and consistency", "venue": "The Annals of Statistics,", "year": 2005 }, { "authors": [ "Jun Zhu", "Jiaming Song", "Bei Chen" ], "title": "Max-margin nonparametric latent feature models for link prediction", "venue": "arXiv preprint arXiv:1602.07428,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, research related to learning on graph structural data has gained considerable attention in machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014) have demonstrated their remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow model architectures with only two or three layers. The shallow design of GCN appears counterintuitive as deep versions of these models, in principle, have access to more information, but perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails, showing that by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and makes them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN is inclined to converge to certain value (Li et al., 2018), making it harder for classification. These shallow model architectures restricted by oversmoothing issue ∗Corresponding author. 1Code is available at https://github.com/datake/AdaGCN.\nlimit their ability to extract the knowledge from high-order neighbors, i.e., features from remote hops of neighbors for current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions.\nThere are some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) that tried to address this issue partially, and the discussion can refer to Appendix A.1. By contrast, we argue that a key direction of constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and keeps competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018) and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated boosting algorithm into the training of GAN (Goodfellow et al., 2014).\nIn this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. Firstly, in pursuit of the introduction of AdaBoost framework, we refine the type of graph convolutions and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine these information in an AdaBoost manner with iterative updating of the node weights. Also, we compare our AdaGCN with existing methods from the perspective of both architectural difference and feature representation power to show the benefits of our method. 
Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and computational advantage over other alternatives." }, { "heading": "2 OUR APPROACH: ADAGCN", "text": "" }, { "heading": "2.1 ESTABLISHMENT OF ADAGCN", "text": "Consider an undirected graph G = (V, E) with N nodes vi ∈ V , edges (vi, vj) ∈ E . A ∈ RN×N is the adjacency matrix with corresponding degree matrix Dii = ∑ j Aij . In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:\nZ =  ReLU(ÂXW (0))W (1) (1)\nwhere Z ∈ RN×K is the final embedding matrix (output logits) of nodes before softmax and K is the number of classes. X ∈ RN×C denotes the feature matrix where C is the input dimension.  = D̃− 1 2 ÃD̃− 1 2 where à = A + I and D̃ is the degree matrix of Ã. In addition, W (0) ∈ RC×H is the input-to-hidden weight matrix for a hidden layer with H feature maps and W (1) ∈ RH×K is the hidden-to-output weight matrix.\nOur key motivation of constructing deep graph models is to efficiently explore information of highorder neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking l layers’ parameter matrix W (i), i = 0, ..., l − 1, which is definitely costly in computation. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that the output can only contain the stationary information of graph structure and loses all the local information in nodes for being smoothed if we simply deepen GCN. Intuitively, the desirable representation of node features does not necessarily need too many nonlinear transformation f applied on them. This is simply due to the fact that the feature of each node is normally one-dimensional sparse vector rather than multi-dimensional data structures, e.g., images, that intuitively need deep convolution network to extract high-level representation for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural networks is a better choice in the implementation. Similarly, our AdaGCN also follows this direction by choosing an appropriate f in each layer rather than directly deepen GCN layers.\nThus, we propose to remove ReLU to avoid the expensive joint optimization of multiple parameter matrices. Similarly, Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this prac-\ntice, arguing that nonlinearity between GCN layers is not crucial and the majority of the benefits arises from local weighting of neighboring features. Then the simplified graph convolution is:\nZ = ÂlXW (0)W (1) · · ·W (l−1) = ÂlXW̃, (2)\nwhere we collapse W (0)W (1) · · ·W (l−1) as W̃ and Âl denotes  to the l-th power. In particular, one crucial impact of ReLU in GCN is to accelerate the convergence of matrix multiplication since the ReLU is a contraction mapping intuitively. Thus, the removal of ReLU operation could also alleviate the oversmoothing issue, i.e. slowering the convergence of node embedding to indistinguishable ones (Li et al., 2018). 
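To make the propagation concrete: Eq. 2 only ever needs Â^l X, which can be accumulated with one sparse multiplication per hop. The following is a minimal sketch under an assumed toy graph, not the authors' released code.

import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    # A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}, as defined in Section 2.1.
    A_tilde = A + sp.eye(A.shape[0])
    d_inv_sqrt = np.asarray(A_tilde.sum(1)).flatten() ** -0.5
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Toy 3-node path graph and random node features (assumptions for the example).
A = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float))
X = np.random.randn(3, 4)
A_hat = normalized_adjacency(A)

# One sparse multiplication per hop: the l-th base classifier sees A_hat^l X.
hops = [X]
for l in range(1, 4):
    hops.append(A_hat @ hops[-1])  # dense result, computed once up front

print([h.shape for h in hops])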
Additionally, without ReLU this simplified graph convolution is also able to avoid the aforementioned joint optimization over multiple parameter matrices, resulting in computational benefits. Nevertheless, we find that this type of stacked linear transformation from graph convolution has insufficient power in representing the information of high-order neighbors, which is revealed in the experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function f_θ, e.g., a two-layer fully-connected neural network, to replace the linear transformation W̃ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:

Z^{(l)} = f_θ(Â^l X), (3)

where Z^{(l)} represents the final embedding matrix (output logits before softmax) after the l-th base classifier in AdaGCN. This formulation also implies that the l-th base classifier in AdaGCN extracts knowledge from the features of the current nodes and their l-th hop of neighbors. Since the function of the l-th base classifier in AdaGCN is similar to that of the l-th layer in traditional GCN-based methods that directly stack many graph convolutional layers, we regard the whole l-th base classifier as the l-th layer of AdaGCN. As for the realization of multi-class AdaBoost, we apply the SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of two-class AdaBoost that adaptively combines weak classifiers.

As illustrated in Figure 1, we apply the base classifier f_θ^{(l)} to extract knowledge from the current node features and the l-th hop of neighbors by minimizing the current weighted loss. Then we directly compute the weighted error rate err^{(l)} and the corresponding weight α^{(l)} of the current base classifier f_θ^{(l)} as follows:

err^{(l)} = ∑_{i=1}^n w_i I(c_i ≠ f_θ^{(l)}(x_i)) / ∑_{i=1}^n w_i,
α^{(l)} = log((1 − err^{(l)}) / err^{(l)}) + log(K − 1), (4)

where w_i denotes the weight of the i-th node and c_i represents the category of the i-th node. To attain a positive α^{(l)}, we only need (1 − err^{(l)}) > 1/K, i.e., the accuracy of each weak classifier should be better than random guessing (Hastie et al., 2009). This can easily be met, guaranteeing that the weights are updated in the right direction. Then we adjust the node weights by increasing the weights of incorrectly classified nodes:

w_i ← w_i · exp(α^{(l)} · I(c_i ≠ f_θ^{(l)}(x_i))), i = 1, ..., n. (5)

After re-normalizing the weights, we compute Â^{l+1} X = Â · (Â^l X) to sequentially extract knowledge from the (l+1)-th hop of neighbors in the following base classifier f_θ^{(l+1)}. One crucial point of AdaGCN is that, different from traditional AdaBoost, we only define one f_θ, e.g., a two-layer fully-connected neural network, which in practice is recursively optimized in each base classifier, similar to a recurrent neural network. This also indicates that the parameters from the last base classifier are leveraged as the initialization of the next base classifier, which coincides with our intuition that the (l+1)-th hop of neighbors is directly connected through the l-th hop of neighbors. The efficacy of this kind of layer-wise training has been similarly verified in (Belilovsky et al., 2018) recently.
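As an illustrative aside, the weighted error, classifier weight, and node re-weighting of Eqs. 4 and 5 amount to a few lines. This is a hedged NumPy sketch with made-up toy predictions, not the actual training loop; it assumes 0 < err < 1.

import numpy as np

def samme_update(preds, labels, w, K):
    # Eq. (4): weighted error rate and classifier weight alpha.
    miss = (preds != labels).astype(float)
    err = (w * miss).sum() / w.sum()
    alpha = np.log((1 - err) / err) + np.log(K - 1)
    # Eq. (5): up-weight misclassified nodes, then re-normalize.
    w = w * np.exp(alpha * miss)
    return alpha, w / w.sum()

K, n = 3, 6
labels = np.array([0, 1, 2, 0, 1, 2])
preds  = np.array([0, 1, 2, 0, 2, 1])  # two mistakes
w = np.full(n, 1 / n)
alpha, w = samme_update(preds, labels, w, K)
print(alpha)  # positive, since accuracy 4/6 exceeds random guessing 1/K
print(w)      # the two misclassified nodes now carry larger weight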
Further, we combine the predictions from the different orders of neighbors in an AdaBoost way to obtain the final prediction C(A, X):

C(A, X) = argmax_k ∑_{l=0}^L α^{(l)} f_θ^{(l)}(Â^l X). (6)

Finally, we obtain the concise form of AdaGCN:

Â^l X = Â · (Â^{l−1} X),
Z^{(l)} = f_θ^{(l)}(Â^l X),
Z = AdaBoost(Z^{(l)}). (7)

Note that f_θ is non-linear, rather than linear as in SGC (Wu et al., 2019), to guarantee the representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of an RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for all f_θ^{(l)}, their parameters are different, which differs from a vanilla RNN. We provide a detailed description of our algorithm in Section 3." }, { "heading": "2.2 COMPARISON WITH EXISTING METHODS", "text": "Architectural Difference. As illustrated in Figures 1 and 2, there is an apparent difference among the architectures of GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), Jumping Knowledge (JK) (Xu et al., 2018b) and AdaGCN. Compared with these existing graph convolutional approaches, which sequentially convey the intermediate result Z^{(l)} to compute the final prediction, our AdaGCN transmits the weights of nodes w_i and the aggregated features of different hops of neighbors Â^l X. More importantly, in AdaGCN the embedding Z^{(l)} is independent of the flow of computation in the network, and the sparse adjacency matrix Â is also not directly involved in the computation of the individual network, because we compute Â^{l+1} X in advance and then feed it, instead of Â, into the classifier f_θ^{(l+1)}, thus yielding a significant computation reduction, which will be discussed further in Section 3.

Connection with PPNP and APPNP. We also establish a strong connection between AdaGCN and the previous state-of-the-art Personalized Propagation of Neural Predictions (PPNP) and Approximate PPNP (APPNP) (Klicpera et al., 2018) methods, which leverage personalized PageRank to reconstruct graph convolutions in order to use information from a large and adjustable neighborhood. The analysis is summarized in the following Proposition 1; the proof can be found in Appendix A.3.

Proposition 1. Suppose that γ is the teleport factor. Let the matrix sequence {Z^{(l)}} be the output of each layer l in AdaGCN. Then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ on {Z^{(l)}} in a sharing-parameters version, and its approximate version APPNP can be viewed as the approximated form of the EMA with a limited number of terms.

Proposition 1 illustrates that AdaGCN can be viewed as an adaptive form of APPNP, formulated as:

Z = ∑_{l=0}^L α^{(l)} f_θ^{(l)}(Â^l X). (8)

Specifically, the first discrepancy between AdaGCN and APPNP lies in the adaptive coefficients α^{(l)} in AdaGCN, which are determined by the error of the l-th base classifier f_θ^{(l)}, rather than the fixed, exponentially decreasing weights in APPNP. In addition, AdaGCN employs classifiers f_θ^{(l)} with different parameters to learn the embeddings of different orders of neighbors, while APPNP shares these parameters in its form. We verify this benefit of our approach in the experiments shown in Section 4.2.

Comparison with MixHop. MixHop (Abu-El-Haija et al., 2019) applies a similar kind of graph convolution by repeatedly mixing feature representations of neighbors at various distances. Proposition 2 proves that both AdaGCN and MixHop are able to represent feature differences among neighbors, while previous GCN-based methods cannot. The proof can be found in Appendix A.4. Recap the definition of general layer-wise neighborhood mixing (Abu-El-Haija et al., 2019) as follows:

Definition 1 (General layer-wise Neighborhood Mixing). A graph convolution network has the ability to represent layer-wise neighborhood mixing if, for any b_0, b_1, ..., b_L, there exists an injective mapping f with a setting of its parameters such that the output of this graph convolution network can express the following formula:

f( ∑_{l=0}^L b_l σ(Â^l X) ). (9)

Proposition 2. AdaGCN as defined by our proposed approach (Eq. 7) is capable of representing general layer-wise neighborhood mixing, i.e., it meets Definition 1.

Albeit the similarity, AdaGCN is distinguished from MixHop in many aspects. Firstly, MixHop concatenates all outputs from each order of neighbors, while we combine these predictions in an AdaBoost way, which has a theoretical generalization guarantee based on boosting theory (Hastie et al., 2009). Oono & Suzuki (2020) have recently derived optimization and generalization guarantees for multi-scale GNNs, serving as a theoretical backbone for AdaGCN. Meanwhile, MixHop allows full linear mixing of the features of different orders of neighbors, while AdaGCN utilizes a different nonlinear transformation f_θ^{(l)} in each layer, enjoying stronger expressive power." }, { "heading": "3 ALGORITHM", "text": "In practice, we employ SAMME.R (Hastie et al., 2009), the soft version of SAMME, in AdaGCN. The SAMME.R (R for Real) algorithm (Hastie et al., 2009) leverages real-valued confidence-rated predictions, i.e., weighted probability estimates, rather than the predicted hard labels of SAMME, in the prediction combination; this has demonstrated better generalization and faster convergence than SAMME. We elaborate the final version of AdaGCN in Algorithm 1. We provide an analysis of the choice of the model depth L in Appendix A.7, and elaborate the computational advantage of AdaGCN in the following.

Algorithm 1 AdaGCN based on the SAMME.R Algorithm
Input: Feature matrix X, normalized adjacency matrix Â, a two-layer fully-connected network f_θ, number of layers L and number of classes K.
Output: Final combined prediction C(A, X).
1: Initialize the node weights w_i = 1/n, i = 1, 2, ..., n on the training set, the neighbor feature matrix X̂^{(0)} = X, and the classifier f_θ^{(−1)}.
2: for l = 0 to L do
3: Fit the graph convolutional classifier f_θ^{(l)} on the neighbor feature matrix X̂^{(l)}, initialized from f_θ^{(l−1)}, by minimizing the current weighted loss.
4: Obtain the weighted probability estimates p^{(l)}(X̂^{(l)}) for f_θ^{(l)}: p_k^{(l)}(X̂^{(l)}) = Softmax(f_θ^{(l)}(c = k | X̂^{(l)})), k = 1, ..., K.
5: Compute the individual prediction h_k^{(l)} for the current graph convolutional classifier f_θ^{(l)}: h_k^{(l)}(X̂^{(l)}) ← (K − 1) ( log p_k^{(l)}(X̂^{(l)}) − (1/K) ∑_{k′} log p_{k′}^{(l)}(X̂^{(l)}) ), where k = 1, ..., K.
6: Adjust the node weights w_i for each node x_i with label y_i on the training set: w_i ← w_i · exp( −((K − 1)/K) y_i^⊤ log p^{(l)}(x_i) ), i = 1, ..., n.
7: Re-normalize all weights w_i.
8: Update the (l+1)-hop neighbor feature matrix X̂^{(l+1)}: X̂^{(l+1)} = Â X̂^{(l)}.
9: end for
10: Combine all predictions h_k^{(l)}(X̂^{(l)}) for l = 0, ..., L: C(A, X) = argmax_k ∑_{l=0}^L h_k^{(l)}(X̂^{(l)}).
11: return Final combined prediction C(A, X).

Analysis of Computational Advantage. Due to the similarity of its graph convolution to that of MixHop (Abu-El-Haija et al., 2019), AdaGCN requires no additional memory or computational complexity compared with previous GCN models. Meanwhile, our approach enjoys a huge computational advantage over GCN-based models, e.g., PPNP and APPNP, stemming from excluding the additional computation involving sparse tensors, such as the sparse tensor multiplication between Â and other dense tensors, from the forward and backward propagation of the neural network. Specifically, there are only L sparse tensor operations for an AdaGCN model with L layers, i.e., Â^l X = Â · (Â^{l−1} X) for each layer l. This operation in each layer yields a dense tensor B^l = Â^l X for the l-th layer, which is then fed into the computation of a two-layer fully-connected network, i.e., f_θ^{(l)}(B^l) = ReLU(B^l W^{(0)}) W^{(1)}. Because the dense tensor B^l has been computed in advance, there is no other computation related to sparse tensors in the repeated forward and backward propagation procedures while training the neural network. By contrast, this repeated computation involving sparse tensors in GCN-based models, e.g., GCN: Â ReLU(Â X W^{(0)}) W^{(1)}, is highly expensive. AdaGCN avoids these additional sparse tensor operations in the neural network and thus attains huge computational efficiency. We demonstrate this viewpoint in Section 4.3." }, { "heading": "4 EXPERIMENTS", "text": "Experimental Setup. We select five commonly used graphs: CiteSeer, Cora-ML (Bojchevski & Günnemann, 2018; McCallum et al., 2000), PubMed (Sen et al., 2008), MS-Academic (Shchur et al., 2018) and Reddit. Dataset statistics are summarized in Table 1. Recent graph neural networks suffer from overfitting to a single splitting of training, validation and test datasets (Klicpera et al., 2018). To address this problem, inspired by (Klicpera et al., 2018), we test all approaches on multiple random splits and initializations to conduct a rigorous study. Detailed dataset splittings are provided in Appendix A.6." }, { "heading": "Citeseer", "text": "Basic Setting of Baselines and AdaGCN. We compare AdaGCN with GCN (Kipf & Welling, 2017) and Simple Graph Convolution (SGC) (Wu et al., 2019) in Figure 3. In Table 2, we employ the same baselines as (Klicpera et al., 2018): V.GCN (vanilla GCN) (Kipf & Welling, 2017) and GCN with our early stopping, N-GCN (network of GCN) (Abu-El-Haija et al., 2018a), GAT (Graph Attention Networks) (Veličković et al., 2018), BT.FP (bootstrapped feature propagation) (Buchnik & Cohen, 2018) and JK (jumping knowledge networks with concatenation) (Xu et al., 2018b). For the computation comparison, we additionally compare AdaGCN with FastGCN (Chen et al., 2018) and GraphSAGE (Hamilton et al., 2017). We take the results of the baselines from (Klicpera et al., 2018), and the implementation of AdaGCN is adapted from APPNP. For AdaGCN, after a line search on the hyper-parameters, we set h = 5000 hidden units for the first four datasets, except MS-Academic with h = 3000, and 15, 12, 20 and 5 layers respectively, owing to the different graph structures. In addition, we set the dropout rate to 0 for the Citeseer and Cora-ML datasets and 0.2 for the other datasets, with 5 × 10^−3 L2 regularization on the first linear layer. We set weight decay to 1 × 10^−3 for Citeseer and 1 × 10^−4 for the others. More detailed model parameters and an analysis of our early stopping mechanism can be found in Appendix A.6." }, { "heading": "4.1 DESIGN OF DEEP GRAPH MODELS TO CIRCUMVENT OVERSMOOTHING EFFECT", "text": "It is well-known that GCN suffers from oversmoothing (Li et al., 2018) as more graph convolutions are stacked. However, combining knowledge from each layer to design deep graph models is a reasonable method to circumvent the oversmoothing issue. In our experiment, we aim to explore the prediction performance of GCN, GCN with residual connections (Kipf & Welling, 2017), SGC and our AdaGCN with a growing number of layers.

From Figure 3, it can easily be observed that oversmoothing leads to a rapid decrease in accuracy for GCN (blue line) as the number of layers increases. In contrast, the speed of smoothing (green line) of SGC is much slower than that of GCN, due to the lack of ReLU, as analyzed in Section 2.1. Similarly, GCN with residual connections (yellow line) partially mitigates the oversmoothing effect of the original GCN, but fails to take advantage of information from different orders of neighbors to constantly improve the prediction performance. Remarkably, AdaGCN (red line) is able to consistently enhance the performance as the number of layers increases, across all three datasets. This implies that AdaGCN can efficiently incorporate knowledge from different orders of neighbors and circumvent the oversmoothing of the original GCN in the process of constructing deep graph models. In addition, the fluctuation in performance for AdaGCN is much lower than for GCN, especially when the number of layers is large." }, { "heading": "4.2 PREDICTION PERFORMANCE", "text": "We conduct a rigorous study of AdaGCN on four datasets under multiple splittings of the data. The results in Table 2 suggest the state-of-the-art performance of our approach, and the improvement compared with APPNP validates the benefit of the adaptive form of our AdaGCN. More rigorously, p-values under a paired t-test demonstrate the significance of the improvement of our method.

In realistic settings, graphs usually have different numbers of labeled nodes, and thus it is necessary to investigate the robustness of methods under different numbers of labeled nodes. Here we utilize label rates to measure the different numbers of labeled nodes and then sample the corresponding number of labeled nodes per class on each graph. Table 3 presents the consistent state-of-the-art performance of AdaGCN under different label rates. An interesting manifestation of Table 3 is that AdaGCN yields more improvement at lower label rates compared with APPNP, showing more efficiency on graphs with few labeled nodes. Inspired by the Layer Effect on graphs (Sun et al., 2019), we argue that the increase of layers in AdaGCN can result in more benefits for the efficient propagation of label signals, especially on graphs with limited labeled nodes.

More rigorously, we additionally conduct a comparison on a larger dataset, i.e., Reddit. We choose the best number of layers as 4, due to the fact that AdaGCN with a larger number of layers tends to suffer from overfitting on this relatively simple dataset (with a high label rate of 65.9%). Table 4 suggests that AdaGCN can still outperform other typical baselines, including V.GCN, PPNP and APPNP. More experimental details can be found in Appendix A.6." }, { "heading": "4.3 COMPUTATIONAL EFFICIENCY", "text": "Without the additional computational cost involving sparse tensors in the propagation of the neural network, AdaGCN exhibits huge computational efficiency. The left part of Figure 4 shows that AdaGCN has the fastest per-epoch training time in comparison with other methods, except for comparable performance with FastGCN on Pubmed. In addition, there is some inconsistency in the computation of FastGCN, with the fastest speed on Pubmed but slower speed than GCN on the Cora-ML and MS-Academic datasets. Furthermore, with multiple power iterations involving sparse tensors, APPNP unfortunately has a relatively expensive computational cost. It should be noted that this computational advantage of AdaGCN is more significant on large datasets, e.g., Reddit. Table 4 demonstrates that AdaGCN has the potential to perform much faster on larger datasets.

Besides, we explore the computational cost of ReLU and the sparse adjacency tensor with respect to the number of layers in the right part of Figure 4. We focus on comparing AdaGCN with SGC and GCN, as other GCN-based methods, such as GraphSAGE and APPNP, behave similarly to GCN. In particular, we can easily observe that both SGC (green line) and GCN (red line) show a linearly increasing tendency, and GCN yields a larger slope, arising from ReLU and more parameters. For SGC, stacking more layers directly is undesirable regarding the computation. Thus, a limited number of SGC layers is preferable, with more advanced optimization techniques (Wu et al., 2019). It also shows that the computational cost involving sparse matrices in neural networks plays a dominant role in the total cost, especially when the number of layers is large enough. In contrast, our AdaGCN (pink line) displays an almost constant trend as the number of layers increases, simply because it excludes the extra computation involving the sparse tensor Â, such as ⋯ Â ReLU(Â X W^{(0)}) W^{(1)} ⋯, from the process of training the neural networks. AdaGCN maintains the updating of the parameters of f_θ^{(l)} with a fixed architecture in each layer during the layer-wise optimization, therefore displaying a nearly constant computational cost within each epoch, although more epochs are normally needed for the entire layer-wise training. We leave the analysis of the exact time and memory complexity of AdaGCN as future work, but boosting-based algorithms, including AdaGCN, are memory-efficient (Oono & Suzuki, 2020)." }, { "heading": "5 DISCUSSIONS AND CONCLUSION", "text": "One potential concern is that AdaBoost (Hastie et al., 2009; Freund et al., 1999) is established on an i.i.d. hypothesis, while graphs have an inherent data-dependent property. Fortunately, the statistical convergence and consistency of boosting (Lugosi & Vayatis, 2001; Mannor et al., 2003) can still be preserved when the samples are weakly dependent (Lozano et al., 2013). More discussion can be found in Appendix A.5. In this paper, we propose a novel RNN-like deep graph neural network architecture called AdaGCN. With this delicate architecture design, our approach AdaGCN can effectively explore and exploit knowledge from different orders of neighbors in an AdaBoost way. Our work paves a way towards better combining different orders of neighbors to design deep graph models, rather than merely stacking a specific type of graph convolution." }, { "heading": "ACKNOWLEDGMENTS", "text": "Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RELATED WORKS ON DEEP GRAPH MODELS", "text": "A straightforward solution (Kipf & Welling, 2017; Xu et al., 2018b), inspired by ResNets (He et al., 2016), is to add residual connections, but this practice is unsatisfactory both in prediction performance and in computational efficiency towards building deep graph models, as shown in our experiments in Sections 4.1 and 4.3.
More recently, JK (Jumping Knowledge Networks (Xu et al., 2018b)) introduced jumping connections into the final aggregation mechanism in order to extract knowledge from different layers of graph convolutions. However, this straightforward change of the GCN architecture exhibited inconsistent empirical performance across aggregation operators, which does not demonstrate a successful construction of deep layers. In addition, the graph-powering-based method (Jin et al., 2019) implicitly leveraged more spatial information by extending classical spectral graph theory to robust graph theory, but it concentrated on defending against adversarial attacks rather than on model depth. LanczosNet (Liao et al., 2019) utilized the Lanczos algorithm to construct low-rank approximations of the graph Laplacian and can thereby exploit multi-scale information. Moreover, APPNP (Approximate Personalized Propagation of Neural Predictions (Klicpera et al., 2018)) leveraged the relationship between GCN and personalized PageRank to derive an improved global propagation scheme. Beyond these, DeepGCNs (Li et al., 2019) directly adapted residual connections, dense connections and dilated convolutions to the GCN architecture, but it mainly focused on point cloud semantic segmentation and has not demonstrated its effectiveness on typical graph tasks. Similar to our work, Deep Adaptive Graph Neural Network (DAGNN) (Liu et al., 2020) also focused on incorporating information from large receptive fields through the entanglement of representation transformation and propagation, while our work efficiently ensembles knowledge from large receptive fields in an AdaBoost manner. Other related works based on global attention models (Puny et al., 2020) and sample-based methods (Zeng et al., 2019) are also helpful for constructing deep graph models.

A.2 INSUFFICIENT REPRESENTATION POWER OF ADASGC

As illustrated in Figure 5, as the number of layers increases, AdaSGC with only linear transformations has insufficient representation power both in extracting knowledge from high-order neighbors and in combining information from different orders of neighbors, while AdaGCN exhibits a consistent improvement in performance as the layer increases." }, { "heading": "A.3 PROOF OF PROPOSITION 1", "text": "Firstly, we elaborate Proposition 1 as follows; then we provide the proof.

Suppose that γ is the teleport factor. Consider the output Z_PPNP = γ(I − (1 − γ)Â)^{−1} f_θ(X) in PPNP and Z_APPNP from its approximated version APPNP. Let the matrix sequence {Z^{(l)}} be the output of each layer l in AdaGCN; then PPNP is equivalent to the Exponential Moving Average (EMA) with exponentially decreasing factor γ, a first-order infinite impulse response filter, on {Z^{(l)}} in a parameter-sharing version, i.e., f_θ^{(l)} ≡ f_θ. In addition, APPNP, which we reformulate in Eq. 10, can be viewed as the approximated form of the EMA with a limited number of terms:

Z_APPNP = (γ ∑_{l=0}^{L−1} (1 − γ)^l Â^l + (1 − γ)^L Â^L) f_θ(X)    (10)

Proof. By the Neumann theorem, Z_PPNP can be expanded as a Neumann series:

Z_PPNP = γ(I − (1 − γ)Â)^{−1} f_θ(X) = γ ∑_{l=0}^{∞} (1 − γ)^l Â^l f_θ(X),

where the feature embedding matrices {Z^{(l)}} for each order of neighbors share the same parameters f_θ. If we relax this sharing to an adaptive, layer-dependent form and move Â^l inside f_θ, then the output Z can be approximately formulated as:

Z_PPNP ≈ γ ∑_{l=0}^{∞} (1 − γ)^l f_θ^{(l)}(Â^l X)

This relaxed version of PPNP is the Exponential Moving Average of the matrix sequence {Z^{(l)}} with exponentially decreasing factor γ.
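Before the truncation step that follows, the Neumann expansion above can be checked numerically. The snippet below is our own illustrative check, with Â a symmetrically normalized adjacency matrix whose spectral radius is at most 1, so that the series converges:

```python
import numpy as np

np.random.seed(0)
n, gamma = 6, 0.1

# Build a symmetrically normalized adjacency with self-loops,
# A_hat = D^{-1/2} (A + I) D^{-1/2}, whose spectral radius is at most 1.
A = (np.random.rand(n, n) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

X = np.random.randn(n, 3)          # stand-in for f_theta(X)

# Closed form: Z = gamma * (I - (1 - gamma) * A_hat)^{-1} X
Z_exact = gamma * np.linalg.solve(np.eye(n) - (1 - gamma) * A_hat, X)

# Truncated Neumann series: gamma * sum_{l=0}^{199} (1 - gamma)^l A_hat^l X
Z_series, term = np.zeros_like(X), X.copy()
for l in range(200):
    Z_series += gamma * (1 - gamma) ** l * term
    term = A_hat @ term

print(np.abs(Z_exact - Z_series).max())   # ~1e-10: the two forms agree
```

The agreement up to machine precision is what licenses the truncated form Z_APPNP in Eq. 10.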
Moreover, if we approximate the EMA by truncating it after L− 1 items, then the weight omitted by stopping after L − 1 items is (1 − γ)L. Thus, the approximated EMA is exactly the APPNP form:\nZAPPNP = (γ L−1∑ l=0 (1− γ)lÂl + (1− γ)LÂL)fθ(X)" }, { "heading": "A.4 PROOF OF PROPOSITION 2", "text": "Proof. We consider a two layers fully-connected neural network as f in Eq. 8, then the output of AdaGCN can be formulated as:\nZ = L∑ l=0 α(l)σ(ÂlXW (0))W (1)\nParticularly, we set W (0) = bl sign(bl)α(l) I and W (1) = sign(bl)I where sign(bl) is the signed incidence scalar w.r.t bl. Then the output of AdaGCN can be presented as:\nZ = L∑ l=0 α(l)σ(ÂlX bl sign(bl)α(l) I)sign(bl)I\n= L∑ l=0 α(l)σ(ÂlX) bl sign(bl)α(l) sign(bl)\n= L∑ l=0 blσ ( ÂlX ) The proof that GCNs-based methods are not capable of representing general layer-wise neighborhood mixing has been demonstrated in MixHop (Abu-El-Haija et al., 2019). Proposition 2 proved." }, { "heading": "A.5 EXPLANATION ABOUT CONSISTENCY OF BOOSTING ON DEPENDENT DATA", "text": "Definition 2. (β-mixing sequences.) Let σji = σ(W ) = σ(Wi,Wi+1, ...,Wj) be the σ-field generated by a strictly stationary sequence of random variablesW = (Wi,Wi+1, ...,Wj). The β-mixing coefficient is defined by:\nβW (n) = sup k\nE sup {∣∣∣P(A|σk1)− P(A)∣∣∣ : A ∈ σ∞k+n}\nThen a sequence W is called β-mixing if limn→∞βW (n) = 0. Further, it is algebraically β-mixing if there is a positive constant rβ such that βW (n) = O(n−rβ ). Definition 3. (Consistency) A classification rule is consistent for a certain distribution P if E(L(hn)) = P{hn(X) = Y } → a as n → ∞ where a is a constant. It is strongly Bayes-risk consistent if limn→∞L(hn) = a almost surely.\nUnder these definitions, the convergence and consistence of regularized boosting method on stationary βmixing sequences can be proved under mild assumptions. More details can be referred from (Lozano et al., 2013)." }, { "heading": "A.6 EXPERIMENTAL DETAILS", "text": "Early Stopping on AdaGCN. We apply the same early stopping mechanism across all the methods as (Klicpera et al., 2018) for fair comparison. Furthermore, boosting theory also has the capacity to perfectly incorporate early stopping and it has been shown that for several boosting algorithms including AdaBoost, this regularization via early stopping can provide guarantees of consistency (Zhang et al., 2005; Jiang et al., 2004; Bühlmann & Yu, 2003).\nDataset Splitting. We choose a training set of a fixed nodes per class, an early stopping set of 500 nodes and test set of remained nodes. Each experiment is run with 5 random initialization on each data split, leading to a total of 100 runs per experiment. On a standard setting, we randomly select 20 nodes per class. For the two different label rates on each graph, we select 6, 11 nodes per class on citeseer, 8, 16 nodes per class on Cora-ML, 7, 14 nodes per class on Pubmed and 8, 15 nodes per class on MS-Academic dataset.\nModel parameters. For all GCN-based approaches, we use the same hyper-parameters in the original paper: learning rate of 0.01, 0.5 dropout rate, 5× 10−4 L2 regularization weight, and 16 hidden units. For FastGCN, we adopt the officially released code to conduct our experiments. PPNP and APPNP are adapted with best setting: K = 10 power iteration steps for APPNP, teleport probability γ = 0.1 on Cora-ML, Citeseer and Pubmed, γ = 0.2 on Ms-Academic. 
In addition, we use two layers with h = 64 hidden units, apply L2 regularization with λ = 5 × 10^{−3} on the weights of the first layer, and use dropout with rate d = 0.5 on both layers and on the adjacency matrix. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10000 epochs. The implementation of AdaGCN is adapted from PPNP and APPNP; the corresponding early stopping of AdaGCN uses patience p = 300 and n = 500. Moreover, SGC is re-implemented in a straightforward way without advanced optimization, for better illustration and comparison. Other baselines adopt the same parameters as described in PPNP and APPNP.

Settings on the Reddit dataset. By repeatedly tuning the parameters of these typical methods on Reddit, we finally choose a weight decay rate of 10^{−4}, a hidden layer size of 100 and 20000 epochs for AdaGCN. For APPNP, we opt for a weight decay rate of 10^{−5}, a dropout rate of 0 and 500 epochs. V.GCN applies the same parameters as in (Kipf & Welling, 2017) and we choose 500 epochs. No approach deploys early stopping, due to the expensive computational cost on the large Reddit dataset; this also keeps the comparison fair." }, { "heading": "A.7 CHOICE OF THE NUMBER OF LAYERS", "text": "Different from the “forcible” behavior in CNNs of directly stacking many convolution layers, in our AdaGCN there is theoretical guidance on the choice of the model depth L, i.e., the number of base classifiers or layers, derived from boosting theory. Specifically, according to boosting theory, increasing L exponentially decreases the empirical loss; however, from the perspective of VC-dimension, an overly large L can yield overfitting of AdaGCN. It should be noted that deeper graph convolution layers in AdaGCN are not always better, which heavily depends on the complexity of the data. In practice, L can be determined via cross-validation. Specifically, we present a VC-dimension-based analysis to illustrate that a too large L can yield overfitting of AdaGCN. For L layers of AdaGCN, the hypothesis set is

F_L = { argmax_k ( ∑_{l=1}^{L} α^{(l)} f_θ^{(l)} ) : α^{(l)} ∈ ℝ, l ∈ [1, L] }    (11)

Then the VC-dimension of F_L can be bounded as follows in terms of the VC-dimension d of the family of base hypotheses:

VCdim(F_L) ≤ 2(d + 1)(L + 1) log₂((L + 1)e),    (12)

where e is a constant and the upper bound grows as L increases. Combined with VC-dimension generalization bounds, these results imply that larger values of L can lead to overfitting of AdaBoost. The same situation occurs in AdaGCN, which suggests that there is no need to stack too many layers in AdaGCN in order to avoid overfitting. In practice, L is typically determined via cross-validation." } ]
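To make the cross-validation guidance concrete, a minimal sketch of such a depth search follows (our own illustration; `train_fn` and `eval_fn` are hypothetical callables standing in for the actual AdaGCN training and validation-accuracy routines):

```python
def select_num_layers(train_fn, eval_fn, max_layers, patience=3):
    """Pick the AdaGCN depth L by validation accuracy, with early stopping on depth.
    train_fn(L) fits an AdaGCN with L layers; eval_fn(model) returns validation
    accuracy. Both are placeholders for the actual training/evaluation code."""
    best_L, best_acc, bad_rounds = 1, -1.0, 0
    for L in range(1, max_layers + 1):
        acc = eval_fn(train_fn(L))
        if acc > best_acc:
            best_L, best_acc, bad_rounds = L, acc, 0
        else:
            bad_rounds += 1          # deeper models stopped helping; likely overfitting
            if bad_rounds >= patience:
                break
    return best_L
```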
2021
ADAGCN: ADABOOSTING GRAPH CONVOLUTIONAL NETWORKS INTO DEEP MODELS
SP:04a93ed7a7bef0c8f8c99a1fa381cc920fbd2002
[ "This paper considers the problem of finding a nash equilibrium in two player games where each of the algorithm runs an RL algorithm. In this paper they ask the question -- which nash equilibria does the dynamics converge to in this two player game (where each player optimizes based on a policy gradient algorithm). They construct two player games with multiple nash equilibria; one is a favorable nash equilibria where both players get high rewards while the other is a less favorable nash equilibria where both player only get medium rewards. In such games they first show that in general simply running policy gradient on the natural reward function i.e., the observed payoff will not lead to the desirable nash equilibria. The goal of this paper is to ameliorate this by considering perturbations in the reward space. At a high level, the algorithm learns multiple policies on a class of games generated by sampling multiple reward functions from a family and training one policy per sampled reward function using PG. Then using an evaluation function, the best policy is picked by evaluating each of the learnt policies on the original game." ]
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, RewardRandomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found in our website: https://sites.google. com/view/staghuntrpg.
[ { "affiliations": [], "name": "Zhenggang Tang" }, { "affiliations": [], "name": "Chao Yu" }, { "affiliations": [], "name": "Boyuan Chen" }, { "affiliations": [], "name": "Huazhe Xu" }, { "affiliations": [], "name": "Xiaolong Wang" }, { "affiliations": [], "name": "Fei Fang" }, { "affiliations": [], "name": "Simon Du" }, { "affiliations": [], "name": "Yu Wang" }, { "affiliations": [], "name": "Yi Wu" } ]
[ { "authors": [ "Bo An", "Milind Tambe", "Fernando Ordonez", "Eric Shieh", "Christopher Kiekintveld" ], "title": "Refinement of strong stackelberg equilibria in security games", "venue": "In Twenty-Fifth AAAI Conference on Artificial Intelligence,", "year": 2011 }, { "authors": [ "Monica Babes", "Enrique Munoz de Cote", "Michael L Littman" ], "title": "Social reward shaping in the prisoner’s dilemma. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 3, pp. 1389–1392", "venue": "International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2008 }, { "authors": [ "Yu Bai", "Chi Jin" ], "title": "Provable self-play algorithms for competitive reinforcement learning", "venue": "arXiv preprint arXiv:2002.04017,", "year": 2020 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula, 2019", "venue": null, "year": 2019 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David Balduzzi", "Marta Garnelo", "Yoram Bachrach", "Wojciech M Czarnecki", "Julien Perolat", "Max Jaderberg", "Thore Graepel" ], "title": "Open-ended learning in symmetric zero-sum games", "venue": null, "year": 1901 }, { "authors": [ "Jeffrey S Banks", "Joel Sobel" ], "title": "Equilibrium selection in signaling", "venue": "games. Econometrica: Journal of the Econometric Society,", "year": 1987 }, { "authors": [ "B Douglas Bernheim", "Bezalel Peleg", "Michael D Whinston" ], "title": "Coalition-proof Nash equilibria", "venue": "i. concepts. Journal of Economic Theory,", "year": 1987 }, { "authors": [ "George W Brown" ], "title": "Iterative solution of games by fictitious play", "venue": "Activity analysis of production and allocation,", "year": 1951 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Antoine Cully", "Jeff Clune", "Danesh Tarapore", "Jean-Baptiste Mouret" ], "title": "Robots that can adapt like animals", "venue": "Nature, 521(7553):503–507,", "year": 2015 }, { "authors": [ "Eddie Dekel", "Drew Fudenberg", "David K Levine" ], "title": "Learning to play", "venue": "Bayesian games. 
Games and Economic Behavior,", "year": 2004 }, { "authors": [ "Sam Devlin", "Daniel Kudenko" ], "title": "Theoretical considerations of potential-based reward shaping for multi-agent systems", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "Glenn Ellison" ], "title": "Learning, local interaction, and coordination", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1993 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Christina Fang", "Steven Orla Kimbrough", "Stefano Pace", "Annapurna Valluri", "Zhiqiang Zheng" ], "title": "On adaptive emergence of trust behavior in the game of stag hunt", "venue": "Group Decision and Negotiation,", "year": 2002 }, { "authors": [ "Fei Fang", "Albert Xin Jiang", "Milind Tambe" ], "title": "Protecting moving targets with multiple mobile resources", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Jakob Foerster", "Richard Y Chen", "Maruan Al-Shedivat", "Shimon Whiteson", "Pieter Abbeel", "Igor Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 122–130. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Sébastien Forestier", "Rémy Portelas", "Yoan Mollard", "Pierre-Yves Oudeyer" ], "title": "Intrinsically motivated goal exploration processes with automatic curriculum learning", "venue": "arXiv preprint arXiv:1708.02190,", "year": 2017 }, { "authors": [ "Rong Ge", "Furong Huang", "Chi Jin", "Yang Yuan" ], "title": "Escaping from saddle points—online stochastic gradient for tensor decomposition", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Russell Golman", "Scott E Page" ], "title": "Individual and cultural learning in stag hunt games with multiple actions", "venue": "Journal of Economic Behavior & Organization,", "year": 2010 }, { "authors": [ "Dylan Hadfield-Menell", "Smitha Milli", "Pieter Abbeel", "Stuart J Russell", "Anca Dragan" ], "title": "Inverse reward design", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jiequn Han", "Ruimeng Hu" ], "title": "Deep fictitious play for finding Markovian Nash equilibrium in multi-agent games", "venue": "arXiv preprint arXiv:1912.01809,", "year": 2019 }, { "authors": [ "Jason Hartline", "Vasilis Syrgkanis", "Eva Tardos" ], "title": "No-regret learning in Bayesian games", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Johannes Heinrich", "David Silver" ], "title": "Deep reinforcement learning from self-play in imperfectinformation games", "venue": "arXiv preprint arXiv:1603.01121,", "year": 2016 }, { "authors": [ "Hengyuan Hu", "Adam Lerer", "Alex Peysakhovich", "Jakob Foerster" ], "title": "Other-play for zero-shot coordination", "venue": "arXiv preprint arXiv:2003.02979,", "year": 2020 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": 
"arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Humanlevel performance in 3D multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Nitin Kamra", "Umang Gupta", "Kai Wang", "Fei Fang", "Yan Liu", "Milind Tambe" ], "title": "Deep fictitious play for games with continuous action spaces", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Michihiro Kandori", "George J Mailath", "Rafael Rob" ], "title": "Learning, mutation, and long run equilibria in games", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1993 }, { "authors": [ "Max Kleiman-Weiner", "Mark K Ho", "Joseph L Austerweil", "Michael L Littman", "Joshua B Tenenbaum" ], "title": "Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction", "venue": "CogSci,", "year": 2016 }, { "authors": [ "Robert Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "arXiv preprint arXiv:1802.06175,", "year": 2018 }, { "authors": [ "Marc Lanctot", "Vinicius Zambaldi", "Audrunas Gruslys", "Angeliki Lazaridou", "Karl Tuyls", "Julien Pérolat", "David Silver", "Thore Graepel" ], "title": "A unified game-theoretic approach to multiagent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Joel Z Leibo", "Vinicius Zambaldi", "Marc Lanctot", "Janusz Marecki", "Thore Graepel" ], "title": "Multi-agent reinforcement learning in sequential social dilemmas", "venue": "In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems,", "year": 2017 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Richard Li", "Allan Jabri", "Trevor Darrell", "Pulkit Agrawal" ], "title": "Towards practical multi-object manipulation using relational reinforcement learning", "venue": "In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA),", "year": 2020 }, { "authors": [ "Shihui Li", "Yi Wu", "Xinyue Cui", "Honghua Dong", "Fei Fang", "Stuart Russell" ], "title": "Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Michael L Littman" ], "title": "Friend-or-foe q-learning in general-sum games", "venue": "In ICML,", "year": 2001 }, { "authors": [ "Qian Long", "Zihan Zhou", "Abhinav Gupta", "Fei Fang", "Yi Wu", "Xiaolong Wang" ], "title": "Evolutionary population curriculum for scaling multi-agent reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Anuj Mahajan", "Tabish Rashid", "Mikayel Samvelyan", "Shimon Whiteson" ], 
"title": "Maven: Multi-agent variational exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kevin R McKee", "Ian Gemp", "Brian McWilliams", "Edgar A Duéñez-Guzmán", "Edward Hughes", "Joel Z Leibo" ], "title": "Social diversity and social preferences in mixed-motive reinforcement learning", "venue": null, "year": 2002 }, { "authors": [ "Richard D McKelvey", "Thomas R Palfrey" ], "title": "Quantal response equilibria for normal form games", "venue": "Games and economic behavior,", "year": 1995 }, { "authors": [ "H Brendan McMahan", "Geoffrey J Gordon", "Avrim Blum" ], "title": "Planning in the presence of cost functions controlled by an adversary", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Roger B Myerson" ], "title": "Refinements of the Nash equilibrium concept", "venue": "International journal of game theory,", "year": 1978 }, { "authors": [ "John Nash" ], "title": "Non-cooperative games", "venue": "Annals of mathematics,", "year": 1951 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In ICML,", "year": 1999 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Julien Perolat", "Remi Munos", "Jean-Baptiste Lespiau", "Shayegan Omidshafiei", "Mark Rowland", "Pedro Ortega", "Neil Burch", "Thomas Anthony", "David Balduzzi", "Bart De Vylder" ], "title": "From Poincare recurrence to convergence in imperfect information games: Finding equilibrium via regularization", "venue": "arXiv preprint arXiv:2002.08456,", "year": 2020 }, { "authors": [ "Alexander Peysakhovich", "Adam Lerer" ], "title": "Consequentialist conditional cooperation in social dilemmas with imperfect information", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alexander Peysakhovich", "Adam Lerer" ], "title": "Prosocial learning agents solve generalized stag hunts better than selfish ones", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2043–2044. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Julia Robinson" ], "title": "An iterative method of solving a game", "venue": "Annals of mathematics,", "year": 1951 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "R Selten" ], "title": "Reexamination of the perfectness concept for equilibrium points in extensive games", "venue": "International Journal of Game Theory,", "year": 1975 }, { "authors": [ "Reinhard Selten" ], "title": "Spieltheoretische behandlung eines oligopolmodells mit nachfrageträgheit: Teil i: Bestimmung des dynamischen preisgleichgewichts", "venue": "Zeitschrift für die gesamte Staatswissenschaft/Journal of Institutional and Theoretical Economics, (H", "year": 1965 }, { "authors": [ "Claude E Shannon" ], "title": "Xxii. programming a computer for playing chess", "venue": "The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science,", "year": 1950 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Macheng Shen", "Jonathan P How" ], "title": "Robust opponent modeling via adversarial ensemble reinforcement learning in asymmetric imperfect-information games", "venue": null, "year": 1909 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Satinder P Singh", "Michael J Kearns", "Yishay Mansour" ], "title": "Nash convergence of gradient dynamics in general-sum games", "venue": "In UAI, pp", "year": 2000 }, { "authors": [ "Brian Skyrms" ], "title": "The stag hunt and the evolution of social structure", "venue": null, "year": 2004 }, { "authors": [ "Brian Skyrms", "Robin Pemantle" ], "title": "A dynamic model of social network formation", "venue": "In Adaptive networks,", "year": 2009 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ international conference on intelligent robots and systems (IROS),", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": 
"Starcraft II: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Yufei Wang", "Zheyuan Ryan Shi", "Lantao Yu", "Yi Wu", "Rohit Singh", "Lucas Joppa", "Fei Fang" ], "title": "Deep reinforcement learning for green security games with real-time information", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Mark Woodward", "Chelsea Finn", "Karol Hausman" ], "title": "Learning to interactively learn and assist", "venue": "arXiv preprint arXiv:1906.10187,", "year": 2019 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Building generalizable agents with a realistic and rich 3D environment", "venue": "arXiv preprint arXiv:1801.02209,", "year": 2018 }, { "authors": [ "Yuxin Wu", "Yuandong Tian" ], "title": "Training agent for first-person shooter game with actor-critic curriculum", "venue": null, "year": 2016 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": "In Conference on Robot Learning (CoRL),", "year": 2019 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Aaai,", "year": 2008 } ]
[ { "heading": null, "text": "We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, RewardRandomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found in our website: https://sites.google. com/view/staghuntrpg." }, { "heading": "1 INTRODUCTION", "text": "Games have been a long-standing benchmark for artificial intelligence, which prompts persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon’s initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002), to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also become critical for developing new learning algorithms for even harder challenges.\nMost recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, Nash Equilibrium (NE) (Nash, 1951), where no player could benefit from altering its strategy unilaterally, provides a general solution concept and serves as a goal for policy learning and has attracted increasingly significant interests from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works studied how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting.\nDespite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice not only because different NEs can produce drastically different payoffs but also because when facing unknown players who are trained to play an NE strategy, we can gain advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. 
Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and StarCraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.

Consider an extremely simple example, a 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure-strategy NEs exist: a “risky” cooperative equilibrium with the highest payoff for both agents and a “safe” non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails to discover the high-payoff “risky” NE with high probability. The intuition is that the neighborhood from which policies converge to the “risky” NE can be substantially small compared to the entire policy space. Therefore, an exponentially large number of exploration steps is needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR), which can help PG discover the “risky” cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game (Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG).

To further illustrate the effectiveness of RPG, we introduce three Markov games – two gridworld games and a real-world online game Agar.io. All these games have multiple NEs, including both “risky” cooperation strategies and “safe” non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the “risky” cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This “diversity-seeking” property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, w.r.t. its test-time opponent’s behavior.

We summarize our contributions as follows:

• We studied a collection of challenging multi-agent games, where the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.

• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.

• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.

• A new multi-agent environment, Agar.io, which allows complex multi-agent strategic behavior.
We released the environment to the community as a novel testbed for MARL research.

2 A MOTIVATING EXAMPLE: STAG HUNT

        Stag    Hare
Stag    a, a    c, b
Hare    b, c    d, d

Table 1: The stag-hunt game, a > b ≥ d > c.

We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau’s work, “A discourse on inequality” (Rousseau, 1984): a group of hunters are silently tracking a big stag; now a hare shows up, and each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1 with two actions for each agent, Stag (S) and Hare (H). There are two pure-strategy NEs: the Stag NE, where both agents choose S and receive a high payoff a (e.g., a = 4), and the Hare NE, where both agents choose H and receive a lower payoff d (e.g., d = 1). The Stag NE is “risky” because if one agent defects, it still receives a decent reward b (e.g., b = 3) for eating the hare alone, while the other agent, with an S action, may suffer a big loss c for being hungry (e.g., c = −10).

Formally, let A = {S, H} denote the action space, let πi(θi) denote the policy for agent i (i ∈ {1, 2}) parameterized by θi, i.e., P[πi(θi) = S] = θi and P[πi(θi) = H] = 1 − θi, and let R(a1, a2; i) denote the payoff for agent i when agent 1 takes action a1 and agent 2 takes action a2. Each agent i optimizes its expected utility Ui(π1, π2) = E_{a1∼π1, a2∼π2}[R(a1, a2; i)]. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until convergence¹: (1) estimate the gradient ∇i = ∇Ui(π1, π2) via self-play; (2) update the policies by θi ← θi + α∇i with learning rate α. Although PG is widely used in practice, the following theorem shows that in certain scenarios, unfortunately, the probability that PG converges to the Stag NE is low.

Theorem 1. Suppose a − b = ε(d − c) for some 0 < ε < 1 and initialize θ1, θ2 ∼ Unif[0, 1]. Then the probability that PG discovers the high-payoff NE is upper bounded by (ε² + 2ε)/(1 + 2ε + ε²).

Theorem 1 shows that when the risk is high (i.e., c is low and hence ε is small), the probability of finding the Stag NE via PG is very low. Note that this theorem applies to random initialization, which is standard in RL.

Remark: One needs at least N = Ω(1/ε) restarts to ensure a constant success probability.

Fig. 2 shows empirical studies: we select 4 value assignments, i.e., c ∈ {−5, −20, −50, −100} with a = 4, b = 3, d = 1, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and, as c becomes smaller, the probability of finding the Stag NE significantly decreases. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm, whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to R(a1, a2; 1) + R(a1, a2; 2). Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.

¹In general matrix games beyond stag hunt, the procedure can be cyclic as well (Singh et al., 2000).
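The gap highlighted by Theorem 1 is easy to reproduce numerically. Below is a small self-contained sketch (our own illustration, using exact expected-utility gradient ascent rather than the PPO setup of Fig. 2) that estimates how often independent policy gradient reaches the Stag NE from random initializations:

```python
import numpy as np

def pg_stag_hunt(a, b, c, d, lr=0.05, steps=2000, trials=1000, seed=0):
    """Fraction of random initializations from which exact policy gradient
    ascent on the stag-hunt payoff table converges to the Stag NE."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        t1, t2 = rng.uniform(0, 1, size=2)          # t_i = P[agent i plays Stag]
        for _ in range(steps):
            # dU_i/dt_i = t_j*(a - b) + (1 - t_j)*(c - d) for this symmetric game
            g1 = t2 * (a - b) + (1 - t2) * (c - d)
            g2 = t1 * (a - b) + (1 - t1) * (c - d)
            t1 = np.clip(t1 + lr * g1, 0.0, 1.0)
            t2 = np.clip(t2 + lr * g2, 0.0, 1.0)
        hits += (t1 > 0.99) and (t2 > 0.99)
    return hits / trials

print(pg_stag_hunt(4, 3, -10, 1))    # Stag NE reached only from a small corner region
print(pg_stag_hunt(4, 3, -100, 1))   # almost never as the risk c grows more negative
```

Re-running the same routine on payoff tables sampled from Unif[−1, 1], as in Theorem 2, and then evaluating the resulting policies on the original table is exactly the matrix-form reward randomization developed next.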
" }, { "heading": "2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME", "text": "Thm. 1 suggests that the utility function R highly influences which strategy PG learns. Taking one step further, even if a strategy is difficult to learn with a particular R, it might be easier under some other function R′. Hence, if we can define an appropriate space R over different utility functions and draw samples from R, we may discover desired novel strategies by running PG on some sampled utility function R′ and evaluating the obtained policy profile on the original game with R. We call this procedure Reward Randomization (RR).

Concretely, in the stag-hunt game, R is parameterized by 4 variables (aR, bR, cR, dR). We can define a distribution over R⁴, draw a tuple R′ = (aR′, bR′, cR′, dR′) from this distribution, and run PG on R′. Denote the original stag-hunt game, where the Stag NE is hard to discover, as R0. Reward randomization draws N perturbed tuples R1, . . . , RN, runs PG on each Ri, and evaluates each of the obtained strategies on R0. The theorem below shows it is highly likely that the population of N policy profiles obtained from the perturbed games contains the Stag NE strategy.

Theorem 2. For any Stag-Hunt game, suppose in the i-th run of RR we randomly generate aRi, bRi, cRi, dRi ∼ Unif[−1, 1] and initialize θ1, θ2 ∼ Unif[0, 1]; then with probability at least 1 − 0.6^N = 1 − exp(−Ω(N)), the aforementioned RR procedure discovers the high-payoff NE.

Here we use the uniform distribution as an example. Other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves over standard PG w.r.t. the success probability.

Remark 1: For the scenario studied in Thm. 1, to achieve a (1 − δ) success probability for some 0 < δ < 1, PG requires at least N = Ω((1/ε) log(1/δ)) random restarts. For the same scenario, RR only requires at most N = O(log(1/δ)) repetitions, which is independent of ε. When ε is small, this is a huge improvement.

Remark 2: Thm. 2 suggests that, compared with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can hardly be reached in the original game.

Note that although in Stag Hunt we particularly focus on the Stag NE, which has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games using a payoff evaluation function E(π1, π2). For example, we can set E(π1, π2) = U1(π1, π2) + U2(π1, π2) for a prosocial NE, or look for Pareto-optimal NEs by setting E(π1, π2) = βU1(π1, π2) + (1 − β)U2(π1, π2) with 0 ≤ β ≤ 1.

Algorithm 1: RPG: Reward-Randomized Policy Gradient
Input: original game M, search space R, evaluation function E, population size N;
draw samples {R(1), . . . , R(N)} from R;
{π1(i), π2(i)} ← PG on the induced games {M(R(i))}i in parallel ;  // RR phase
select the best candidate π1(k), π2(k) by k = argmax_i E(π1(i), π2(i)) ;  // evaluation phase
π1*, π2* ← fine-tune π1(k), π2(k) on M via PG (if necessary) ;  // fine-tuning phase
return π1*, π2*;" }, { "heading": "3 RPG: REWARD-RANDOMIZED POLICY GRADIENT", "text": "Herein, we extend Reward Randomization to general multi-agent Markov games. We now utilize RL terminology and consider the 2-player setting for simplicity. Extension to more agents is straightforward (Appx. 
B.3).\nConsider a 2-agent Markov game M defined by (S,O,A, R, P ), where S is the state space; O = {oi : s ∈ S, oi = O(s, i), i ∈ {1, 2}} is the observation space, where agent i receives its own observation oi = O(s; i) (in the fully observable setting, O(s, i) = s); A is the action space for each agent; R(s, a1, a2; i) is the reward function for agent i; and P (s′|s, a1, a2) is transition probability from state s to state s′ when agent i takes action ai. Each agent has a policy πi(oi; θi) which produces a (stochastic) action and is parameterized by θi. In the decentralized RL framework, each agent i optimizes its expected accumulative reward Ui(θi) = Ea1∼π1,a2∼π2 [ ∑ t γ tR(st, at1, a t 2; i)] with some discounted factor γ.\nConsider we run decentralized RL on a particular a Markov game M and the derived policy profile is (π1(θ1), π2(θ2)). The desired result is that the expected reward Ui(θi) for each agent i is maximized. We formally written this equilibrium evaluation objective as an evaluation function E(π1, π2) and therefore the goal is to find the optimal policy profile (π?1 , π ? 2) w.r.t. E. Particularly for the games we considered in this paper, since every (approximate) equilibrium we ever discovered has a symmetric payoff, we focus on the empirical performance while assume a much simplified equilibrium selection problem here: it is equivalent to define E(π1, π2) by E(π1, π2) = βU1(θ1) + (1− β)U2(θ2) for any 0 ≤ β ≤ 1. Further discussions on the general equilibrium selection problem can be found in Sec. 6. The challenge is that although running decentralized PG is a popular learning approach for complex Markov games, the derived policy profile (π1, π2) is often sub-optimal, i.e., there exists (π?1 , π ? 2) such that E(π?1 , π ? 2) > E(π1, π2). It will be shown in Sec. 5 that even using state-of-the-art exploration techniques, the optimal policies (π?1 , π ? 2) can be hardly achieved.\nFollowing the insights from Sec. 2, reward randomization can be applied to a Markov game M similarly: if the reward function in M poses difficulties for PG to discover some particular strategy, it might be easier to reach this desired strategy with a perturbed reward function. Hence, we can then define a reward function spaceR, train a population of policy profiles in parallel with sampled reward functions from R and select the desired strategy by evaluating the obtained policy profiles in the original game M . Formally, instead of purely learning in the original game M = (S,O,A, R, P ), we define a proper subspace R over possible reward functions R : S × A × A → R and use M(R′) = (S,O,A, R′, P ) to denote the induced Markov game by replacing the original reward functionRwith anotherR′ ∈ R. To apply reward randomization, we drawN samplesR(1), . . . , R(N) from R, run PG to learn (π(i)1 , π (i) 2 ) on each induced game M(R\n(i)), and pick the desired policy profile (π(k)1 , π (k) 2 ) by calculating E in the original game M . Lastly, we can fine-tune the policies π (k) 1 , π (k) 2 in M to further boost the practical performance (see discussion below). We call this learning procedure, Reward-Randomized Policy Gradient (RPG), which is summarized in Algo. 1.\nReward-function space: In general, the possible space for a valid reward function is intractably huge. 
However, in practice, almost all the games designed by human have low-dimensional reward structures based on objects or events, so that we can (almost) always formulate the reward function in a linear form R(s, a1, a2; i) = φ(s, a1, a2; i)Tw where φ(s, a1, a2; i) is a low-dimensional feature vector and w is some weight.\nA simple and general design principle for R is to fix the feature vector φ while only randomize the weight w, i.e., R = {Rw : Rw(s, a1, a2; i) = φ(s, a1, a2; i)Tw, ‖w‖∞ ≤ Cmax}. Hence, the overall search space remains a similar structure as the original game M but contains a diverse range of preferences over different feature dimensions. Notably, since the optimal strategy is invariant to the scale of the reward function R, theoretically any Cmax > 0 results in the same search space.\nHowever, in practice, the scale of reward may significantly influence MARL training stability, so we typically ensure the chosen Cmax to be compatible with the PG algorithm in use.\nNote that a feature-based reward function is a standard assumption in the literature of inverse RL (Ng et al., 2000; Ziebart et al., 2008; Hadfield-Menell et al., 2017). In addition, such a reward structure is also common in many popular RL application domains. For example, in navigation games (Mirowski et al., 2016; Lowe et al., 2017; Wu et al., 2018), the reward is typically set to the negative distance from the target location LT to the agent’s location LA plus a success bonus, so the feature vector φ(s, a) can be written as a 2-dimensional vector [‖LT − LA‖2, I(LT = LA)]; in real-time strategy games (Wu & Tian, 2016; Vinyals et al., 2017; OpenAI et al., 2019), φ is typically related to the bonus points for destroying each type of units; in robotics manipulation (Levine et al., 2016; Li et al., 2020; Yu et al., 2019), φ is often about the distance between the robot/object and its target position; in general multi-agent games (Lowe et al., 2017; Leibo et al., 2017; Baker et al., 2020), φ could contain each agent’s individual reward as well as the joint reward over each team, which also enables the representation of different prosociality levels for the agents by varying the weight w.\nFine tuning: There are two benefits: (1) the policies found in the perturbed game may not remain an equilibrium in the original game, so fine-tuning ensures convergence; (2) in practice, fine-tuning could further help escape a suboptimal mode via the noise in PG (Ge et al., 2015; Kleinberg et al., 2018). We remark that a practical issue for fine-tuning is that when the PG algorithm adopts the actor-critic framework (e.g., PPO), we need an additional critic warm-start phase, which only trains the value function while keeps the policy unchanged, before the fine-tuning phase starts. This warm-start phase significantly stabilizes policy learning by ensuring the value function is fully functional for variance reduction w.r.t. the reward function R in the original game M when estimating policy gradients." }, { "heading": "3.1 LEARNING TO ADAPT WITH DIVERSE OPPONENTS", "text": "Algorithm 2: Learning to Adapt Input: game M , policy set Π2, initial πa1 ; repeat\ndraw a policy π′2 from Π2; evaluate πa1 and π′2 on M and collect data; update θa via PG if enough data collected;\nuntil enough iterations; return πa1 (θa);\nIn addition to the final policies π?1 , π ? 2 , another benefit from RPG is that the population of N policy profiles contains diverse strategies (more in Sec. 5). 
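The loop in Algorithm 2 above is short; a minimal self-contained sketch of it follows (our own illustration — the environment, agent, and opponent objects expose hypothetical interfaces standing in for the actual PPO training stack described below):

```python
import random

def train_adaptive_agent(env, opponent_set, agent, num_iters, episodes_per_iter=10):
    """Sketch of Algorithm 2: each episode samples an opponent pi'_2 from the
    diverse set Pi_2; the adaptive policy pi_1^a is updated by PG once enough
    data is collected. All interfaces here are illustrative placeholders."""
    for _ in range(num_iters):
        batch = []
        for _ in range(episodes_per_iter):
            opponent = random.choice(opponent_set)      # draw pi'_2 from Pi_2
            obs1, obs2 = env.reset()
            done = False
            while not done:
                a1 = agent.act(obs1)                    # policy input hides opponent identity
                a2 = opponent.act(obs2)
                (next_obs1, obs2), (r1, _r2), done = env.step(a1, a2)
                # identity is recorded for the critic only (centralized training)
                batch.append((obs1, a1, r1, opponent.id))
                obs1 = next_obs1
        agent.pg_update(batch)                          # one policy-gradient step (e.g., PPO)
    return agent
```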
With a diverse set of strategies, we can build an adaptive agent by training with a random opponent policy sampled from the set per episode, so that the agent is forced to behave differently based on its opponent’s behavior. For simplicity, we consider learning an adaptive policy πa1 (θ\na) for agent 1. The procedure remains the same for agent 2. Suppose a policy population P = {π(1)2 , . . . , π (N) 2 } is obtained during the RR phase, we first construct a diverse strategy set Π2 ⊆ P that contains all the discovered behaviors from P . Then we construct a mixed strategy by randomly sampling a policy π′2 from Π2 in every training episode and run PG to learn πa1 by competing against this constructed mixed strategy. The procedure is summarized in Algo. 2. Note that setting Π2 = P appears to be a simple and natural choice. However, in practice, since P typically contains just a few strategic behaviors, it is unnecessary for Π2 to include every individual policy from P . Instead, it is sufficient to simply ensure Π2 contains at least one policy from each equilibrium in P (more details in Sec. 5.3). Additionally, this method does not apply to the one-shot game setting (i.e., horizon is 1) because the adaptive agent does not have any prior knowledge about its opponent’s identity before the game starts.\nImplementation: We train an RNN policy for πa1 (θa). It is critical that the policy input does not directly reveal the opponent’s identity, so that it is forced to identify the opponent strategy through what it has observed. On the contrary, when adopting an actor-critic PG framework (Lowe et al., 2017), it is extremely beneficial to include the identity information in the critic input, which makes critic learning substantially easier and significantly stabilizes training. We also utilize a multi-head architecture adapted from the multi-task learning literature (Yu et al., 2019), i.e., use a separate value head for each training opponent, which empirically results in the best training performance." }, { "heading": "4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS", "text": "We introduce three 2-player Markov games as testbeds for RPG. All these games have a diverse range of NE strategies including both “risky” cooperative NEs with high payoffs but hard to discover and “safe” non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are in a high level to highlight the game dynamics. More details are in Sec. 5 and App. B.\nGridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.\nMonster-Hunt contains a monster and two apples. Apples are static while the monster keeps moving towards its closest agent. If a single agent meets the monster, it loses a penalty of 2; if two agents catch the monster together, they both earn a bonus of 5. Eating an apple always raises a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity will respawn randomly. The optimal payoff can only be achieved when both agents precisely catch the monster simultaneously.\nEscalation contains a lit grid. When two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid will be lit up in the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the consecutive cooperation steps until that timestep, and the lit grid will respawn randomly. 
" }, { "heading": "4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS", "text": "We introduce three 2-player Markov games as testbeds for RPG. All of these games have a diverse range of NE strategies, including both “risky” cooperative NEs that have high payoffs but are hard to discover and “safe” non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are given at a high level to highlight the game dynamics; more details are in Sec. 5 and App. B.
Gridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.
Monster-Hunt contains a monster and two apples. The apples are static, while the monster keeps moving towards its closest agent. If a single agent meets the monster, it suffers a penalty of 2; if the two agents catch the monster together, each earns a bonus of 5. Eating an apple always yields a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity respawns randomly. The optimal payoff can only be achieved when both agents catch the monster precisely at the same time.
Escalation contains a lit grid. When the two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid lights up in the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the number of consecutive cooperation steps until that timestep, and the lit grid respawns randomly. Agents need to stay together on the lit grid to achieve the maximum payoff despite the growing penalty. There are multiple NEs: for each L, the strategy in which both agents cooperate for L steps and then jointly leave the lit grid forms an NE.
Agar.io is a popular multiplayer online game. Players control cells in a petri dish to gain as much mass as possible by eating smaller cells while avoiding being eaten by larger ones. Larger cells move slower. Each player starts with one cell but can split a sufficiently large cell into two, allowing them to control multiple cells (Wikipedia, 2020). We consider a simplified scenario (Fig. 5) with 2 players (agents) and tiny script cells, which automatically run away when an agent comes by. There is a low-risk non-cooperative strategy, i.e., the two agents stay away from each other and hunt the script cells independently. Since the script cells move faster, it is challenging for a single agent to hunt them. By contrast, the two agents can cooperate to encircle the script cells and accelerate hunting. However, cooperation is extremely risky for the agent with less mass: the two agents need to stay close to cooperate, but the larger agent may defect by eating the smaller one to gain an immediate big bonus.
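To make the featurized reward of Monster-Hunt concrete before the experiments, below is a sketch of the per-step feature vector and reward used in Sec. 5.1; w = [5, 2, −2] recovers the original game. The monster-chasing and respawn dynamics are omitted, and the position encoding is our own simplification.

```python
import numpy as np

def monster_hunt_features(pos1, pos2, monster, apples):
    """phi(s, a1, a2; 1) for agent 1: [caught the monster together,
    stepped on an apple, met the monster alone], each a 0/1 entry.
    Positions are (row, col) pairs on the 5x5 grid."""
    caught_together = (pos1 == monster) and (pos2 == monster)
    met_alone = (pos1 == monster) and (pos2 != monster)
    ate_apple = pos1 in apples
    return np.array([caught_together, ate_apple, met_alone], dtype=float)

def reward(pos1, pos2, monster, apples, w=(5.0, 2.0, -2.0)):
    return float(monster_hunt_features(pos1, pos2, monster, apples) @ np.asarray(w))

print(reward((1, 1), (1, 1), (1, 1), [(0, 3), (4, 4)]))  # 5.0: joint catch
print(reward((0, 3), (2, 2), (1, 1), [(0, 3), (4, 4)]))  # 2.0: apple
print(reward((1, 1), (2, 2), (1, 1), [(0, 3), (4, 4)]))  # -2.0: alone with the monster
```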
" }, { "heading": "5 EXPERIMENT RESULTS", "text": "In this section, we present empirical results showing that in all the introduced testbeds, including the real-world game Agar.io, RPG always discovers diverse strategic behaviors and achieves an equilibrium with substantially higher rewards than standard multi-agent PG methods. We use PPO (Schulman et al., 2017) for PG training. Training episodes for RPG are accumulated over all the perturbed games. Evaluation results are averaged over 100 episodes in the gridworlds and 1000 episodes in Agar.io. We repeat all experiments with 3 seeds and use X (Y) to denote mean X with standard deviation Y in all tables. Since all our discovered (approximate) NEs are symmetric for both players, we simply take E(π1, π2) = U1(π1, π2) as our evaluation function and only measure the reward of agent 1 in all experiments for simplicity. More details can be found in the appendix.
5.1 GRIDWORLD GAMES
Monster-Hunt: Each agent's reward is determined by three features per timestep: (1) whether the two agents catch the monster together; (2) whether the agent steps on an apple; (3) whether the agent meets the monster alone. Hence, we write φ(s, a1, a2; i) as a 3-dimensional 0/1 vector with one dimension per feature. The original game corresponds to w = [5, 2, −2]. We set Cmax = 5 for sampling w. We compare RPG with a collection of baselines, including standard PG (PG), PG with shared reward (PG+SR), population-based training (PBT), which trains the same number of parallel PG policies as RPG, as well as popular exploration methods, i.e., count-based exploration (PG+CNT) (Tang et al., 2017) and MAVEN (Mahajan et al., 2019). We also consider an additional baseline, DIAYN (Eysenbach et al., 2019), which discovers diverse skills using a trajectory-based diversity reward. For a fair comparison, we use DIAYN to first pretrain diverse policies (conceptually similar to the RR phase), then evaluate the rewards of every pair of obtained policies to select the best policy pair (i.e., the evaluation phase, shown with the dashed line in Fig. 6), and finally fine-tune the selected policies until convergence (i.e., the fine-tuning phase). The results of RPG and the 6 baselines are summarized in Fig. 6, where RPG consistently discovers a strategy with a significantly higher payoff. Note that the strategy with the optimal payoff does not always directly emerge in the RR phase, nor is there a particular value of w that is consistently the best candidate: e.g., in the RR phase, w = [5, 0, 2] frequently produces a sub-optimal cooperative strategy (Fig. 7(a)) with a reward lower than other w values, but it can also occasionally lead to the optimal strategy (Fig. 7(b)). With the fine-tuning phase, however, the overall RPG procedure always produces the optimal solution. We visualize both emergent cooperative strategies in Fig. 7: in the sub-optimal one (Fig. 7(a)), the two agents simply move to grid (1,1) together, stay still and wait for the monster, while in the optimal one (Fig. 7(b)), the two agents first meet each other and then actively move towards the monster jointly, which further improves hunting efficiency.
Escalation: We can represent φ(s, a1, a2; i) as a 2-dimensional vector containing (1) whether the two agents are both on the lit grid and (2) the total number of consecutive cooperation steps. The original game corresponds to w = [1, −0.9]. We set Cmax = 5 and show the total number of cooperation steps per episode for several selected w values throughout training in Fig. 8, where RR is able to discover different NE strategies. Note that w = [1, 0] already produces the strategy with the optimal payoff in this game, so the fine-tuning phase is not needed.
5.2 2-PLAYER GAMES IN Agar.io
There are two different settings of Agar.io: (1) the standard setting, i.e., an agent gets a penalty of −x for losing a mass of x, and (2) the more challenging aggressive setting, i.e., no penalty for mass loss. Note that in both settings: (1) when an agent eats a mass of x, it always gets a bonus of x; (2) if an agent loses all its mass, it immediately dies, while the other agent can still play in the game. The aggressive setting promotes agent interactions and typically leads to more diverse strategies in practice. Since both settings strictly define the penalty function for mass loss, we do not randomize this reward term. Instead, we consider two other factors: (1) the bonus for eating the other agent and (2) the prosociality level of both agents. We use a 2-dimensional vector w = [w0, w1], where 0 ≤ w0, w1 ≤ 1, to denote a particular reward function such that (1) when eating a cell of mass x from the other agent, the bonus is w0 × x, and (2) the final reward is a linear interpolation between R(·; i) and 0.5(R(·; 0) + R(·; 1)) w.r.t. w1, i.e., when w1 = 0, each agent optimizes its individual reward, whereas when w1 = 1, the two agents have a shared reward. The original game in both Agar.io settings corresponds to w = [1, 0].
Standard setting: PG in the original game (w = [1, 0]) leads to typical trust-dilemma dynamics: the two agents first learn to hunt and occasionally Cooperate (Fig. 9(a)), i.e., eat a script cell with the other agent close by; then, accidentally, one agent Attacks the other agent (Fig. 9(b)), which yields a big
immediate bonus and makes the policy aggressive; finally, the policies converge to the non-cooperative equilibrium where both agents keep apart and hunt alone. The quantitative results are shown in Tab. 2. Baselines include population-based training (PBT) and a state-of-the-art exploration method for high-dimensional states, Random Network Distillation (RND) (Burda et al., 2019). RND and PBT occasionally learn cooperative strategies, while RR stably discovers a cooperative equilibrium with w = [1, 1], and the full RPG further improves the rewards. Interestingly, the best strategy obtained in the RR phase even has a higher Cooperate frequency than the full RPG: fine-tuning transforms the strongly cooperative strategy into a more efficient one, which strikes a better balance between Cooperate and selfish Hunt and produces a higher average reward.
Aggressive setting: Similarly, we apply RPG in the aggressive setting and show the results in Tab. 3:

           PBT       w=[0.5,1]  w=[0,1]   w=[0,0]   RPG       RND
  Rew.     3.3(0.2)  4.8(0.6)   5.1(0.4)  6.0(0.5)  8.9(0.3)  3.2(0.2)
  #Attack  0.4(0.0)  0.7(0.2)   0.3(0.1)  0.5(0.1)  0.9(0.1)  0.4(0.0)
  #Coop.   0.0(0.0)  0.6(0.6)   2.3(0.3)  1.6(0.1)  2.0(0.2)  0.0(0.0)
  #Hunt    0.7(0.1)  0.6(0.3)   0.3(0.0)  0.7(0.0)  0.9(0.1)  0.7(0.0)

Table 3: Results in the aggressive setting of Agar.io. PBT: population training of parallel PG policies; RR: w=[0, 0] is the best candidate via RR; RPG: fine-tuned policy; RND: PG with RND bonus.

Neither PBT nor RND was able to find any cooperative strategy in the aggressive game, while RPG stably discovers a cooperative equilibrium with a significantly higher reward. We also observe a diverse set of complex strategies in addition to the normal Cooperate and Attack. Fig. 10 visualizes the Sacrifice strategy derived with w = [1, 1]: the smaller agent rarely hunts script cells; instead, it waits in the corner to be eaten by the larger agent, contributing all its mass to its partner. Fig. 11 shows another surprisingly novel emergent strategy, from w = [0.5, 1]: each agent first hunts individually to gain enough mass; then one agent splits into smaller cells while the other agent carefully eats a portion of the split agent; later on, when the agent that previously lost mass has gained sufficient mass, the larger agent similarly splits itself to contribute to the other one, which completes the (ideally) never-ending loop of partial sacrifice. We name this strategy Perpetual for its conceptual similarity to the perpetual motion machine. Lastly, the best strategy is produced by w = [0, 0], with a balance between Cooperate and Perpetual: the agents cooperate to hunt script cells to gain mass efficiently and quickly perform mutual sacrifice as soon as their mass is sufficiently large for split-and-eat. Hence, although the RPG policy has a relatively lower Cooperate frequency than the policy from w = [0, 1], it yields a significantly higher reward thanks to a much higher Attack (i.e., Sacrifice) frequency." }, { "heading": "5.3 LEARNING ADAPTIVE POLICIES", "text": "Monster-Hunt: We select policies trained with 8 different w values in the RR phase and use half of them for training the adaptive policy and the remaining half as hidden opponents for evaluation. We also make sure that both the training and evaluation policies cover the following 4 strategy modes: (1) M(onster): the agent always moves towards the monster; (2) M(onster)-Alone: the agent moves towards the monster but also tries to keep apart from the other agent; (3) M(onster)-Coop.: the agent seeks to hunt the monster together with the other agent; (4) Apple: the agent only eats apples. The evaluation results are shown in Tab. 4, where the adaptive policy successfully exploits all the test-time opponents, including M(onster)-Alone, which was trained to actively avoid the other agent.
Agar.io: We show that the trained agent can choose to cooperate or compete adaptively in the standard setting. We pick 2 cooperative policies (i.e., Cooperate preferred, w=[1, 0]) and 2 competitive policies (i.e., Attack preferred, w=[1, 1]) and use half of them for training and the other half for testing. For a hard challenge at test time, we switch the opponent within an episode, i.e., we use a cooperative opponent in the first half and then immediately switch to a competitive one, and vice versa. A desired policy should therefore adapt quickly at halftime. Tab. 5 compares the second-half behavior of the adaptive agent with the oracle pure-competitive/cooperative agents:

  Agent                                   Adapt.    Coop.      Comp.
  Opponent: Cooperative → Competitive
    #Attack                               0.2(0.0)  0.3(0.0)   0.1(0.1)
    Rew.                                  0.7(0.7)  -0.2(0.6)  0.8(0.5)
  Opponent: Competitive → Cooperative
    #Coop.                                1.0(0.3)  1.4(0.4)   0.3(0.4)
    Rew.                                  2.5(0.7)  3.6(1.2)   1.1(0.7)

Table 5: Adaptation test in Agar.io. The opponent type is switched halfway through each episode. #Attack, #Coop.: episode statistics; Rew.: agent reward. The adaptive agent's rewards are close to the oracles'.

The rewards of the adaptive agent are close to the oracle: even with halfway switches, the trained policy is able to exploit the cooperative opponent while avoiding being exploited by the competitive one." }, { "heading": "6 RELATED WORK AND DISCUSSIONS", "text": "Our core idea is reward perturbation. In game theory, this is aligned with the quantal response equilibrium (McKelvey & Palfrey, 1995), a smoothed version of NE obtained when payoffs are perturbed by Gumbel noise. In RL, reward shaping is popular for learning desired behavior in various domains (Ng et al., 1999; Babes et al., 2008; Devlin & Kudenko, 2011), which inspires our idea for finding diverse strategic behavior. By contrast, state-space exploration methods (Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Sharma et al., 2020) only learn low-level primitives without strategy-level diversity (Baker et al., 2020).
RR trains a set of policies, which aligns with population-based training in MARL (Jaderberg et al., 2017; 2019; Vinyals et al., 2019; Long et al., 2020; Forestier et al., 2017). RR is conceptually related to domain randomization (Tobin et al., 2017), with the difference that we train separate policies instead of a single universal one, which suffers from mode collapse (see Appendix D.2.3). RPG is also inspired by the MAP-Elites algorithm (Cully et al., 2015) from the evolutionary learning community, which optimizes multiple objectives simultaneously to obtain sufficiently diverse policies. Our work is also related to Forestier et al. (2017), which learns a set of policies w.r.t. different fitness functions in the single-agent setting. However, they only consider a restricted fitness function class, i.e., the distance to each object in the environment, which can be viewed as a special case of our setting. Besides, RPG helps train adaptive policies against a set of opponents, which is related to Bayesian games (Dekel et al., 2004; Hartline et al., 2015).
In RL, there are works on learning when to cooperate/compete (Littman, 2001; Peysakhovich & Lerer, 2018a; Kleiman-Weiner et al., 2016; Woodward et al., 2019; McKee et al., 2020), which is a special case of our setting, and on learning robust policies (Li et al., 2019; Shen & How, 2019; Hu et al., 2020), which complements our method.
Although we choose decentralized PG in this paper, RR can be combined with any other multi-agent learning algorithm for games, such as fictitious play (Robinson, 1951; Monderer & Shapley, 1996; Heinrich & Silver, 2016; Kamra et al., 2019; Han & Hu, 2019), double oracle (McMahan et al., 2003; Lanctot et al., 2017; Wang et al., 2019; Balduzzi et al., 2019) and regularized self-play (Foerster et al., 2018; Perolat et al., 2020; Bai & Jin, 2020). Many of these works have theoretical guarantees for finding an (approximate) NE, but little work focuses on which NE strategy these algorithms converge to when multiple NEs exist, e.g., in the stag-hunt game and its variants, for which many learning dynamics fail to converge to a prevalence of the pure strategy Stag (Kandori et al., 1993; Ellison, 1993; Fang et al., 2002; Skyrms & Pemantle, 2009; Golman & Page, 2010).
In this paper, we primarily focus on how reward randomization empirically helps MARL discover better strategies in practice and therefore only consider stag hunt as a particularly challenging example where an “optimal” NE with a high payoff for every agent exists. In general cases, we can select a desired strategy w.r.t. an evaluation function. This is related to the problem of equilibrium refinement (or equilibrium selection) (Selten, 1965; 1975; Myerson, 1978), which aims to find a subset of equilibria satisfying desirable properties, e.g., admissibility (Banks & Sobel, 1987), subgame perfection (Selten, 1965), Pareto efficiency (Bernheim et al., 1987) or robustness against the opponent's deviation from best response in security-related applications (Fang et al., 2013; An et al., 2011)." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is supported by the National Key R&D Program of China (2018YFB0105000). Co-author Fang is supported, in part, by a research grant from Lockheed Martin. Co-author Wang is supported, in part, by gifts from Qualcomm and TuSimple. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. The authors would like to thank Zhuo Jiang and Jiayu Chen for their support and input during this project. Finally, we particularly thank Bowen Baker for initial discussions and for suggesting the Stag Hunt game as our research testbed, which eventually led to this paper." }, { "heading": "A PROOFS", "text": "Proof of Theorem 1. We apply self-play policy gradient to optimize θ1 and θ2. Here we consider a projected version, i.e., if at some time t, θ1 or θ2 ∉ [0, 1], we project it to [0, 1] to ensure it is a valid distribution.
We first compute the utility given a pair (θ1, θ2):

U1(θ1, θ2) = a θ1θ2 + c θ1(1 − θ2) + b (1 − θ1)θ2 + d (1 − θ1)(1 − θ2),
U2(θ1, θ2) = a θ1θ2 + b θ1(1 − θ2) + c (1 − θ1)θ2 + d (1 − θ1)(1 − θ2).

We can then compute the policy gradients:

∇θ1 U1(θ1, θ2) = a θ2 + c (1 − θ2) − b θ2 − d (1 − θ2) = (a + d − b − c) θ2 + c − d,
∇θ2 U2(θ1, θ2) = a θ1 − b θ1 + c (1 − θ1) − d (1 − θ1) = (a + d − b − c) θ1 + c − d.

Recall that in order to find the optimal solution, both θ1 and θ2 need to increase. Also note that the initial θ1 and θ2 determine the final solution. In particular, only if θ1 and θ2 are increasing at the beginning will they converge to the desired solution.
To make either θ1 or θ2 increase, we need

(a + d − b − c) θ1 + c − d > 0 or (a + d − b − c) θ2 + c − d > 0. (1)

Consider the scenario a − b = ε (d − c). For Inequality (1) to hold, we need at least one of θ1, θ2 to be ≥ 1/(1 + ε). If we initialize θ1 ∼ [0, 1] and θ2 ∼ [0, 1], the probability that either θ1 or θ2 is ≥ 1/(1 + ε) is

1 − (1/(1 + ε))² = (2ε + ε²)/(1 + 2ε + ε²) = O(ε).
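The basin-of-attraction argument above can be checked numerically. The sketch below runs projected gradient ascent on the closed-form gradients; the payoff values follow the original Iterative Stag-Hunt game w = [4, 3, −50, 1] from App. D.3.1, under our reading of the (a, b, c, d) mapping, and the uniform initialization is the one assumed in the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = 4.0, 3.0, -50.0, 1.0   # Iterative Stag-Hunt payoffs, w = [4, 3, -50, 1]
k, cd = a + d - b - c, c - d        # gradient is k * theta_other + cd (proof of Theorem 1)

trials, lr = 100_000, 0.05
t1, t2 = rng.random(trials), rng.random(trials)
for _ in range(300):                # projected gradient ascent on both players at once
    g1 = k * t2 + cd                # d U1 / d theta1
    g2 = k * t1 + cd                # d U2 / d theta2
    t1 = np.clip(t1 + lr * g1, 0.0, 1.0)
    t2 = np.clip(t2 + lr * g2, 0.0, 1.0)

success = np.mean((t1 > 0.99) & (t2 > 0.99))
print(f"P(converge to the (Stag, Stag) NE) ~= {success:.5f}")
# success is tiny, consistent with the O(eps) upper bound (eps = (a - b)/(d - c) = 1/51 here)
```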
Proof of Theorem 2. Using a similar observation as in Theorem 1, we know that a necessary condition for PG to converge to a sub-optimal NE is

(a + d − b − c) θ1 + c − d < 0 or (a + d − b − c) θ2 + c − d < 0.

Based on our generating scheme for a, b, c, d and the initialization scheme for θ1, θ2, we can verify that each of these two events happens with probability at most 0.3. Therefore, via a union bound, we know

P((a + d − b − c) θ1 + c − d < 0 or (a + d − b − c) θ2 + c − d < 0) ≤ 0.6. (2)

Since each round is independent, the probability that PG fails in all N rounds is upper bounded by 0.6^N. Therefore, the success probability is lower bounded by 1 − 0.6^N = 1 − exp(−Ω(N))." }, { "heading": "B ENVIRONMENT DETAILS", "text": "B.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, the two agents play 10 rounds; that is, both PPO's trajectory length and the episode length are 10. The action of each agent is a 1-dimensional vector, ai = {ti, i ∈ {0, 1}}, where ti = 0 denotes taking the Stag action and ti = 1 denotes taking the Hare action. The observation of each agent consists of the actions taken by itself and its opponent in the last round, i.e., o_i^r = {a_i^{r−1}, a_{1−i}^{r−1}; i ∈ {0, 1}}, where r denotes the playing round. Note that neither agent has taken an action before the first round, so the initial observation is oi = {−1, −1}.
B.2 Monster-Hunt
In Monster-Hunt, the two agents can move one step in any of the four cardinal directions (Up, Down, Left, Right) at each timestep. Let ai = {ti, i ∈ {0, 1}} denote the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The position of each agent cannot exceed the border of the 5-by-5 grid; actions that would do so are invalid. One monster and two apples respawn in different grids at initialization. If an agent eats (moves over, in the grid world) an apple, it gains 2 points. Sometimes two agents may try to eat the same apple; the points are then randomly assigned to only one agent. Catching the monster alone causes an agent to lose 2 points, but if the two agents catch the monster simultaneously, each gains 5 points. At each timestep, the monster and apples respawn randomly elsewhere in the grid world if they have been wiped out. In addition, the monster chases the agent closest to it at each timestep. The monster may move over an apple during the chase; in this case, the agent gains the sum of points if it catches the monster and the apple at exactly the same time. Each agent's observation oi is a 10-dimensional vector formed by concatenating its own position pi, the other agent's position p1−i, the monster's position pmonster and the sorted apple positions papple,0 and papple,1, i.e., oi = {pi, p1−i, pmonster, papple,0, papple,1; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld.
B.3 Monster-Hunt WITH MORE THAN 2 AGENTS
Here we consider extending RPG to the general setting of N agents. In most multi-agent games, the reward function is fully symmetric for agents of the same type. Hence, as long as we can formulate the reward function in a linear form over a feature vector and a shared weight,
i.e., R(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)ᵀw, we can directly apply RPG without any modification by setting R = {Rw : Rw(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)ᵀw}. Note that the dimension of the feature vector φ(·) typically remains fixed w.r.t. different numbers of agents N. For example, in the Agar.io game, no matter how many players there are, the rules for reward bonuses and penalties remain the same.
Here, we experiment with RPG in Monster-Hunt with 3 agents. The results are shown in Fig. 12. We consider baselines including standard PG (PG) and population-based training (PBT). RPG reliably discovers a strong cooperation strategy with a substantially higher reward than the baselines.
B.4 Escalation
In Escalation, the two agents appear randomly and one grid lights up at initialization. If the two agents step on the lit grid simultaneously, each gains 1 point, the lit grid goes out, and an adjacent grid lights up. Both agents gain 1 point again if they step on the next lit grid together. But if one agent steps off the path, the other agent loses 0.9L points, where L is the current length of stepping together, and the game is over. Another option is for the two agents to step off the path simultaneously; then neither agent is punished, and the game continues. As the length L of stepping together increases, the cost of betrayal increases linearly (a sketch of the per-step reward follows below). ai = {ti, i ∈ {0, 1}} denotes the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The observation oi of agent i is composed of its own position pi, the other agent's position p1−i and the lit grid's position plit, i.e., oi = {pi, p1−i, plit; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld. Moreover, we utilize a GRU to encode the length L implicitly, instead of observing it explicitly.
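The Escalation reward described above fits in a few lines. The function below is our own per-step sketch based on the descriptions in Sec. 4 and App. B.4; episode termination and the lit-grid respawn are left to the environment.

```python
def escalation_step_reward(on_lit_1, on_lit_2, L):
    """Per-step rewards (agent1, agent2) in Escalation.
    L is the number of consecutive joint-cooperation steps so far."""
    if on_lit_1 and on_lit_2:
        return 1.0, 1.0                      # both on the lit grid: +1 each, L grows by 1
    if on_lit_1 != on_lit_2:                 # exactly one agent stays on the lit grid
        penalty = -0.9 * L                   # the agent left on the path loses 0.9 * L
        return (penalty, 0.0) if on_lit_1 else (0.0, penalty)
    return 0.0, 0.0                          # both step off simultaneously: no penalty

print(escalation_step_reward(True, True, 3))   # (1.0, 1.0)
print(escalation_step_reward(True, False, 3))  # (-2.7, 0.0)
```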
B.5 Agar.io
In the original online game Agar.io, multiple players are confined to a circular petri dish. Each player controls one or more balls using only a cursor and the 2 keyboard keys "space" and "w". All balls belonging to the player move toward where the cursor points. Balls larger than a threshold split into 2 smaller balls and rush ahead when the player presses "space". Balls larger than another threshold emit tiny motionless food-like balls when the player presses "w". Agar.io has many play modes, such as the "Free-For-All" mode (all players fight on their own and can eat each other) and the "Team" mode (players are separated into two groups; they should cooperate with players in the same group and eat players in the other group).
We simplified the settings of the original game: agents do not need to emit tiny motionless balls, and all fight with each other (FFA mode). The action space of the game is target × {split, no_split}. target ∈ [0, 1]² is the target position that all balls belonging to the agent move toward. The binary action split or no_split indicates whether the player chooses to split, which causes all balls larger than a threshold to split into 2 smaller ones and rush ahead for a short while. These split balls re-merge after some time, after which the agent can split again. When one agent's ball meets another agent's ball and the former is at least 1.2 times larger than the latter, the latter is eaten and the former gets all its mass. The reward is defined as the increment of the balls' mass, so every agent's goal is to get larger by eating others while avoiding being eaten. But larger balls move slower, so it is really hard to catch smaller balls only by chasing them. Splitting helps, but it requires high accuracy to rush in the proper direction. In our experiments, 7 agents interacted with each other: 2 agents were learned by our algorithm and would quit the game if all their balls were eaten; 5 agents were controlled by a script and would be reborn at a random place if all their balls were eaten. Learning-based agents were initialized larger than script-based agents, so the chasing was basically one-way. In this setting, cooperation is the most efficient behavior for learning-based agents to gain positive reward: they coordinate to surround script-based agents and catch them.
Observation space: We denote the partial observation of agent i as oi, which includes global information about the agent (denoted oi,global) and descriptions of all balls around the agent (including balls owned by the agent), denoted oi,balls = {oi,ball,1, oi,ball,2, ..., oi,ball,m}, where oi,ball,j denotes the j-th ball around the agent and there are m observed balls in all. oi,global = {li,obs, wi,obs, pi,center, vi, si,alive, ni,own, ni,script, ni,other, ai,last, ri,max, ri,min, mi}, where li,obs, wi,obs (both 1D, filled with a real number; from here on, the form (1D, real) is used as an abbreviation) are the length and width of the agent's observation scope, pi,center (2D, real) is its center position, vi (2D, real) is the speed of its center, si,alive (1D, binary) is whether the other learning-based agent is killed, ni,own, ni,script, ni,other (1D, real) are the numbers of each type of nearby ball (3 types: belonging to me, belonging to a script agent, or belonging to another learning-based agent), ai,last (3D, real) is the agent's last action, and ri,max, ri,min (1D, real) are the maximal and minimal radii of all balls belonging to the agent. For any j = 1, 2, ..., m, oi,ball,j = {pi,j,relative, pi,j,absolute, vi,j, vi,j,rush, ri,j, log(ri,j), di,j, ei,j,max, ei,j,min, si,j,rem, ti,j}, where pi,j,relative, pi,j,absolute (2D, real) are the ball's relative and absolute positions, vi,j is its speed, vi,j,rush is the ball's additional rushing speed (when a ball splits into 2 smaller balls, these 2 balls get additional speed, called vi,j,rush; otherwise vi,j,rush = 0), ri,j (1D, real) is its radius, di,j is the distance between the ball and the center of the agent, ei,j,max, ei,j,min (1D, binary) are whether the ball can be eaten by the maximal or minimal balls of the observing agent, si,j,rem (1D, binary) is whether the ball is able to re-merge at present, and ti,j (3D, one-hot) is the type of the ball.
The script-based agent automatically chases after and splits towards other smaller agents. When facing extreme danger (we define "extreme danger" as a larger learning-based agent being very close to it), it uses a 3-step depth-first search to plan the best escape route. More details of the script can be seen in our code. We played against the script-based agent ourselves many times: we could never hunt it with only one ball, and we rarely caught it by splitting.
" }, { "heading": "C TRAINING DETAILS", "text": "C.1 GRIDWORLD GAMES
In Monster-Hunt and Escalation, the agents' networks are organized in an actor-critic (policy-value) architecture. We consider N = 2 agents with a policy profile π = {π0, π1} parameterized by θ = {θ0, θ1}. The policy network πi takes observation oi as input, followed by two hidden layers with 64 units, and then outputs action ai. The value network takes the observations of both agents, o = {o0, o1}, as input and outputs the V-value of agent i; similarly, two hidden layers with 64 units are added before the output.
In Escalation, we also place an additional GRU module before the output in the policy network and the value network respectively, to infer the opponent's intentions from historical information. Note that the 64-dimensional GRU hidden state h changes whenever the policy network is updated. In order to both keep forward information and use backward information to compute the generalized advantage estimate (GAE) with enough trajectories, we split the buffer data into small chunks, e.g., 10 consecutive timesteps per chunk. The initial hidden state hinit, which is the first hidden state h0, is kept for each data chunk, but we do another forward pass to re-compute {h1, ..., hM−1}, where M is the length of one data chunk, and we keep buffer reuse low, e.g., 4 in practice.
Agents in Monster-Hunt and Escalation are trained by PPO with independent parameters. The Adam optimizer is used to update network parameters, and each experiment is run 3 times with different random seeds. More optimization hyper-parameter settings are in Tab. 6. In addition, Monster-Hunt also utilizes GRU modules to infer the opponent's identity during adaptation training, and the number of parallel threads is set to 64.
Count-based exploration: We add the count-based exploration intrinsic reward rint to the environment reward during training. When the agent's observation is o, rint = α/no, where α is a properly adjusted hyperparameter (0.3 in Monster-Hunt and 1 in Escalation) and no is the number of times the agent has had observation o.
DIAYN: In Monster-Hunt, we use DIAYN to train 10 diverse policies in the first 140k episodes (DIAYN's discriminator has 3 FC layers with 256, 128, and 10 units respectively) and choose the policy with the best performance under Monster-Hunt's reward setting to fine-tune over the next 280k episodes. Note that DIAYN does not have a warm-start phase before fine-tuning in its original paper, so we did not add one either. In the first, unsupervised learning phase, DIAYN does not optimize for any specific reward function. Hence, we did not plot the reward curve for DIAYN in Fig. 7 for this phase. Instead, we simply put a dashed line showing the reward of the best selected pair of policies from DIAYN pretraining.
MAVEN: We use the open-sourced implementation of MAVEN from https://github.com/AnujMahajanOxf/MAVEN.
Population-based training: In each PBT trial, we simply train the same number of parallel PG policies as RPG with different random seeds in each problem and choose the one with the best performance as the final policy. Note that the final training curve is averaged over 3 PBT trials.
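The count-based bonus described above fits in a few lines; the dictionary-of-counts implementation below is our own minimal version, assuming hashable observations.

```python
from collections import defaultdict

class CountBasedBonus:
    """Intrinsic reward r_int = alpha / n_o, added to the environment reward.
    alpha was 0.3 in Monster-Hunt and 1.0 in Escalation."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.counts = defaultdict(int)

    def __call__(self, obs):
        key = tuple(obs)
        self.counts[key] += 1           # n_o: visitation count of this observation
        return self.alpha / self.counts[key]

bonus = CountBasedBonus(alpha=0.3)
env_reward = 2.0
total = env_reward + bonus((1, 1, 3, 3, 0, 2, 4, 4, 2, 0))  # first visit adds +0.3
print(total)
```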
C.2 Agar.io
In Agar.io, we use PPO as our algorithm, and the agents' networks are also organized in an actor-critic (policy-value) architecture with a GRU unit (i.e., PPO-GRU). We consider N = 2 agents with a policy profile π = {π0, π1} sharing parameter θ. The policy network πi takes observation oi as input. At the beginning, as in (Baker et al., 2019), oi,balls is separated into 3 groups according to the balls' types: oi,ownballs, oi,scriptballs and oi,otherballs. 3 different multi-head attention models, with 4 heads and 64 units for the transformation of keys, queries and values, are used to embed the information of the 3 types of balls respectively, taking the corresponding part of oi,balls as values and queries and oi,global as keys. Their outputs are then concatenated and transformed by an FC layer with 128 units before being sent to a GRU block with 128 units. After that, the hidden state is copied to 2 heads for the policy's and value's outputs. The policy head starts with 2 FC layers, both with 128 units, and ends with 2 heads generating the discrete (split or no_split) and continuous (target) actions. The value head has 3 FC layers with 128, 128, and 1 unit respectively and outputs a real number.
PPO-GRU was trained with 128 parallel environment threads. Agar.io's episode length was sampled uniformly at random between 300 and 400, both when training and when evaluating. Buffer data were split into small chunks of length 32 in order to diversify the training data and stabilize the training process, and the buffer was reused 4 times to increase data efficiency. The hidden states of each chunk, except at the beginning, were re-computed after each reuse to sustain PPO's "on-policy" property as much as possible. Each action was repeated 5 times in the environment whenever the policy was executed, and only the observation after the last action repeat was sent to the policy. Each training process started with curriculum learning in the first 1.5e7 steps: the speed of the script agents was multiplied by x, where x is sampled uniformly at random between max{0, (n − 1e7)/5e6} and min{1, max{0, (n − 5e6)/5e6}} at the beginning of each episode, with n the number of training steps. After the curriculum learning, the speed was fixed to the standard value. Each experiment was run 3 times with different random seeds. The Adam optimizer was used to update network parameters. More optimization hyper-parameter settings are in Tab. 7." }, { "heading": "D ADDITIONAL EXPERIMENT RESULTS", "text": "D.1 Monster-Hunt
In Monster-Hunt, we set Cmax = 5 for sampling w. Fig. 13 illustrates the policies discovered by several selected w values, where different strategic modalities can be clearly observed: e.g., with w = [0, 5, 0], agents always avoid monsters and only eat apples. In Fig. 14, it is worth noting that w = [5, 0, 2] can yield the best policy profile (i.e., two agents move together to hunt the monster) and does not even require further fine-tuning with some seeds. But the performance of w = [5, 0, 2] is significantly unstable, and it may converge to another NE (i.e., two agents move to a corner and wait for the monster) with other seeds. So w = [5, 0, 5], which yields stable, strongly cooperative strategies across different seeds, will be chosen in the RR phase when w = [5, 0, 2] performs poorly. We show the rewards obtained by the different policies in Fig. 14, where the policies learned by RPG produce the highest rewards.
D.2 Agar.io
D.2.1 STANDARD SETTING
We sampled 4 different w values, which varied in their degree of cooperation. We also ran experiments using only the baseline PG, or PG with an intrinsic reward generated by Random Network Distillation (RND), to compare with RPG. RR lasted for 40M steps, but only the best reward parameter in RR (w = [1, 1]) was warmed up for 3M steps and fine-tuned for 17M steps afterwards. PG and RND were also trained for 60M steps in order to compare with RPG fairly. In Fig. 15, we can see that PG and RND produced very low rewards because they all converged to non-cooperative policies; w = [1, 1] produced the highest rewards after RR, and the rewards increased further after fine-tuning.
D.2.2 AGGRESSIVE SETTING
We sampled 5 different w values, and their behaviors were much more varied; the other training settings were the same as in the standard setting. In Fig. 16, note that simply sharing the reward (w = [1, 1]) did not obtain a very high reward, because attacking each other also benefits each other, so the 2 agents just learned to sacrifice. Again, Fig. 16 illustrates that the rewards of RPG were far ahead of the other policies, while both PG and PG+RND failed to learn cooperative strategies.
We also list all the results of the standard and aggressive settings in Tab. 8 for clearer comparison.
D.2.3 UNIVERSAL REWARD-CONDITIONED POLICY
We also tried to train a universal policy conditioned on w by randomly sampling a different w at the beginning of each episode during training, rather than fixing different w values and training separate policies on them. But as Fig. 17 illustrates, the learning process was very unstable, and the model performed almost the same under different w due to the intrinsic disadvantage of an on-policy algorithm dealing with multiple tasks: the learning algorithm may spend more effort on w values where higher rewards are easier to obtain while ignoring the performance on other w, which makes it very hard to obtain diverse behaviors.
D.3 LEARNING ADAPTIVE POLICIES
In this section, we add the opponent's identity ψ to the input of the value network to stabilize the training process and boost the performance of the adaptive agent. ψ is a C-dimensional one-hot vector, where C denotes the number of opponents.
D.3.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, we randomize the payoff matrix, which is a 4-dimensional vector, and set Cmax = 4 for sampling w. The number of parallel threads is 512 and the episode length is 10. Other training hyper-parameter settings are the same as in Tab. 6. Fig. 18 shows that different w = [a, b, c, d] values (i.e., [4, 0, 0, 0], [0, 0, 0, 4], [0, 4, 4, 0], [4, 1, 4, 0]) yield different policy profiles; e.g., with w = [0, 0, 0, 4], both agents tend to eat the hare. The original game corresponds to w = [4, 3, −50, 1]. Tab. 9 reveals that w = [4, 0, 0, 0] yields the highest reward and reaches the optimal NE without further fine-tuning.
Utilizing the 4 different strategies obtained in the RR phase as opponents, we can train an adaptive policy that makes proper decisions according to the opponent's identity. Fig. 19 shows the adaptation training curve; we can see that the policy yields adaptive actions stably after 5e4 episodes. At the evaluation stage, we introduce 4 hand-designed opponents to test the performance of the adaptive policy: the Stag opponent (i.e., always hunt the stag), the Hare opponent (i.e., always eat the hare), the Tit-for-Tat (TFT) opponent (i.e., hunt the stag at the first step, and then take the action executed by the other agent in the last step), and the Random opponent (i.e., randomly choose to hunt the stag or eat the hare at each step). Tab. 10 illustrates that the adaptive policy exploits all the hand-designed strategies, including the Tit-for-Tat opponent, which significantly differs from the trained opponents.
D.3.2 Monster-Hunt
We use the policy population Π2 trained with 4 w values (i.e., w = [5, 1, −5], w = [4, 2, −2], w = [0, 5, 0], w = [5, 0, 5]) in the RR phase as opponents for training the adaptive policy. In addition, we sample another 4 w values (i.e., w = [5, 0, 0], w = [−5, 5, −5], w = [−5, 0, 5], w = [5, −5, 5]) with Cmax = 5 to train new opponents for evaluation. Fig. 20 shows the adaptation training curve of the Monster-Hunt game, where the adaptive policy takes actions stably according to the opponent's identity.
D.3.3 Agar.io
In Agar.io, we used 2 types of policies from RR, w = [1, 0] (i.e., cooperative) and w = [0, 1] (i.e., competitive), as opponents, and trained an adaptive policy facing each opponent with probability 50% in the standard setting, while only its value head could observe the opponent's type directly. We then expected the policy to cooperate or compete appropriately with the corresponding opponent. As Fig. 21 illustrates, the adaptive policy learns to cooperate with cooperative partners while avoiding being exploited by competitive partners, and it exploits both types of partners.
More details about the training and evaluation process: oracle pure-cooperative policies are learned against a competitive policy for 4e7 steps, and so are oracle pure-competitive policies. The adaptive policy is trained for 6e7 steps. The length of each episode is 350 steps (half is 175 steps). During evaluation, the policy playing against the opponent was the adaptive policy in the first 175 steps, whether we were testing adaptive or oracle policies. When we tested adaptive policies, the policy playing against the opponent kept going for another 175 steps, while the opponent changed to the other type and its hidden state was reset to zero. When we tested oracle policies, the policy playing against the opponent switched to the corresponding oracle policy and the opponent also changed its type, while their hidden states were both reset." } ]
2021
DISCOVERING DIVERSE MULTI-AGENT STRATEGIC BEHAVIOR VIA REWARD RANDOMIZATION
SP:24344b20e162a68ed6631aa050c2c09a8f91d5ac
[ "The authors propose to use a VQVAE-2 setup for video prediction. In particular, they propose a hierarchical discrete latent variable model that compresses videos into a latent space. An autoregressive model is then used to model dynamics in this latent space, which has reduced dimensionality, and can be used together with the VQVAE decoder to predict video. Empirical results show that this model is comparable to SOTA GAN models and a human evaluation suggests that humans have a preference for the " ]
In recent years, the task of video prediction—forecasting future video given past video frames—has attracted attention in the research community. In this paper we propose a novel approach to this problem with Vector Quantized Variational AutoEncoders (VQ-VAE). With VQ-VAE we compress high-resolution videos into a hierarchical set of multi-scale discrete latent variables. Compared to pixels, this compressed latent space has dramatically reduced dimensionality, allowing us to apply scalable autoregressive generative models to predict video. In contrast to previous work that has largely emphasized highly constrained datasets, we focus on very diverse, large-scale datasets such as Kinetics-600. We predict video at a higher resolution, 256× 256, than any other previous method to our knowledge. We further validate our approach against prior work via a crowdsourced human evaluation.
[]
[ { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H. Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Nicholas Lonard", "Aaron C. Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "CoRR, abs/1308.3432,", "year": 2013 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "In ICLR. OpenReview.net,", "year": 2019 }, { "authors": [ "Joo Carreira", "Eric Noland", "Andras Banki-Horvath", "Chloe Hillier", "Andrew Zisserman" ], "title": "A short note about kinetics-600", "venue": null, "year": 2018 }, { "authors": [ "Xi Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "Pixelsnail: An improved autoregressive generative model", "venue": "ICML, volume 80 of Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Aidan Clark", "Jeff Donahue", "Karen Simonyan" ], "title": "Adversarial video generation on complex datasets, 2020", "venue": null, "year": 2020 }, { "authors": [ "Prafulla Dhariwa", "Heewoo Jun", "Christine Payne", "Jong Wook Kim", "Alec Radfor", "Ilya Sutskever" ], "title": "Jukebox: A generative model for music", "venue": "arXiv preprint arXiv:2005.00341,", "year": 2020 }, { "authors": [ "Sander Dieleman", "Aron van den Oord", "Karen Simonyan" ], "title": "The challenge of realistic music generation: modelling raw audio at scale", "venue": null, "year": 2018 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Alex X. Lee", "Sergey Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "In CoRL,", "year": 2017 }, { "authors": [ "Jeffrey De Fauw", "Sander Dieleman", "Karen Simonyan" ], "title": "Hierarchical autoregressive image models with auxiliary decoders", "venue": null, "year": 1903 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Hang Gao", "Huazhe Xu", "Qi-Zhi Cai", "Ruth Wang", "Fisher Yu", "Trevor Darrell" ], "title": "Disentangling propagation and generation for video prediction", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Zekun Hao", "Xun Huang", "Serge Belongie" ], "title": "Controllable video generation with sparse trajectories", "venue": null, "year": 2018 }, { "authors": [ "Yunseok Jang", "Gunhee Kim", "Yale Song" ], "title": "Video prediction with appearance and motion conditions", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc Van Gool" ], "title": "Dynamic filter networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nal Kalchbrenner", "Aron van den Oord", "Karen Simonyan", "Ivo Danihelka", "Oriol Vinyals", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Video pixel networks", "venue": "In Doina Precup and Yee Whye Teh (eds.), ICML,", "year": 2017 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and 
variation", "venue": null, "year": 2017 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "Videoflow: A conditional flow-based model for stochastic video generation", "venue": "In ICLR. OpenReview.net,", "year": 2020 }, { "authors": [ "Alex X. Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Yijun Li", "Chen Fang", "Jimei Yang", "Zhaowen Wang", "Xin Lu", "Ming-Hsuan Yang" ], "title": "Flow-grounded spatial-temporal video prediction from still images", "venue": "ECCV (9),", "year": 2018 }, { "authors": [ "Yitong Li", "Martin Renqiang Min", "Dinghan Shen", "David E. Carlson", "Lawrence Carin" ], "title": "Video generation from text", "venue": null, "year": 2018 }, { "authors": [ "Shikun Liu", "C. Lee Giles", "Alexander Ororbia" ], "title": "Learning a hierarchical latent-variable model of 3d shapes", "venue": "In 3DV,", "year": 2018 }, { "authors": [ "Pauline Luc", "Natalia Neverova", "Camille Couprie", "Jakob Verbeek", "Yann LeCun" ], "title": "Predicting deeper into the future of semantic segmentation", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Pauline Luc", "Camille Couprie", "Yann LeCun", "Jakob Verbeek" ], "title": "Predicting future instance segmentation by forecasting convolutional features", "venue": "ECCV (9),", "year": 2018 }, { "authors": [ "Pauline Luc", "Aidan Clark", "Sander Dieleman", "Diego de Las Casas", "Yotam Doron", "Albin Cassirer", "Karen Simonyan" ], "title": "Transformation-based adversarial video prediction on large-scale data", "venue": null, "year": 2003 }, { "authors": [ "Michal Mathieu", "Camille Couprie", "Yann LeCun" ], "title": "Deep multi-scale video prediction beyond mean square error", "venue": "In Yoshua Bengio and Yann LeCun (eds.), ICLR (Poster),", "year": 2016 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "In ICLR. OpenReview.net,", "year": 2019 }, { "authors": [ "Charlie Nash", "Christopher K.I. Williams" ], "title": "The shape variational autoencoder: A deep generative model of part-segmented 3d objects", "venue": "Comput. Graph. Forum,", "year": 2017 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L. Lewis", "Satinder P. 
Singh" ], "title": "Actionconditional video prediction using deep networks in atari games", "venue": null, "year": 2015 }, { "authors": [ "Marc Oliu", "Javier Selva", "Sergio Escalera" ], "title": "Folded recurrent neural networks for future video prediction", "venue": "ECCV (14),", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": null, "year": 2016 }, { "authors": [ "Viorica Patraucean", "Ankur Handa", "Roberto Cipolla" ], "title": "Spatio-temporal video autoencoder with differentiable memory", "venue": null, "year": 2015 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Marc’Aurelio Ranzato", "Arthur Szlam", "Joan Bruna", "Michal Mathieu", "Ronan Collobert", "Sumit Chopra" ], "title": "Video (language) modeling: a baseline for generative models of natural videos", "venue": "CoRR, abs/1412.6604,", "year": 2014 }, { "authors": [ "Ali Razavi", "Aron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": null, "year": 2019 }, { "authors": [ "Scott E. Reed", "Aron van den Oord", "Nal Kalchbrenner", "Sergio Gomez Colmenarejo", "Ziyu Wang", "Yutian Chen", "Dan Belov", "Nando de Freitas" ], "title": "Parallel multiscale autoregressive density estimation", "venue": "In Doina Precup and Yee Whye Teh (eds.), ICML,", "year": 2017 }, { "authors": [ "Masaki Saito", "Shunta Saito" ], "title": "Tganv2: Efficient training of large models for video generation with multiple subsampling", "venue": "layers. CoRR,", "year": 2018 }, { "authors": [ "Christian Schuldt", "Ivan Laptev", "Barbara Caputo" ], "title": "Recognizing human actions: a local svm approach", "venue": "In Pattern Recognition,", "year": 2004 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhutdinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "ICML, volume 37 of JMLR Workshop and Conference Proceedings,", "year": 2015 }, { "authors": [ "Lucas Theis", "Aron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In Yoshua Bengio and Yann LeCun (eds.),", "year": 2016 }, { "authors": [ "Bart Thomee", "David A. 
Shamma", "Gerald Friedland", "Benjamin Elizalde", "Karl Ni", "Douglas Poland", "Damian Borth", "Li-Jia Li" ], "title": "The new data and new challenges in multimedia research", "venue": "CoRR, abs/1503.01817,", "year": 2015 }, { "authors": [ "Sergey Tulyakov", "Ming-Yu Liu", "Xiaodong Yang", "Jan Kautz" ], "title": "Mocogan: Decomposing motion and content for video generation", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Thomas Unterthiner", "Sjoerd van Steenkiste", "Karol Kurach", "Raphal Marinier", "Marcin Michalski", "Sylvain Gelly" ], "title": "Towards accurate generative models of video: A new metric & challenges", "venue": null, "year": 2018 }, { "authors": [ "Aron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "ICML, volume 48 of JMLR Workshop and Conference Proceedings,", "year": 2016 }, { "authors": [ "Aron van den Oord", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Neural discrete representation learning", "venue": null, "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural video sequence prediction", "venue": "In ICLR (Poster). OpenReview.net,", "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Yuliang Zou", "Sungryull Sohn", "Xunyu Lin", "Honglak Lee" ], "title": "Learning to generate long-term future via hierarchical prediction", "venue": "In Doina Precup and Yee Whye Teh (eds.), ICML,", "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": null, "year": 2016 }, { "authors": [ "Jacob Walker", "Carl Doersch", "Abhinav Gupta", "Martial Hebert" ], "title": "An uncertain future: Forecasting from static images using variational autoencoders", "venue": "ECCV (7),", "year": 2016 }, { "authors": [ "Jacob Walker", "Kenneth Marino", "Abhinav Gupta", "Martial Hebert" ], "title": "The pose knows: Video forecasting by generating pose futures", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Dirk Weissenborn", "Oscar Tckstrm", "Jakob Uszkoreit" ], "title": "Scaling autoregressive video models", "venue": "In ICLR. OpenReview.net,", "year": 2020 }, { "authors": [ "Wei Xiong", "Wenhan Luo", "Lin Ma", "Wei Liu", "Jiebo Luo" ], "title": "Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Tianfan Xue", "Jiajun Wu", "Katherine L. Bouman", "Bill Freeman" ], "title": "Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "When it comes to real-world image data, deep generative models have made substantial progress. With advances in computational efficiency and improvements in architectures, it is now feasible to generate high resolution, realistic images from vast and highly diverse datasets (Brock et al., 2019; Razavi et al., 2019; Karras et al., 2017). Apart from the domain of images, deep generative models have also shown promise in other data domains such as music (Dieleman et al., 2018; Dhariwa et al., 2020), speech synthesis (Oord et al., 2016), 3D voxels (Liu et al., 2018; Nash & Williams, 2017), and text (Radford et al., 2019). One particular fledgling domain is video.\nWhile some work in the area of video generation (Clark et al., 2020; Vondrick et al., 2016; Saito & Saito, 2018) has explored video synthesis—generating videos with no prior frame information— many approaches actually focus on the task of video prediction conditioned on past frames (Ranzato et al., 2014; Srivastava et al., 2015; Patraucean et al., 2015; Mathieu et al., 2016; Lee et al., 2018; Babaeizadeh et al., 2018; Oliu et al., 2018; Xiong et al., 2018; Xue et al., 2016; Finn et al., 2016; Luc et al., 2020). It can be argued that video synthesis is a combination of image generation and video prediction. In other words, one could decouple the problem of video synthesis into unconditional image generation and conditional video prediction from a generated image. Therefore, we specifically focus on video prediction in this paper. Potential computer vision applications of video forecasting include interpolation, anomaly detection, and activity understanding. More generally, video prediction also has more general implications for intelligent systems—the ability to anticipate the dynamics of the environment. The problem is thus also relevant for robotics and reinforcement learning (Finn et al., 2016; Ebert et al., 2017; Oh et al., 2015; Ha & Schmidhuber, 2018; Racanire et al., 2017).\nApproaches toward video prediction have largely skewed toward variations of generative adversarial networks (Mathieu et al., 2016; Lee et al., 2018; Clark et al., 2020; Vondrick et al., 2016; Luc et al., 2020). In comparison, we are aware of only a relatively small number of approaches which propose variational autoencoders (Babaeizadeh et al., 2018; Xue et al., 2016; Denton & Fergus, 2018), autoregressive models (Kalchbrenner et al., 2017; Weissenborn et al., 2020), or flow based approaches (Kumar et al., 2020). There may be a number of reasons for this situation. One is the explosion in the dimensionality of the input space. A generative model of video needs to model not only one image but tens of them in a coherent fashion. This makes it difficult to scale up such models to large datasets or high resolutions. In addition, previous work (Clark et al., 2020) suggests that video prediction may be fundamentally more difficult than video synthesis; a synthesis model can generate simple samples from the dataset while prediction potentially forces the model to forecast conditioned on videos that are outliers in the distribution. Furthermore, most prior work has focused on datasets with low scene\ndiversity such as Moving MNIST (Srivastava et al., 2015), KTH (Schuldt et al., 2004), or robotic arm datasets (Finn et al., 2016; Ebert et al., 2017). 
While there have been attempts to synthesize video at a high resolution (Clark et al., 2020), we know of no attempt, excluding flow-based approaches, to predict video beyond resolutions of 64×64.
In this paper we address the large dimensionality of video data through compression. Using Vector Quantized Variational Autoencoders (VQ-VAE) (van den Oord et al., 2017), we can compress video into a space requiring only 1.3% of the bits needed to express it in pixels. While this compressed encoding is lossy, we can still reconstruct the original video from the latent representation with a high degree of fidelity. Furthermore, we can leverage the modularity of VQ-VAE and decompose our latent representation into a hierarchy of encodings, separating high-level, global information from details such as fine texture or small motions. Instead of training a generative model directly in pixel space, we can model this much more tractable discrete representation, allowing us to train much more powerful models, use large, diverse datasets, and generate at a high resolution. While most prior work has focused on GANs, this discrete representation can also be modeled by likelihood-based models. In principle, likelihood models do not suffer from the mode collapse, training instability, and lack of sample diversity often witnessed in GANs (Denton & Fergus, 2018; Babaeizadeh et al., 2018; Razavi et al., 2019). In this paper, we propose a PixelCNN augmented with causal convolutions in time and spatiotemporal self-attention to model this space of latents. In addition, because the latent representation is decomposed into a hierarchy, we can exploit this decomposition and train separate specialized models at different levels of the hierarchy.
Our paper makes four contributions. First, we demonstrate the novel application of VQ-VAE to video data. Second, we propose a set of spatiotemporal PixelCNNs to predict video by utilizing the latent representation learned with VQ-VAE. Third, we explicitly predict video at a higher resolution than ever before. Finally, we demonstrate the competitive performance of our model with a crowdsourced human evaluation." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 VECTOR QUANTIZED AUTOENCODERS", "text": "VQ-VAEs (van den Oord et al., 2017) are autoencoders which learn a discrete latent encoding for input data x. First, the output of a non-linear encoder ze(x), implemented by a neural network, is passed through a discretization bottleneck. ze(x) is mapped via nearest-neighbor lookup into a quantized codebook e ∈ R^{K×D}, where D is the dimensionality of each vector ej and K is the number of categories in the codebook. The discretized representation is thus given by:

zq(x) = ek, where k = argmin_j ||ze(x) − ej||₂ (1)

Equation 1 is not differentiable; however, van den Oord et al. (2017) note that copying the gradient of zq(x) to ze(x) is a suitable approximation, similar to the straight-through estimator (Bengio et al., 2013). A decoder D, also implemented by a neural network, then reconstructs the input from zq(x). The total loss function for the VQ-VAE is thus:

L = ||D(zq(x)) − x||₂² + ||sg[ze(x)] − e||₂² + β||ze(x) − sg[e]||₂² (2)

where sg is the stop-gradient operator and β is a parameter which regulates the rate of code change. As in previous work (van den Oord et al., 2017; Razavi et al., 2019), we replace the second term in Equation 2 and learn the codebook e ∈ R^{K×D} via an exponential moving average of previous values during training:

N_i^{(t)} := γ N_i^{(t−1)} + (1 − γ) n_i^{(t)},   m_i^{(t)} := γ m_i^{(t−1)} + (1 − γ) Σ_j ze(x)_{i,j}^{(t)},   e_i^{(t)} := m_i^{(t)} / N_i^{(t)} (3)

where γ is a decay parameter and n_i^{(t)} is the number of vectors in ze(x) in a batch that map to ei.
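Below is a condensed PyTorch sketch of this bottleneck: nearest-neighbour quantization (Eq. 1), the straight-through gradient with the commitment term of Eq. 2, and the EMA codebook update of Eq. 3. Shapes, initialization, and hyperparameter values are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

class VQEmaCodebook(torch.nn.Module):
    """Minimal vector-quantization bottleneck with EMA codebook updates."""
    def __init__(self, K=512, D=64, beta=0.25, gamma=0.99):
        super().__init__()
        self.beta, self.gamma = beta, gamma
        self.register_buffer("e", torch.randn(K, D))   # codebook e in R^{K x D}
        self.register_buffer("N", torch.zeros(K))      # EMA cluster sizes N_i
        self.register_buffer("m", self.e.clone())      # EMA unnormalized means m_i

    def forward(self, z_e):                            # z_e: (B, D) encoder outputs
        d = torch.cdist(z_e, self.e)                   # pairwise L2 distances, Eq. (1)
        k = d.argmin(dim=1)                            # nearest-neighbour code indices
        z_q = self.e[k]
        if self.training:                              # EMA update, Eq. (3)
            onehot = F.one_hot(k, self.e.shape[0]).float()
            self.N.mul_(self.gamma).add_(onehot.sum(0), alpha=1 - self.gamma)
            self.m.mul_(self.gamma).add_(onehot.t() @ z_e.detach(), alpha=1 - self.gamma)
            self.e.copy_(self.m / self.N.clamp(min=1e-5).unsqueeze(1))
        commit = self.beta * F.mse_loss(z_e, z_q.detach())  # third term of Eq. (2)
        z_q = z_e + (z_q - z_e).detach()               # straight-through gradient copy
        return z_q, commit

vq = VQEmaCodebook()
z_q, commit_loss = vq(torch.randn(8, 64))
```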
As in previous work (van den Oord et al., 2017; Razavi et al., 2019), we replace the second term in Equation 2 and learn the codebook $e \in \mathbb{R}^{K \times D}$ via an exponential moving average of previous values during training:
$$N_i^{(t)} := N_i^{(t-1)}\gamma + n_i^{(t)}(1-\gamma), \quad m_i^{(t)} := m_i^{(t-1)}\gamma + \sum_{j=1}^{n_i^{(t)}} z_e(x)_{i,j}^{(t)}(1-\gamma), \quad e_i^{(t)} := \frac{m_i^{(t)}}{N_i^{(t)}} \qquad (3)$$
where $\gamma$ is a decay parameter and $n_i^{(t)}$ is the number of vectors in $z_e(x)$ in a batch that map to code $e_i$." }, { "heading": "2.2 PIXELCNN MODELS", "text": "PixelCNN and related models have shown promise in modeling a wide variety of data domains (van den Oord et al., 2016; Oord et al., 2016; Kalchbrenner et al., 2017; Weissenborn et al., 2020). These autoregressive models are likelihood-based—they explicitly optimize negative log-likelihood. They exploit the fact that the joint probability distribution of input data $x$ can be factored into a product of conditional distributions over each dimension of the data:
$$P_\theta(x) = \prod_{i=1}^{n} p_\theta(x_i \mid x_{<i}) \qquad (4)$$
where $n$ is the full dimensionality of the data. This factorization is implemented by a neural network, and the exact set of conditional dependencies is determined by the data domain. Image pixels may depend on regions above and to the left of them (van den Oord et al., 2016), while temporal dimensions may depend on past dimensions (Oord et al., 2016; Kalchbrenner et al., 2017; Weissenborn et al., 2020)." }, { "heading": "3 METHOD", "text": "Our approach consists of two main components. First, we compress video segments into a discrete latent representation using a hierarchical VQ-VAE. We then propose a multi-stage autoregressive model based on the PixelCNN architecture, exploiting the low dimensionality of the compressed latent space and the hierarchy of the representation." }, { "heading": "3.1 COMPRESSING VIDEO WITH VQ-VAE", "text": "Similar to (Razavi et al., 2019), we use VQ-VAE to compress video in a hierarchical fashion. This multi-stage composition of the latent representation allows the decomposition of global, high-level information from low-level details such as edges or fine motion. For images (Razavi et al., 2019), this approach confers a number of advantages. First, the decomposition allows latent codes to specialize at each level: high-level information can be represented in an even more compressed manner, and the total reconstruction error is lower. In addition, this hierarchy leads to a naturally modular generative model. We can develop a generative model that specializes in modeling the high-level, global information, and then train a separate model, conditioned on that global information, which fills in the details and models the low-level information further down the hierarchy. In this paper, we adopt the terminology of (Razavi et al., 2019) and call the set of high-level latents the top layer and the low-level latents the bottom layer.
Consistent with the experimental setup of previous work in video prediction, we deal with 16-frame videos. Most of the videos in our training dataset are 25 frames per second, and we use frames at a 256 × 256 resolution, so the full video voxel is 256 × 256 × 16. Using residual blocks with 3D convolutions, we downsample the video spatiotemporally. At the bottom layer, the video is downsampled to a quantized latent space of 64 × 64 × 8, reducing the spatial dimensions by 4 and the temporal dimension by 2. Another stack of blocks reduces all dimensions by a further factor of 2, giving a top layer of 32 × 32 × 4. 
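To illustrate the stated shapes, here is a minimal sketch of the hierarchical spatiotemporal downsampling; plain strided 3D convolutions stand in for the paper's residual blocks, and the channel width of 128 is our assumption:

```python
import torch
import torch.nn as nn

# Illustrative strided 3D-conv stacks realizing the stated downsampling;
# the paper's residual-block internals are not specified here.
bottom_enc = nn.Sequential(  # 16 x 256 x 256 -> 8 x 64 x 64 (T x H x W)
    nn.Conv3d(3, 128, kernel_size=4, stride=(2, 4, 4), padding=1),
    nn.ReLU(),
)
top_enc = nn.Sequential(     # 8 x 64 x 64 -> 4 x 32 x 32
    nn.Conv3d(128, 128, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
)

video = torch.randn(1, 3, 16, 256, 256)  # (batch, channels, T, H, W)
z_bottom = bottom_enc(video)             # -> (1, 128, 8, 64, 64)
z_top = top_enc(z_bottom)                # -> (1, 128, 4, 32, 32)
```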
Each voxel in each layer is quantized into one of 512 codes, with a separate codebook per layer.
The decoder then concatenates the bottom layer and the top layer after upsampling using transposed convolutions. From this concatenation as input, the decoder deterministically outputs the full 256 × 256 × 16 video. Overall, we reduce a $256 \times 256 \times 16 \times 3 \times \log(256)$-bit space down to a $64 \times 64 \times 8 \times \log(512) + 32 \times 32 \times 4 \times \log(512)$-bit space, a greater than 98% reduction in bits required.
During training, we randomly mask out the bottom layer in the concatenated input to the decoder. Masking encourages the model to utilize the top latent layer and prevents codebook collapse." }, { "heading": "3.2 PREDICTING VIDEO WITH PIXELCNNS", "text": "With VQ-VAE, our 256 × 256 × 16 video is now decomposed into a hierarchy of quantized latents at 64 × 64 × 8 and 32 × 32 × 4. Previous autoregressive approaches involved full-pixel videos at 64 × 64 × 16 (Weissenborn et al., 2020) or 64 × 64 × 20 (Kalchbrenner et al., 2017), so our latent representation is well within the range of tractability for these models. Furthermore, given the hierarchy, we can factorize our generative model in a coarse-to-fine fashion. We denote the model of the top layer the top prior and the model of the bottom layer the bottom prior. Because we are focusing on video prediction, we emphasize that both are still conditioned on a series of input frames. While previous work used 5 frames and predicted 11 (Weissenborn et al., 2020; Clark et al., 2020), the power-of-two design of our architecture leads us to condition on 4 and predict 12. When conditioning our prior models on these frames, we need not use a large stack directly on the original images; instead, we save memory and computation by training a smaller stack of residual layers on their latent representation, compressing these 4 conditioning frames into a small latent space of 32 × 32 and 64 × 64 × 2.
We first model the top layer with a conditional prior model. Our prior model is based on a PixelCNN with multi-head self-attention layers (Chen et al., 2018). We adapt this architecture by extending the PixelCNN into time: instead of a convolutional stack over a square image, we use a comparable 3D convolutional stack over the cube representing the prior latents. The convolutions are masked in the same way as the original PixelCNN in space—at each location in space, the convolutions only have access to information to the left of and above them. In time, present and future timesteps are masked out, and convolutions only have access to previous timesteps. While spatiotemporal convolutions can be resource intensive, we can implement most of this functionality with 2D convolutions applied separately in the $x$–$t$ plane and the $y$–$t$ plane: we take the 1D convolutions in the horizontal and vertical stacks of the original PixelCNN (van den Oord et al., 2016) and add the extra dimension of time to them, making them 2D. Our only true 3D convolution is at the first layer, before the addition of the gated stacks. We use multi-head attention layers analogous to (Razavi et al., 2019), except that the attention is applied to a 3D volume instead of a 2D layer; attention is applied every five layers. 
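The masking just described can be sketched with a single masked 3D convolution; note this is a simplified stand-in, since the paper implements most of the functionality more efficiently with the 2D convolutions in the $x$–$t$ and $y$–$t$ planes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalMaskedConv3d(nn.Conv3d):
    """PixelCNN-style mask extended in time: future time steps are hidden
    entirely, and within the current step only voxels above and to the left
    of the center are visible."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, kT, kH, kW = self.weight.shape
        mask = torch.zeros(kT, kH, kW)
        mask[: kT // 2] = 1.0                    # strictly earlier time steps
        mask[kT // 2, : kH // 2] = 1.0           # current step: rows above center
        mask[kT // 2, kH // 2, : kW // 2] = 1.0  # current row: pixels left of center
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return F.conv3d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Example on top-prior latents shaped (batch, channels, T=4, H=32, W=32).
layer = CausalMaskedConv3d(128, 128, kernel_size=3, padding=1)
out = layer(torch.randn(1, 128, 4, 32, 32))
```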
During sampling we can generate voxels left-to-right, top-to-bottom within each temporal step, as in the original PixelCNN. Once a full time step is generated, we can generate the next step conditioned on the previously generated steps.
Once we have a set of latents from the top layer, we can condition our bottom conditional prior model and generate the final bottom layer. Because the bottom layer has a higher number of dimensions and relies on local information, we don’t necessarily need a 3D PixelCNN. Instead, we use a 2D PixelCNN with multi-head self-attention every five layers, analogous to (Razavi et al., 2019). We implement a 3D conditional stack, however, that takes in a window of time steps from the top layer as well as a window of past generated time steps in the bottom layer; the window sizes we used were 4 and 2, respectively. This conditional stack is used as conditioning for the 2D PixelCNN at the current timestep." }, { "heading": "4 RELATED WORK", "text": "Video Prediction and Synthesis: In the last few years, the research community has focused a spotlight on the topic of video generation—either in the form of video synthesis or prediction. Early approaches involved direct, deterministic pixel prediction (Ranzato et al., 2014; Srivastava et al., 2015; Oh et al., 2015; Patraucean et al., 2015). Given the temporal nature of video, such approaches often incorporated LSTMs. These papers usually applied their deterministic models to datasets such as moving MNIST characters (Srivastava et al., 2015); because of their deterministic nature, they were rarely applied successfully to more complex datasets. Given this situation, researchers started to adapt popular models for image generation to the problem, starting with generative adversarial models (Mathieu et al., 2016; Vondrick et al., 2016; Lee et al., 2018; Babaeizadeh et al., 2018; Clark et al., 2020; Saito & Saito, 2018; Luc et al., 2020; Xiong et al., 2018), variational autoencoders (Xue et al., 2016), and autoregressive models (Kalchbrenner et al., 2017; Weissenborn et al., 2020). Others stepped aside from the problem of full pixel prediction and instead predicted pixel motion (Finn et al., 2016; Walker et al., 2016; Jia et al., 2016) or a decomposition of pixels and motion (Denton & Fergus, 2018; Gao et al., 2019; Jang et al., 2018; Hao et al., 2018; Li et al., 2018a; Tulyakov et al., 2018; Villegas et al., 2017a). Finally, some have proposed a hierarchical approach based on structured information—generating video conditioned on text (Li et al., 2018b), semantic segments (Luc et al., 2017; 2018), or human pose (Walker et al., 2017; Villegas et al., 2017b).
Compressing Data with Latents: The key element in our video prediction framework is compression—representing videos through lower-dimensional latents. We apply the framework of VQ-VAE (van den Oord et al., 2017; Razavi et al., 2019), which has been successfully applied to compress image and sound data. Related to VQ-VAE, other researchers have explored hierarchies of latents for the generation of images (Fauw et al., 2019) and music (Dieleman et al., 2018).
Autoregressive Models: The foundation of our model is PixelCNN (van den Oord et al., 2016). Distinct from implicit likelihood models such as GANs and approximate methods such as VAEs, the family of PixelCNN architectures has shown promise in modeling a variety of data domains, including images (van den Oord et al., 2016), sound (Oord et al., 2016), and video (Kalchbrenner et al., 2017; Weissenborn et al., 2020). 
In line with our paper, recent work with these models has shifted toward decomposing autoregression through hierarchies (Menick & Kalchbrenner, 2019; Reed et al., 2017) and latent compression (van den Oord et al., 2017; Razavi et al., 2019; Dhariwal et al., 2020)." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we evaluate our model quantitatively and qualitatively on the Kinetics-600 dataset (Carreira et al., 2018). This dataset of videos is very large and highly diverse, consisting of hundreds of thousands of videos selected from YouTube across 600 actions. While most previous work has focused on more constrained datasets, only a few approaches (Clark et al., 2020; Luc et al., 2020; Weissenborn et al., 2020) have attempted to scale to larger size and complexity. We train our top and bottom models for around 1,000,000 iterations with total batch sizes of 512 and 32, respectively. Our VQ-VAE model was trained with a batch size of 16 for 1,000,000 iterations." }, { "heading": "5.1 QUALITATIVE EVALUATION", "text": "While the Kinetics-600 dataset is publicly available for use, the individual videos in the dataset may not be licensed for display in an academic paper. Therefore, in this paper, we apply our model trained on Kinetics-600 to videos licensed under Creative Commons from the YFCC100m dataset (Thomee et al., 2015). In Figure 4 we show some selected predictions. We find that our approach is able to model camera perspective, parallax, inpainting, deformable human motion, and even aspects of crowd motion across a variety of different visual contexts." }, { "heading": "5.2 QUANTITATIVE EVALUATION", "text": "Quantitative evaluation of generative models of images, especially across different classes of models, is an open problem (Theis et al., 2016). It is even less explored in the realm of video prediction. One proposed metric used on larger datasets such as Kinetics-600 is the Fréchet Video Distance (FVD) (Unterthiner et al., 2018). As no previous approach has attempted 256 × 256 resolution, we downscale our videos to 64 × 64 for a proper comparison against prior work; we also compute FVD at the full resolution as a baseline for future work. We use 2-fold cross-validation over 39,000 samples to compute FVD. We show our results in Table 7. We find that our performance exceeds that of Clark et al. (2020) but not necessarily that of Luc et al. (2020). We also find that comparing the VQ-VAE samples to the reconstructions, rather than the original videos, leads to an even better score (shown by FVD*); this result is similar to the results on images for VQ-VAE (Razavi et al., 2019). As GAN-based approaches are explicitly trained on classifier (discriminator) based losses, FVD—a metric based on a neural-network classifier—may favor GAN-based approaches over log-likelihood based models even if the quality of the samples is comparable. Given the possible flaws in this metric, we also conduct a human evaluation similar to (Vondrick et al., 2016). We had 15 participants compare up to 30 side-by-side videos generated by our approach and that of (Luc et al., 2020), with each video receiving at least 13 judgements. For each comparison, both models used the exact same set of conditioning frames and had a resolution of 64 × 64. Participants could choose a preference for either video, or they could choose indifference—meaning the difference in quality between the two videos is too small to perceive. We show our results in Table 1. 
Out of a total of 405 judgements, participants preferred our samples 65.7% of the time, preferred those of (Luc et al., 2020) 12.8% of the time, and judged the remaining 21.5% to be too close in quality to call. Even though (Luc et al., 2020) report a much lower FVD score, our participants had a stronger preference for samples generated by our model." }, { "heading": "6 CONCLUSION", "text": "In this paper we have explored the application of VQ-VAE to the task of video prediction. With this learned compressed space, we can utilize powerful autoregressive models to generate possible future events in video at higher resolutions. We show that we are also able to achieve a level of performance comparable to contemporary GANs." }, { "heading": "A APPENDIX", "text": "" } ]
2020
null
SP:070b8df785712a7741fa4a986ef99f3c47f52b1a
[ "This paper studies initialization and regularization in factorized neural networks (reparameterize a weight matrix by the product of several weight matrices). The authors proposed spectral initialization, that is to initialize the factorized matrices using the SVD of the un-factorized matrix. The authors also proposed Frobenius decay that is to regularize the Frobenius norm of the product of the factorized weight matrices. The motivation is to simulate the routines for non-decomposed counterparts. The authors empirically showed the effectiveness of spectral initialization and Frobenius decay in different applications: compressed model training, knowledge distillation, and multi-head self-training. " ]
Factorized layers—operations parameterized by products of two or more matrices—occur in a variety of deep learning contexts, including compressed model training, certain types of knowledge distillation, and multi-head self-attention architectures. We study how to initialize and regularize deep nets containing such layers, examining two simple, understudied schemes, spectral initialization and Frobenius decay, for improving their performance. The guiding insight is to design optimization routines for these networks that are as close as possible to that of their well-tuned, non-decomposed counterparts; we back this intuition with an analysis of how the initialization and regularization schemes impact training with gradient descent, drawing on modern attempts to understand the interplay of weight-decay and batch-normalization. Empirically, we highlight the benefits of spectral initialization and Frobenius decay across a variety of settings. In model compression, we show that they enable low-rank methods to significantly outperform both unstructured sparsity and tensor methods on the task of training low-memory residual networks; analogs of the schemes also improve the performance of tensor decomposition techniques. For knowledge distillation, Frobenius decay enables a simple, overcomplete baseline that yields a compact model from over-parameterized training without requiring retraining with or pruning a teacher network. Finally, we show how both schemes applied to multi-head attention lead to improved performance on both translation and unsupervised pre-training.
[ { "affiliations": [], "name": "Mikhail Khodak" } ]
[ { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Z.D. Bai", "Y.Q. Yin" ], "title": "Limit of the smallest eigenvalue of a large dimensional sample covariance matrix", "venue": "The Annals of Probability,", "year": 1993 }, { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alberto Bernacchia", "Máté Lengyel", "Guillaume Hennequin" ], "title": "Exact natural gradient in deep linear networks and application to the nonlinear case", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Davis Blalock", "Jose Javier Gonzalez Ortiz", "Jonathan Frankle", "John Guttag" ], "title": "What is the state of neural network pruning", "venue": "In Proceedings of Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Changxiao Cai", "Gen Li", "H. Vincent Poor", "Yuxin Chen" ], "title": "Nonconvex low-rank tensor completion from noisy data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mauro Cettolo", "Jan Niehues", "Sebastian Stüker", "Luisa Bentivogli", "Marcello Federico" ], "title": "Report on the 11th IWSLT evaluation campaign", "venue": "In Proceedings of the International Workshop on Spoken Language Translation,", "year": 2014 }, { "authors": [ "Yuejie Chi", "Yue M. Lu", "Yuxin Chen" ], "title": "Nonconvex optimization meets low-rank matrix factorization", "venue": "IEEE Transactions on Signal Processing,", "year": 2019 }, { "authors": [ "Tri Dao", "Nimit Sohoni", "Albert Gu", "Matthew Eichhorn", "Amit Blonder", "Megan Leszczynski", "Atri Rudra", "Christopher Ré" ], "title": "Kaleidoscope: An efficient, learnable representation for all structured linear maps", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Misha Denil", "Babak Shakibi", "Laurent Dinh", "Marc’Aurelio Ranzato", "Nando de Freitas" ], "title": "Predicting parameters in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Xiaohan Ding", "Yuchen Guo", "Guiguang Ding", "Jungong Han" ], "title": "ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Simon S. 
Du", "Wei Hu" ], "title": "Width provably matters in optimization for deep linear neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N. Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Gavin Gray", "Elliot J. Crowley", "Amos Storkey" ], "title": "Separable layers enable structured efficient linear substitutions", "venue": null, "year": 2019 }, { "authors": [ "Suriya Gunasekar", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ], "title": "Implicit bias of gradient descent on linear convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shuxuan Guo", "Jose M. Alvarez", "Mathieu Salzmann" ], "title": "ExpandNets: Linear over-parameterization to train compact convolutional networks", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: Efficient and accurate normalization schemes in deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yerlan Idelbayev", "Miguel A. Carreira-Perpiñán" ], "title": "Low-rank compression of neural nets: Learning the rank of each layer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yani Ioannou", "Duncan Robertson", "Jamie Shotton", "Roberto Cipolla", "Antonio Criminisi" ], "title": "Training CNNs with low-rank filters for efficient image classification", "venue": "In Proceedings of the 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Raghunandan H. Keshavan", "Andrea Montanari", "Sewoong Oh" ], "title": "Matrix completion from a few entries", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Jangho Kim", "SeongUk Park", "Nojun Kwak" ], "title": "Paraphrasing complex network: Network compression via factor transfer", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jean Kossaifi", "Zachary C. 
Lipton", "Arinbjorn Kolbeinsson", "Aran Khanna", "Tommaso Furlanello", "Anima Anandkumar" ], "title": "Tensor regression networks", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Jean Kossaifi", "Antoine Toisoul", "Adrian Bulat", "Yannis Panagakis", "Maja Pantic" ], "title": "Factorized higherorder CNNs with an application to spatio-temporal emotion estimation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Alex Krizhevksy" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "John S. Decker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "In Advances in Neural Information Processing Systems,", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip H.S. Torr" ], "title": "SNIP: Single-shot network pruning based on connection sensitivity", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dmytro Mishkin", "Jiri Matas" ], "title": "All you need is a good init", "venue": "In Proceedings of the 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H. Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Marcin Moczulski", "Misha Denil", "Jeremy Appleyard", "Nando de Freitas" ], "title": "ACDC: A structured efficient linear layer", "venue": null, "year": 2015 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kevin P. Murphy" ], "title": "Machine Learning: A Probabilistic Perspective", "venue": null, "year": 2012 }, { "authors": [ "Preetum Nakkiran", "Raziel Alvarez", "Rohit Prabhavalkar", "Carolina Parada" ], "title": "Compressing deep neural networks using a rank-constrained topology", "venue": "In Proceedings of the 16th Annual Conference of the International Speech Communication Association,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ivan V. 
Oseledets" ], "title": "Tensor-train decomposition", "venue": "SIAM Journal on Scientific Computing,", "year": 2011 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "PyTorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network pruning", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Andrew M. Saxe", "James L. McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "In Proceedings of the 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In Proceedings of the 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Nathan Srebro", "Adi Shraibman" ], "title": "Rank, trace-norm and max-norm", "venue": "In Proceedings of the International Conference on Computational Learning Theory,", "year": 2005 }, { "authors": [ "Jingtong Su", "Yihang Chen", "Tianle Cai", "Tianhao Wu", "Ruiqi Gao", "Liwei Wang", "Jason D. Lee" ], "title": "Sanity-checking pruning methods: Random tickets can win the jackpot", "venue": null, "year": 2020 }, { "authors": [ "Cheng Tai", "Tong Xiao", "Yi Zhang", "Xiaogang Wang", "Weinan E" ], "title": "Convolutional neural networks with low-rank regularization", "venue": "In Proceedings of the 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive representation distillation", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann LeCun", "Rob Fergus" ], "title": "Regularization of neural networks using DropConnect", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Chaoqi Wang", "Roger Grosse", "Sanja Fidler", "Guodong Zhang" ], "title": "EigenDamage: Structured pruning in the Kronecker-factored eigenbasis", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kunran Xu", "Lai Rui", "Yishi Li", "Lin Gu" ], "title": "Feature normalized knowledge distillation for image classification", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Ting-Bing Xu", "Cheng-Lin Liu" ], "title": "Data-distortion guided self-distillation for deep neural networks", "venue": "In Proceedings of the 33th AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Atsushi Yaguchi", "Taiji Suzuki", "Shuhei Nitta", "Yukinobu Sakata", "Akiyuki Tanizawa" ], "title": "Decomposable-Net: Scalable low-rank compression for neural networks", "venue": null, "year": 2019 }, { "authors": [ "Chenglin Yang", "Lingxi Xie", "Chi Su", "Alan L. Yuille" ], "title": "Snapshot distillation: Teacher-student optimization in one generation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Greg Yang", "Samuel S. 
Schoenholz" ], "title": "Mean field residual networks: On the edge of chaos", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Huanrui Yang", "Minxue Tang", "Wei Wen", "Feng Yan", "Daniel Hu", "Ang Li", "Hai Li", "Yiran Chen" ], "title": "Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training BERT in 76 minutes", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Wenyuan Zeng", "Raquel Urtasun" ], "title": "MLPrune: Multi-layer pruning for automated neural network compression", "venue": "OpenReview,", "year": 2019 }, { "authors": [ "Guodong Zhang", "Chaoqi Wang", "Bowen Xu", "Roger Grosse" ], "title": "Three mechanisms of weight decay regularization", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuge Zhang", "Zejun Lin", "Junyang Jiang", "Quanlu Zhang", "Yujing Wang", "Hui Xue", "Chen Zhang", "Yaming Yang" ], "title": "Deeper insights into weight sharing in neural architecture", "venue": "search. arXiv,", "year": 2020 }, { "authors": [ "Wang" ], "title": "Table 5: Pruning, sparse training, and low-rank training for VGG-19NFC, i.e. the model of Simonyan", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Most neural network layers consist of matrix-parameterized functions followed by simple operations such as activation or normalization. These layers are the main sources of model expressivity, but also the biggest contributors to computation and memory cost; thus modifying these layers to improve computational performance while maintaining performance is highly desirable. We study the approach of factorizing layers, i.e. reparameterizing them so that their weights are defined as products of two or more matrices. When these are smaller than the original matrix, the resulting networks are more efficient for both training and inference (Denil et al., 2013; Moczulski et al., 2015; Ioannou et al., 2016; Tai et al., 2016), resulting in model compression. On the other hand, if training cost is not a concern, one can increase the width or depth of the factors to over-parameterize models (Guo et al., 2020; Cao et al., 2020), improving learning without increasing inference-time cost. This can be seen as a simple, teacher-free form of knowledge distillation. Factorized layers also arise implicitly, such as in the case of multi-head attention (MHA) (Vaswani et al., 2017).\nDespite such appealing properties, networks with factorized neural layers are non-trivial to train from scratch, requiring custom initialization, regularization, and optimization schemes. In this paper we focus on initialization, regularization, and how they interact with gradient-based optimization of factorized layers. We first study spectral initialization (SI), which initializes factors using singular value decomposition (SVD) so that their product approximates the target un-factorized matrix. Then, we study Frobenius decay (FD), which regularizes the product of matrices in a factorized layer rather than its individual terms. Both are motivated by matching the training regimen of the analogous un-factorized optimization. Note that SI has been previously considered in the context of model compression, albeit usually for factorizing pre-trained models (Nakkiran et al., 2015; Yaguchi et al., 2019; Yang et al., 2020) rather than low-rank initialization for end-to-end training; FD has been used in model compression using an uncompressed teacher (Idelbayev & Carreira-Perpiñán, 2020).\nWe formalize and study the justifications of SI and FD from both the classical perspective— matching the un-factorized objective and scaling—and in the presence of BatchNorm (Ioffe & Szegedy, 2015), where this does not apply. Extending recent studies of weight-decay (Zhang et al., 2019), we argue that the effective step-size at spectral initialization is controlled by the factorization’s Frobenius norm and show convincing evidence that weight-decay penalizes the nuclear norm.\nWe then turn to applications, starting with low-memory training, which is dominated by unstructured sparsity methods—i.e. guessing “lottery tickets” (Frankle & Carbin, 2019)—with a prevailing trend of viewing low-rank methods as uncompetitive for compression (Blalock et al., 2020; Zhang et al., 2020; Idelbayev & Carreira-Perpiñán, 2020; Su et al., 2020). Here we show that, without tuning, factorized neural layers outperform all structured sparsity methods on ResNet architectures (He et al., 2016), despite lagging on VGG (Simonyan & Zisserman, 2015). Through ablations, we show that this result is due to using both SI and FD on the factorized layers. 
We further compare to a recent evaluation of tensor-decomposition approaches for compressed WideResNet training (Zagoruyko & Komodakis, 2016; Gray et al., 2019), showing that (a) low-rank approaches with SI and FD can outperform them and (b) they are themselves helped by tensor variants of SI and FD.
We also study a fledgling subfield we term overcomplete knowledge distillation (Arora et al., 2018; Guo et al., 2020; Cao et al., 2020), in which model weights are over-parameterized as overcomplete factorizations; after training, the factors are multiplied to obtain a compact representation of the same network. We show that FD leads to significant improvements, e.g. we outperform ResNet110 with an overcomplete ResNet56 that takes 1.5x less time to train and has 2x fewer parameters at test-time.
Finally, we study Transformer architectures, starting by showing that FD improves translation performance when applied to MHA. We also show that SI is critical for low-rank training of the model’s linear layers. In an application to BERT pre-training (Devlin et al., 2019), we construct a Frobenius-regularized variant—FLAMBé—of the LAMB method (You et al., 2020), and show that, much like for translation, it improves performance for both full-rank and low-rank MHA layers.
To summarize, our main contributions are (1) motivating the study of training factorized layers via both the usual setting (model compression) and recent applications (distillation, multi-head attention), (2) justifying the use of SI and FD mathematically and experimentally, and (3) demonstrating their effectiveness by providing strong baselines and novel advances in many settings. Code to reproduce our results is available here: https://github.com/microsoft/fnl_paper." }, { "heading": "1.1 RELATED WORK", "text": "We are not the first to study gradient descent on factorized layers; in particular, deep linear nets are well-studied in theory (Saxe et al., 2014; Gunasekar et al., 2019). Apart from Bernacchia et al. (2018), these works largely examine existing algorithms, although Arora et al. (2018) do effectively propose overcomplete knowledge distillation. Rather than the descent method, we focus on the initialization and regularization. For the former, several papers use SI after training (Nakkiran et al., 2015; Yaguchi et al., 2019; Yang et al., 2020), while Ioannou et al. (2016) argue for initializing factors as though they were single layers, which we find inferior to SI in some cases. Outside deep learning, spectral methods have also been shown to yield better initializations for certain matrix and tensor problems (Keshavan et al., 2010; Chi et al., 2019; Cai et al., 2019). For regularization, Gray et al. (2019) suggest compression-rate scaling (CRS), which scales weight-decay using the reduction in parameter count; this is justified via the usual Bayesian understanding of $\ell_2$-regularization (Murphy, 2012). However, we find that FD is superior to any tuning of regular weight-decay, which subsumes CRS. Our own analysis is based on recent work suggesting that the function of weight-decay is to aid optimization by preventing the effective step-size from becoming too small (Zhang et al., 2019)." 
}, { "heading": "2 PRELIMINARIES ON FACTORIZED NEURAL LAYERS", "text": "In the training phase of (self-)supervised ML, we often solve optimization problems of the form minθPΘ 1 |S| ř\npx,yqPS `pfθpxq,yq ` Ωpθq, where fθ : X ÞÑ Y is a function from input domain X to output domain Y parameterized by elements θ P Θ, ` : Y ˆY ÞÑ R is a scalar-valued loss function, Ω : Θ ÞÑ R is a scalar-valued regularizer, and S Ă X ˆY is a finite set of (self-)supervised training examples. We study the setting where fθ is a neural network, an L-layer function whose parameters θ consist of L matrices Wi P Rmiˆni and whose output fθpxq given input x is defined recursively using L functions gi via the formula xi “ gipWi,xi´1q, with x0 “ x and fθpxq “ xL.\nThe standard approach to training fθ is to specify the regularizer Ω, (randomly) pick an initialization in Θ, and iteratively update the parameters using some first-order algorithm such as SGD to optimize the objective above until some stopping criterion is met. However, in many cases we instead optimize over factorized variants of these networks, in which some or all of the matrices Wi P Rmiˆni are re-parameterized as a product Wi “ Uip śdi j“1 MijqV Ti for some inner depth di ě 0 and matrices Ui P Rmiˆri ,Vi P Rniˆri , and Mij P Rriˆri @ j. As discussed in the following examples, this can be done to obtain better generalization, improve optimization, or satisfy practical computational or memory constraints during training or inference. For simplicity, we drop the subscript i whenever re-parameterizing only one layer and only consider the cases when inner depth d is 0 or 1." }, { "heading": "2.1 FULLY-CONNECTED LAYERS", "text": "A fully-connected layer takes an n-dimensional input xi´1 and outputs an m-dimensional vector xi “ σpWxi´1q, where σ : Rm ÞÑ Rm is an element-wise activation function. Here, decomposing W P Rmˆn into the product UV T , where U P Rmˆr,V P Rnˆr, and setting r ! mintm,nu reduces computation and memory costs from Opmnq to Opmr`nrq. We refer to this setting as model compression. Standard learning theory suggests that a small rank r also improves generalization, e.g. for a factorized fully-connected ReLU network, applying }W }2F {}W }22 ď rankpW q to Neyshabur et al. (2018, Theorem 1) and substituting Wi “ UiV Ti gives a w.h.p. margin-bound Õp a\nmr{|S|q suggesting that generalization error varies with the square root of the rank (see Corollary A.1).\nAlternatively, by setting r ě mintm,nu and/or including an inner matrix M P Rrˆr, we can attempt to take advantage of improved optimization due to increased width (Du & Hu, 2019) and/or increased depth (Arora et al., 2018). Crucially, this does not increase inference costs because we can recompose the matrix after training and just use the product. As the goal is to obtain a better small model by first training a large one, we refer to this setting as overcomplete knowledge distillation; of course, unlike regular distillation it is much simpler since there is no student-teacher training stage." }, { "heading": "2.2 CONVOLUTIONAL LAYERS", "text": "A 2d convolutional layer takes an hˆ w ˆ ci´1-dimensional input xi´1 and outputs a hˆ w ˆ cidimensional output xi defined by convolving ci different k ˆ k filters over each of ci´1 input channels. Often the result is passed through a nonlinearity. 2d convolutional layers are parameterized by ciˆ ci´1ˆ kˆ k tensors and require Opk2cici´1q memory and compute. 
" }, { "heading": "2.2 CONVOLUTIONAL LAYERS", "text": "A 2d convolutional layer takes an $h \times w \times c_{i-1}$-dimensional input $x_{i-1}$ and outputs an $h \times w \times c_i$-dimensional output $x_i$ defined by convolving $c_i$ different $k \times k$ filters over each of $c_{i-1}$ input channels. Often the result is passed through a nonlinearity. 2d convolutional layers are parameterized by $c_i \times c_{i-1} \times k \times k$ tensors and require $O(k^2 c_i c_{i-1})$ memory and compute. A straightforward way of factorizing this tensor without using tensor decomposition is to reshape it into a $c_i k \times c_{i-1} k$ matrix $W$, which can then be decomposed as $W = UV^T$ for $U \in \mathbb{R}^{c_i k \times r}$, $V \in \mathbb{R}^{c_{i-1} k \times r}$ and some rank $r > 0$. As in the fully-connected case, we can either set the rank $r$ to be small in order to reduce the number of parameters or alternatively increase the width ($r$) or the depth ($d$) of the factorization to do overcomplete knowledge distillation.
Note that in the low-rank case a naive approach does not save computation, since we must first multiply $U$ and $V^T$, reshape the product $UV^T$, and then use the resulting tensor in a regular 2d convolution of the original size and complexity. However, as shown by Tai et al. (2016), applying the 2d $k \times k$ convolution with $c_{i-1}$ input channels and $c_i$ output channels obtained by reshaping $UV^T$ is equivalent to a composition of two 1d convolutions: the first, defined by $V^T \in \mathbb{R}^{r \times c_{i-1} k}$, consists of $r$ output channels and filters of size $k$ along one input dimension, and the second, defined by $U \in \mathbb{R}^{c_i k \times r}$, consists of $c_i$ output channels and filters of size $k$ along the other input dimension. Together the two 1d convolutions require $O(kr(c_i + c_{i-1}))$ memory and computation, which is significantly better than the $O(k^2 c_i c_{i-1})$ cost of the unfactorized case if $r \ll k \min\{c_i, c_{i-1}\}$." }, { "heading": "2.3 MULTI-HEAD ATTENTION", "text": "An MHA layer (Vaswani et al., 2017) with $H$ attention heads and hidden dimension $d$ can be expressed as being parameterized with $4H$ matrices: one each of $Q_h, K_h, V_h, O_h \in \mathbb{R}^{d \times d/H}$ for each head $h$. Then for a length-$T$ input $x \in \mathbb{R}^{T \times d}$ it outputs
$$\sum_{h=1}^{H} \mathrm{Softmax}\left(\frac{x Q_h K_h^T x^T}{\sqrt{d/H}}\right) x V_h O_h^T \qquad (1)$$
MHA combines $2H$ quadratic forms $Q_h K_h^T$, $V_h O_h^T$ of rank $r = d/H$, each a product of matrices, i.e. a factorized layer. We refer to the first form as “Query-Key” and the second as “Output-Value.” Note that $r$ can be varied independently of $d$ to change expressivity, memory, and computation." }, { "heading": "3 INITIALIZATION AND REGULARIZATION", "text": "We now define the initialization and regularization schemes we study as natural extensions of techniques for the unfactorized case. They thus require no tuning when an existing training implementation of a non-factorized deep net is available. We later discuss how to justify these schemes when layers are normalized, e.g. using BatchNorm (Ioffe & Szegedy, 2015). In all experiments with convolutional models in this and subsequent sections we factorize all layers except the first and last, which are small, and determine layer ranks by multiplying a uniform scaling factor by the product of a layer’s output channels and kernel width. This rank-scale can be varied to attain the desired number of parameters. Note that our approach may be further improved via a more sophisticated or adaptive rank-assignment scheme (Idelbayev & Carreira-Perpiñán, 2020)." }, { "heading": "3.1 SPECTRAL INITIALIZATION", "text": "Initialization is a major focus of deep learning research (He et al., 2015; Mishkin & Matas, 2016; Yang & Schoenholz, 2017). A common approach is to prevent compounding changes in the norms of the intermediate representations across layers caused by repeated multiplication by the weight matrices. The spectral initialization scheme for initializing low-rank factorized layers attempts to inherit, when possible, this property from an existing initialization by using SVD to ensure that the resulting product matrix is as close as possible to the original parameter:
Definition 3.1. Let $W \in \mathbb{R}^{m \times n}$ be a parameter of an unfactorized layer. For $r \le \min\{m, n\}$ the spectral initialization (SI) of the factors $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{n \times r}$ of the corresponding factorized layer sets $U = \tilde{U}\sqrt{\Sigma}$ and $V = \tilde{V}\sqrt{\Sigma}$, for $\tilde{U}, \Sigma, \tilde{V} = \mathrm{SVD}_r(W)$ given by the rank-$r$ SVD of $W$.
SI preserves the largest singular value of $W$, so if the original scheme did not suffer from a compounding increase in representation norm then neither will spectral initialization. On the other hand, while low-rank layers impose a nullspace, SI aligns it with the directions minimized by $W$." }, { "heading": "3.2 FROBENIUS DECAY", "text": "Weight-decay is a common regularizer for deep nets, often implemented explicitly by adding $\Omega(\theta) = \frac{\lambda}{2} \sum_{W \in \theta} \|W\|_F^2$ to the objective for some $\lambda \ge 0$. Classically, it is thought to improve generalization by constraining model capacity. When training factorized layers parameterized by $U(\prod_{j=1}^{d} M_j)V^T$, the easiest approach to implement is to replace each $\frac{\lambda}{2}\|W\|_F^2$ term in $\Omega(\theta)$ by $\frac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2 + \sum_{j=1}^{d} \|M_j\|_F^2\right)$. However, this yields a very different optimization problem: for example, if we consider the case of $d = 0$ then this regularizer is in fact an upper bound on the nuclear norm of the recomposed matrix $UV^T$ (Srebro & Shraibman, 2005, Lemma 1):
$$\frac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2\right) \ge \min_{\tilde{U}\tilde{V}^T = UV^T} \frac{\lambda}{2}\left(\|\tilde{U}\|_F^2 + \|\tilde{V}\|_F^2\right) = \lambda\|UV^T\|_* \qquad (2)$$
In fact, Figure 1 shows that for weight-decay this upper bound is tight throughout the training of factorized ResNet20 across a variety of ranks and initializations, suggesting that the naive approach is indeed regularizing the nuclear norm rather than the Frobenius norm.
Since compression already constrains capacity, in the low-rank case one might favor just reducing regularization, e.g. multiplying $\lambda$ by the compression rate (Gray et al., 2019). However, Figure 2 shows that this can lead to worse performance, and the approach still penalizes the nuclear norm. Frobenius decay avoids this issue by simply penalizing the squared norm of the entire factorization:
Definition 3.2. For $\lambda \ge 0$ let $\frac{\lambda}{2}\|W\|_F^2$ be the contribution of an unfactorized layer parameterized by $W \in \mathbb{R}^{m \times n}$ to the penalty term. Then the Frobenius decay (FD) penalty on matrices $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{n \times r}$, and $M_j \in \mathbb{R}^{r \times r}$ of the corresponding factorized layer is $\frac{\lambda}{2}\left\|U(\prod_{j=1}^{d} M_j)V^T\right\|_F^2$.
By substituting the factorization directly, FD makes the least change to the problem: rank-$r$ optima of the non-factorized objective will also minimize the factorized one. We can also bound generalization error of the ReLU net from Section 2.1 by a term $\tilde{O}\left(\sqrt{\frac{m}{|S|}\sum_{i=1}^{L}\|U_i(\prod_{j=1}^{d_i} M_{ij})V_i^T\|_F^2}\right)$ varying directly with the quantity penalized by FD (see Corollary A.2). Notably, FD is a stronger penalty than the nuclear norm implicitly regularized by weight-decay, yet it still yields better models.
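Both definitions translate directly into code; the following is a minimal PyTorch sketch (the helper names are ours) of SI for a single matrix and of the FD penalty with inner depth $d = 0$:

```python
import torch

def spectral_init(W, r):
    """Definition 3.1: factor the unfactorized initialization W into U, V so
    that U V^T is the best rank-r approximation of W."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    s = S[:r].sqrt()
    return U[:, :r] * s, Vh[:r].T * s    # U: m x r, V: n x r

def frobenius_decay(U, V, lam):
    """Definition 3.2 with d = 0: penalize the squared Frobenius norm of the
    recomposed matrix rather than of the individual factors."""
    return 0.5 * lam * (U @ V.T).pow(2).sum()
```

For large layers the penalty need not materialize $UV^T$: since $\|UV^T\|_F^2 = \mathrm{tr}\left((U^TU)(V^TV)\right)$, it can be evaluated using only $r \times r$ matrices.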
" }, { "heading": "3.3 INITIALIZATION AND REGULARIZATION IN THE PRESENCE OF NORMALIZATION", "text": "The use of spectral initialization and Frobenius decay is largely motivated by the norms of the recomposed matrices: SI prevents them from increasing feature-vector norms across layers, while FD constrains model capacity via parameter norms. However, normalization layers like BatchNorm (Ioffe & Szegedy, 2015) and others (Ba et al., 2016) largely negate the forward-pass and model-capacity effects of the norms of the weights parameterizing the layers they follow. Thus for most modern models we need a different explanation for the effectiveness of SI and FD.
Despite the fact that most layers’ norms do not affect inference or capacity, weight-decay remains useful for optimizing deep nets. Recently, Zhang et al. (2019) extended the analysis of Hoffer et al. (2018) to argue that the effective step-size of the weight direction $\hat{W}$ is roughly $\eta / \|W\|_F^2$, where $\eta$ is the SGD step-size. Thus by preventing the norm of $W$ from growing too large, weight-decay maintains a large-enough effective step-size during training. We draw on this analysis to explore the effect of SI and FD on factorized models. For simplicity, we define a normalized layer to be one that does not depend on the scale of its parameter. Ignoring stability offset terms, this definition roughly holds for normalized linear layers, convolutional layers in ResNets, and the Output-Value quadratic form in Transformers if the residual connection is added after rather than before normalization.
Definition 3.3. A normalized layer $g(W, x)$ parameterized by $W \in \mathbb{R}^{m \times n}$ is one that satisfies $g(W, x) = g(\rho W, x)$ for all $W \in \mathbb{R}^{m \times n}$ and all positive scalars $\rho$.
Because the output does not depend on the magnitude of $UV^T$, what matters is the direction of the composed matrix. During an SGD step this direction is updated as follows (proof in Appendix B):
Claim 3.1. At all steps $t \ge 0$ let $g$ be a normalized layer of a differentiable model $f_{\theta_t} : \mathcal{X} \mapsto \mathcal{Y}$ parameterized by $U_t V_t^T$ for $U_t \in \mathbb{R}^{m \times r}$, $V_t \in \mathbb{R}^{n \times r}$, $r \ge 1$. Suppose we update $P = U$ and $P = V$ by SGD, setting $P_{t+1} \leftarrow P_t - \eta \nabla_{P_t}$ using a gradient $\nabla_{P_t} = \frac{1}{|B|}\sum_{(x,y) \in B} \nabla_{P_t}\ell(f_{\theta_t}(x), y)$ over batch $B \subset \mathcal{X} \times \mathcal{Y}$ and $\eta > 0$ sufficiently small. Then for $\hat{\nabla}_t = \nabla_{\hat{W}_t} V_t V_t^T + U_t U_t^T \nabla_{\hat{W}_t}$ we have that the vectorized direction $\hat{w}_t = \mathrm{vec}(\hat{W}_t)$ of $W_t = U_t V_t^T$ is updated as
$$\hat{w}_{t+1} \leftarrow \hat{w}_t - \frac{\eta}{\|W_t\|_F^2}\left(I_{mn} - \hat{w}_t \hat{w}_t^T\right)\mathrm{vec}(\hat{\nabla}_t) + O(\eta^2) \qquad (3)$$
Note that (1) we ignore decay because $\lambda = O(\eta)$ so any resulting term is $O(\eta^2)$, and (2) the update rule is almost the same as that obtained for the unfactorized case by Zhang et al. (2019), except they have $\hat{\nabla}_t$ as the true gradient of the direction. Thus, apart from a rank-one correction, $\hat{W}_t$ is approximately updated with step-size $\eta / \|W_t\|_F^2$ multiplying a linear transformation of its gradient. To understand the nature of this transformation, note that at spectral initialization we have that $V_0^T V_0 = U_0^T U_0 = \Sigma_r$ are diagonal matrices of singular values of the full-rank initialization $W$; furthermore, if $W$ is a Gaussian ensemble with scale $1/\sqrt{n}$, which roughly aligns with common initialization schemes (He et al., 2015), then its singular values are roughly distributed around 1 and supported on $[0, 2]$ (Bai & Yin, 1993). Since $\hat{\nabla}_t = \nabla_{\hat{W}_t} V_t V_t^T + U_t U_t^T \nabla_{\hat{W}_t}$, this suggests that, at spectral initialization, an effective learning rate of $\eta / \|W_0\|_F^2$ is a reasonable approximation for the factorized update. This points to the role of SI being to initialize the factorization at an appropriate scale and perhaps also to make the first update more aligned with the gradient w.r.t. $\hat{W}_0$.
As in the unfactorized case, our analysis suggests that the main role of decay may be to maintain a large effective learning rate $\eta / \|W\|_F^2$; furthermore, FD may be more effective than regular decay because it provides stronger regularization and directly penalizes the quantity of interest. We support this hypothesis using experiments analogous to Zhang et al. (2019): we compare training a low-rank ResNet-20 with FD to training it with no decay but at each step normalizing all BatchNormed layers to have the same Frobenius norm as the FD run. In Figure 3 we see that the latter scheme closely tracks the former in terms of both the training loss and the test accuracy. Figure 3 also shows that FD maintains a higher effective step-size than regular weight-decay throughout training.
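The norm-matching control used for Figure 3 can be sketched as follows (a hypothetical helper, applied to each factorized, BatchNormed layer after every optimizer step):

```python
import torch

@torch.no_grad()
def match_frobenius_norm(U, V, target_norm):
    """Rescale the factors so that ||U V^T||_F equals target_norm, leaving the
    direction of the composed matrix, which is all a normalized layer sees,
    unchanged."""
    scale = (target_norm / (U @ V.T).norm()) ** 0.5  # (sU)(sV)^T = s^2 U V^T
    U.mul_(scale)
    V.mul_(scale)
```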
" }, { "heading": "4 COMPRESSED MODEL TRAINING: LOW-RANK, SPARSE, AND TENSORIAL", "text": "We first study SI and FD for training factorized models with low-rank convolutional layers, comparing against the dominant approaches to low-memory training: sparse layers and tensor decomposition. For direct comparison we evaluate models with near-identical parameter counts; note that by normalizing by memory we disadvantage the low-rank approach, which often has a comparative advantage in speed. All models are trained for 200 epochs with the same optimizer settings as for the unfactorized models; the weight-decay coefficient is left unchanged when weight-decay is replaced by FD.
Low-Rank and Sparse Training: We train the modified ResNet32 and VGG19 used in the lottery-ticket-guessing literature (Wang et al., 2020; Su et al., 2020). Motivated by Frankle & Carbin (2019), such methods fix a sparsity pattern at initialization and train only the unpruned weights. While they achieve high parameter savings, their computational advantages over full models are less clear, as software and accelerators often do not efficiently implement arbitrary sparsity (Paszke et al., 2019; NVIDIA, 2020). In contrast, we do see acceleration via low-rank convolutions, almost halving the time of ResNet’s forward and backward pass at the highest compression. For completeness, we also show methods that vary the sparsity pattern (dynamic), prune trained models (pruning), and prune trained models and retrain (lottery); note that the latter two require training an uncompressed model.
In Table 1 we see that the low-rank approach, with SI & FD, dominates at the higher memory settings of ResNet across all three datasets considered, often outperforming even approaches that train an uncompressed model first. It is also close to the best compressed training approach in the lowest memory setting for CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Deng et al., 2009).
On the other hand, the low-rank approach is substantially worse for VGG; nevertheless, ResNet is both smaller and more accurate, so if the goal is an accurate compressed model learned from scratch then one should prefer the low-rank approach. Our results demonstrate a strong, simple baseline not frequently compared to in the low-memory training literature (Frankle & Carbin, 2019; Wang et al., 2020; Su et al., 2020). In fact, since it preserves the top singular components of the original weights, SI can itself be considered a type of (spectral) magnitude pruning. Finally, Table 1 highlights the complementary nature of SI and FD, which together outperform regular low-rank training on both models, although interestingly they consistently decrease ResNet performance when used separately.
Matrix and Tensor Decomposition: We next compare against tensor decompositions, another common approach to small-model training (Kossaifi et al., 2020a;b). A recent evaluation of tensor methods and other related approaches by Gray et al. (2019) found that Tensor-Train decomposition (Oseledets, 2011) obtained the best memory-accuracy trade-off on WideResNet; we thus compare directly to this approach. 
Matrix and Tensor Decomposition: We next compare against tensor decompositions, another common approach to small model training (Kossaifi et al., 2020a;b). A recent evaluation of tensors and other related approaches by Gray et al. (2019) found that Tensor-Train decomposition (Oseledets, 2011) obtained the best memory-accuracy trade-off on WideResNet; we thus compare directly to this approach. Note that, while we normalize according to memory, Tensor-Train must be expanded to the full tensor prior to convolution and thus increases the required compute, unlike low-rank factorization. In Table 2 we show that at 6.7% of the original parameters the low-rank approach with FD and SI significantly outperforms Tensor-Train. Tensor-Train excels at the highly compressed 1.7% setting but is greatly improved by leveraging tensor analogs of SI (decomposing a random initialization rather than randomly initializing tensor cores directly) and of FD (penalizing the squared Frobenius norm of the full tensor). We also compare to CRS (Gray et al., 2019), which scales regular weight-decay by the compression rate. It is roughly as beneficial as FD across different evaluations of Tensor-Train, but FD is significantly better for low-rank factorized neural layers." }, { "heading": "5 OVERCOMPLETE KNOWLEDGE DISTILLATION", "text": "In contrast to compressed model training, in knowledge distillation (KD) we have the capacity to train a large model but want to deploy a small one. We study what we call overcomplete KD, in which a network is over-parameterized in such a way that an equivalent small model can be directly recovered. Similar approaches have been previously studied only with small models (Arora et al., 2018; Guo et al., 2020) or using convolution-specific methods (Ding et al., 2019; Cao et al., 2020). We take a simple factorization approach in which we decompose weights W ∈ R^{m×n} into products of two or three matrices while increasing the parameter count, either via depth or width. Here we consider three cases: the full (rank) setting where W = UV^T for U ∈ R^{m×m}, V ∈ R^{n×m}; the deep setting where W = UMV^T for U, M ∈ R^{m×m}, V ∈ R^{n×m}; and the wide setting where W = UV^T for U ∈ R^{m×3m}, V ∈ R^{n×3m}. As before, we factorize all but the first and last layers and train the factorized networks using the same routine as for the base model, except when replacing weight-decay by FD. We do not study SI here, using the default initialization for U, V^T and setting M = I_m. Table 3 shows the strength of this approach: we can train ResNet32 to beat ResNet56 and ResNet56 to beat ResNet110 while needing about the same number of parameters during training and only half as many at inference. Note that training ResNet56 using the "full" setting is also 1.5x faster than training ResNet110 (see Table 8 for timings). Furthermore, we improve substantially upon DOConv (Cao et al., 2020), which obtains much smaller improvements over the unfactorized baseline. In fact, Table 9 suggests our approach compares favorably even with regular KD methods; combined with its simplicity and efficiency, this suggests that our overcomplete method can be considered as a baseline in the field. Finally, we also show that Frobenius decay is critical: without it, overcomplete KD performs worse than regular training. A detailed visualization of this, comparing the CIFAR performance of factorized ResNets across a range of rank-scale settings covering both the current high-rank (distillation) case and the previous low-rank (compression) case, can be found in Figure 2. A collapsible overcomplete layer is sketched below." },
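As a rough illustration of the deep overcomplete setting just described, the sketch below parameterizes a linear layer as W = UMV^T during training and collapses it back to a plain weight matrix for deployment; this is our own minimal rendering (class and method names are hypothetical), with FD here amounting to a penalty on ‖UMV^T‖²_F.

```python
import torch
import torch.nn as nn

class OvercompleteLinear(nn.Module):
    """'Deep' overcomplete linear layer: W = U M V^T with U, M of shape
    m x m and V of shape n x m. The extra parameters exist only during
    training; collapse() recovers an ordinary m x n weight matrix."""

    def __init__(self, in_features, out_features):
        super().__init__()
        m, n = out_features, in_features
        self.U = nn.Parameter(torch.empty(m, m))
        self.M = nn.Parameter(torch.eye(m))   # identity init, as in Section 5
        self.V = nn.Parameter(torch.empty(n, m))
        nn.init.kaiming_normal_(self.U)
        nn.init.kaiming_normal_(self.V)

    def weight(self):
        return self.U @ self.M @ self.V.T     # the m x n product matrix

    def forward(self, x):
        return x @ self.weight().T

    def fd_penalty(self, lam):
        # Frobenius decay on the product matrix, added to the training loss
        return 0.5 * lam * self.weight().pow(2).sum()

    def collapse(self) -> nn.Linear:
        lin = nn.Linear(self.V.shape[0], self.U.shape[0], bias=False)
        with torch.no_grad():
            lin.weight.copy_(self.weight())
        return lin
```

The inference-time model obtained via collapse() has exactly the capacity of the unfactorized layer, which is what makes the accuracy gains in Table 3 a form of distillation rather than added expressivity.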
{ "heading": "6 MULTI-HEAD ATTENTION AS FACTORIZED QUADRATIC FORMS", "text": "Our final setting is multi-head attention (Transformer) architectures (Vaswani et al., 2017). As discussed in Section 2, the MHA component is already a factorized layer, consisting of an aggregation over the output of two quadratic forms per head: the "Query-Key" (QK) forms passed into the softmax and the "Output-Value" (OV) forms that multiply the resulting attention scores (c.f. Equation 1). Transformers also contain large linear layers, which can also be factorized for efficiency.\nImproving Transformer Training and Compression: We start with the original Transformer architecture on the IWSLT-14 translation task (Cettolo et al., 2014) but use an SGD-based training routine as a baseline (Gehring et al., 2017). As there is no weight-decay by default, we first tune both it and FD on the non-factorized model; here FD is applied to the implicitly factorized MHA layers.\nIn Figure 4 we show that this alone yields an improvement: whereas the effect of weight-decay is either negligible or negative, tuning FD does improve the BLEU score. Furthermore, we see that tuning just the OV form in MHA is more robust at higher regularization levels than tuning both OV and QK. We conclude by examining both SI and FD when reducing the number of parameters by (1) factorizing all linear and embedding layers and (2) scaling down the embedding dimension in MHA. In Figure 4 we see that the benefit of Frobenius decay disappears when compressing; on the other hand, SI provides a strong boost under both types of decay, and is in fact necessary for FD to work at all. Note that the major effect here is for the factorized linear layers; we found that SI has minimal effect when applied to MHA, likely because those initializations have already been tuned.\nFLAMBé for Unsupervised BERT Pre-Training: Lastly, we examine BERT (Devlin et al., 2019), a large Transformer trained on a massive unsupervised text corpus and evaluated on downstream language tasks. The state-of-the-art training approach is via the LAMB optimizer (You et al., 2020) using weight-decay based on the AdamW algorithm of Loshchilov & Hutter (2019), in which λη times each parameter is subtracted from itself; this is equivalent to ℓ2-regularization for SGD but not for adaptive methods. We can define a similar Frobenius alternative by subtracting λη times the Frobenius gradients UV^TV = ∇_U (1/2)‖UV^T‖²_F and VU^TU = ∇_V (1/2)‖UV^T‖²_F from U and V, respectively; when used with the LAMB optimizer we call this method FLAMBé. We see in Table 4 that FLAMBé outperforms the simple FD modification of LAMB and, as with IWSLT, leads to an improvement in downstream task performance without changing model size. For BERT, however, applying FD via FLAMBé also leads to better downstream performance when scaling down the MHA embedding dimension by half. Besides achieving a better compressed model, the success of FLAMBé also shows the potential of new types of decay schemes for adaptive methods; the decoupled Frobenius-decay step is sketched in isolation below." },
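This is our own illustration of the update just described; in FLAMBé it is combined with the LAMB optimizer, which is not reproduced here.

```python
import torch

@torch.no_grad()
def decoupled_frobenius_decay(U, V, lr, lam):
    """AdamW-style decoupled decay for a factorized weight W = U V^T:
    subtract lam*lr times the Frobenius gradients U V^T V and V U^T U,
    rather than lam*lr*U and lam*lr*V as in decoupled weight-decay."""
    W = U @ V.T
    dU = lam * lr * (W @ V)    # = lam*lr * U V^T V
    dV = lam * lr * (W.T @ U)  # = lam*lr * V U^T U
    U -= dU                    # both steps use the pre-update factors
    V -= dV

# hypothetical usage after each adaptive-optimizer step:
#   optimizer.step()
#   decoupled_frobenius_decay(layer.U, layer.V, lr=current_lr, lam=0.01)
```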
{ "heading": "7 CONCLUSION", "text": "In this paper we studied the design of training algorithms for deep nets containing factorized layers, demonstrating that two simple specializations of standard initialization and regularization schemes from the unfactorized case lead to strong improvements for model compression, knowledge distillation, and MHA-based architectures. While we largely focused on the case where the unfactorized model uses Gaussian initialization and ℓ2-regularization, we believe our work provides guidance for the many cases where other schemes are used to enforce alternative priors or improve optimization. For example, SI as defined can be applied to any random initialization, while our FD results suggest that regularizers such as penalty terms and DropConnect (Wan et al., 2013) should be applied to the product matrix rather than directly to the individual factors. The success of SI and FD for both low-rank and tensor decomposition also suggests that these schemes may be useful for other types of factorized neural layers, such as ACDC (Moczulski et al., 2015) or K-matrices (Dao et al., 2020)." }, { "heading": "A GENERALIZATION ERROR OF FACTORIZED LAYERS", "text": "In this section we briefly discuss how to apply the generalization bounds in Neyshabur et al. (2018) to factorized models.\nDefinition A.1. For any γ > 0 and distribution D over classification data X × Y, the γ-margin-loss of a model f_θ : X ↦ Y is defined as ℓ^γ_D(f_θ) = P_{(x,y)∼D}(f_θ(x)[y] ≤ γ + max_{y′≠y} f_θ(x)[y′]).\nNote that ℓ^0_D is the expected classification loss over the distribution and ℓ^γ_{Uniform(S)} is the empirical γ-margin-loss. We have the following corollaries:\nCorollary A.1. Let f_θ be a neural network with L − 1 factorized fully-connected ReLU layers g_i(U_iV_i^T, x_{i−1}) = max{U_iV_i^T x_{i−1}, 0_m} of hidden dimension m and one factorized classification layer g_L(U_LV_L^T, x_{L−1}) = U_LV_L^T x_{L−1}. Let D be a distribution over B-bounded classification data X × Y and suppose we have a finite set S of i.i.d. samples from it. If rank(U_iV_i^T) ≤ r and ‖U_iV_i^T‖_2 ≤ σ for all i, then for any δ > 0 we have w.p. 1 − δ that\nℓ^0_D(f_θ) ≤ ℓ^γ_{Uniform(S)}(f_θ) + O(√((B²L³m σ^{2L} r log(Lm) + log(L|S|/δ)) / (γ²|S|)))   (4)\nProof. Apply the inequality ‖W_i‖²_F / ‖W_i‖²_2 ≤ rank(W_i) together with the bound ∏_{i=1}^L ‖W_i‖²_2 ≤ σ^{2L} to Neyshabur et al. (2018, Theorem 1) and substitute W_i = U_iV_i^T.\nCorollary A.2. Let f_θ be a neural network with L − 1 factorized fully-connected ReLU layers g_i(U_i(∏_{j=1}^d M_ij)V_i^T, x_{i−1}) = max{U_i(∏_{j=1}^d M_ij)V_i^T x_{i−1}, 0_m} of hidden dimension m and one factorized classification layer g_L(U_L(∏_{j=1}^d M_Lj)V_L^T, x_{L−1}) = U_L(∏_{j=1}^d M_Lj)V_L^T x_{L−1}. Let D be a distribution over B-bounded classification data X × Y from which we have a finite set S of i.i.d. samples. If ‖U_i(∏_{j=1}^d M_ij)V_i^T‖_2 ≤ σ for all i, then for any δ > 0 we have w.p. 1 − δ that\nℓ^0_D(f_θ) ≤ ℓ^γ_{Uniform(S)}(f_θ) + O(√((B²L²m σ^{2L−2} log(Lm) Σ_{i=1}^L ‖U_i(∏_{j=1}^d M_ij)V_i^T‖²_F + log(L|S|/δ)) / (γ²|S|)))   (5)\nProof. Apply the equality (∏_{i=1}^L ‖W_i‖²_2) Σ_{i=1}^L ‖W_i‖²_F/‖W_i‖²_2 = Σ_{i=1}^L ‖W_i‖²_F ∏_{j≠i} ‖W_j‖²_2 to Neyshabur et al. (2018, Theorem 1) and substitute W_i = U_i(∏_{j=1}^d M_ij)V_i^T." }, { "heading": "B PROOF OF CLAIM 3.1", "text": "Proof. Let ρ_t = ‖W_t‖_F be the Frobenius norm of the composed matrix at time t. Applying the update rules for U_t and V_t and using the fact that ρ_t∇_{W_t} = ∇_{Ŵ_t} yields\nW_{t+1} = (U_t − η∇_{U_t})(V_t^T − η∇_{V_t^T}) = (U_t − η∇_{W_t}V_t)(V_t^T − ηU_t^T∇_{W_t}) = W_t − (η/ρ_t)∇̂_t + η²∇_{W_t}W_t^T∇_{W_t}   (6)\nTaking the squared norm of both sides yields ρ²_{t+1} = ρ²_t − 2η Tr(Ŵ_t^T∇̂_t) + O(η²); we can then take the square root of both sides and use a Taylor expansion to obtain\nρ_{t+1} = ρ_t √(1 − (2η/ρ²_t) Tr(Ŵ_t^T∇̂_t) + O(η²)) = ρ_t − (η/ρ_t) Tr(Ŵ_t^T∇̂_t) + O(η²)   (7)\nThen, starting from Equation 6 divided by ρ_{t+1}, substituting Equation 7, and applying a Taylor expansion yields\nŴ_{t+1} = (ρ_t/ρ_{t+1})Ŵ_t − (η/(ρ_tρ_{t+1}))∇̂_t + O(η²) = Ŵ_t / (1 − (η/ρ²_t) Tr(Ŵ_t^T∇̂_t)) − (η/(ρ²_t − η Tr(Ŵ_t^T∇̂_t)))∇̂_t + O(η²) = (1 + (η/ρ²_t) Tr(Ŵ_t^T∇̂_t))Ŵ_t − (η/ρ²_t)∇̂_t + O(η²) = Ŵ_t − (η/ρ²_t)(∇̂_t − (ŵ_t^T vec(∇̂_t))Ŵ_t) + O(η²)   (8)\nVectorizing and observing that (ŵ_t^T vec(∇̂_t))ŵ_t = ŵ_tŵ_t^T vec(∇̂_t) yields the result."
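As a sanity check on the algebra above, the following self-contained sketch (our own; not part of the paper) numerically verifies Equation 3 for a toy normalized layer g(W, x) = Wx/‖W‖_F, for which ρ_t∇_{W_t} = ∇_{Ŵ_t} holds: the deviation between one SGD step on the factors and the predicted direction update should be of order η².

```python
import torch

torch.manual_seed(0)
m, n, r, eta = 6, 5, 3, 1e-4
U, V = torch.randn(m, r), torch.randn(n, r)
x, y = torch.randn(n), torch.randn(m)

def loss_fn(W):
    # a normalized layer: the output depends only on the direction of W
    return (((W @ x) / W.norm()) - y).pow(2).sum()

W = (U @ V.T).detach().requires_grad_(True)
loss_fn(W).backward()
gW, rho = W.grad, W.detach().norm()

# one SGD step on the factors, with grad_U = gW V and grad_V = gW^T U
W1 = (U - eta * gW @ V) @ (V - eta * gW.T @ U).T
w_hat1 = (W1 / W1.norm()).flatten()

# the prediction of Equation (3)
w_hat = (W.detach() / rho).flatten()
g_hat = rho * (gW @ V @ V.T + U @ U.T @ gW)            # nabla-hat_t
proj = torch.eye(m * n) - torch.outer(w_hat, w_hat)    # I_mn - w w^T
pred = w_hat - eta / rho ** 2 * (proj @ g_hat.flatten())
print(f"deviation {(w_hat1 - pred).norm():.2e} vs eta^2 = {eta ** 2:.0e}")
```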
}, { "heading": "C EXPERIMENTAL DETAILS FOR TRAINING CONVOLUTIONAL NETWORKS", "text": "For experiments with regular ResNets on CIFAR we used code provided here: https:// github.com/akamaster/pytorch_resnet_cifar10. All hyperparameter settings are the same, except initialization and regularization as appropriate, with the exception that we use a warmup epoch with a 10 times smaller learning rate for ResNet56 for stability (this is already done by default for ResNet110).\nFor comparisons with sparse model training of ResNet32x2 and VGG19NFC we use code by Wang et al. (2019, https://github.com/alecwangcq/EigenDamage-Pytorch), which is closely related to that of the lottery-ticket-guessing paper by Wang et al. (2020). All hyperparameter settings are the same, except (1) initialization and regularization as appropriate and (2) for Tiny-ImageNet we only train for 200 epochs instead of 300.\nFor comparisons with tensor decomposition training of WideResNet we use code by Gray et al. (2019, https://github.com/BayesWatch/deficient-efficient). All hyperparameter settings are the same, except initialization and regularization as appropriate." }, { "heading": "D EXPERIMENTAL DETAILS FOR TRAINING TRANSFORMER MODELS", "text": "For experiments with Transformer models on machine translation we used code provided here: https://github.com/StillKeepTry/Transformer-PyTorch. All hyperparameter settings are the same, except initialization and regularization as appropriate.\nFor comparisons with LAMB optimization of BERT, we use an implementation provided by NVIDIA: https://github.com/NVIDIA/DeepLearningExamples/tree/master/ PyTorch/LanguageModeling/BERT. All hyperparameter settings are the same, except initialization and regularization as appropriate. For fine-tuning on SQuAD we apply the same optimization routine to all pre-trained models." }, { "heading": "E PAST WORK ON KNOWLEDGE DISTILLATION", "text": "We first briefly summarize past work on overcomplete KD. While the use of factorized neural layers for improving training has theoretical roots (Arora et al., 2018; Du & Hu, 2019), we are aware of two other works focusing on experimental practicality: ExpandNets (Guo et al., 2020) and DOConv (Cao et al., 2020). As the former focuses on small student networks, numerically we compare directly to the latter, showing much better improvement due to distillation for both ResNet56 and ResNet110 on both CIFAR-10 and CIFAR-100 in Table 3. Note that we are also aware of one paper proposing a related method that also trains an over-parameterized model without increasing expressivity that can then be collapsed into a smaller “original” model (Ding et al., 2019); their approach, called ACNet, passes the input to each layer through differently-shaped kernels that can be composed additively. Note that it is unclear how to express this method as a factorization and it may not be easy to generalize to non-convolutional networks, so we do not view it as an overcomplete KD approach.\nWe now conduct a brief comparison with both these works and more standard approaches to KD. Direct comparison with past work is made difficult by the wide variety of training routines, teacher models, student models, and evaluation procedures employed by the community. Comparing our specific approach is made even more difficult by the fact that we have no teacher network, or at least not one that is a standard model used in computer vision. 
Nevertheless, in Table 9 we collect an array of existing results that can be plausibly compared to our own overcomplete distillation of ResNets. Even here, note that absolute numbers vary significantly, so we focus on changes in accuracy. As can be seen from the results, our overcomplete approach yields the largest improvements for ResNet56 and ResNet110 on both CIFAR-10 and CIFAR-100. For ResNet32, the Snapshot Distillation method of Xu & Liu (2019) outperforms our own, although it does not do so for ResNet110 and is not evaluated for ResNet56. On CIFAR-10 the additive ACNet approach also has a larger performance improvement for ResNet32. Nevertheless, our method is still fairly close in these cases, so the results in Table 9 together with the simplicity and short (single-stage) distillation routine of our overcomplete approach suggest that it should be a standard baseline for KD." } ]
2021
FACTORIZED NEURAL LAYERS
SP:f33566d66d4f232d32107d392bb27c110b0b0ae3
[ "The paper proposes a two-stage defense method to improve the adversarial robustness over different perturbation types. Specifically, it first builds a hierarchical binary classifier to differentiable the perturbation types and then uses the result to guide to its corresponding defense models. It first proves the different types of perturbations could be separable and the adversary could be weakened to fool the binary classifier. It shows their methods achieve a clear improvement in the experiments." ]
Despite the recent advances in adversarial-training-based defenses, deep neural networks are still vulnerable to adversarial attacks outside the perturbation type they are trained to be robust against. Recent works have proposed defenses to improve the robustness of a single model against the union of multiple perturbation types. However, when evaluating the model against each individual attack, these methods still suffer significant trade-offs compared to the ones specifically trained to be robust against that perturbation type. In this work, we introduce the problem of categorizing adversarial examples based on their ℓp perturbation types. Based on our analysis, we propose PROTECTOR, a two-stage pipeline to improve the robustness against multiple perturbation types. Instead of training a single predictor, PROTECTOR first categorizes the perturbation type of the input, and then utilizes a predictor specifically trained against the predicted perturbation type to make the final prediction. We first theoretically show that adversarial examples created by different perturbation types constitute different distributions, which makes it possible to distinguish them. Further, we show that at test time the adversary faces a natural trade-off between fooling the perturbation type classifier and the succeeding predictor optimized with perturbation-specific adversarial training. This makes it challenging for an adversary to plant strong attacks against the whole pipeline. In addition, we demonstrate the realization of this trade-off in deep networks by adding random noise to the model input at test time, enabling enhanced robustness against strong adaptive attacks. Extensive experiments on MNIST and CIFAR-10 show that PROTECTOR outperforms prior adversarial-training-based defenses by over 5% when tested against the union of ℓ1, ℓ2, ℓ∞ attacks.
[]
[ { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a query-efficient black-box adversarial attack via random search", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Anand Bhattad", "Min Jin Chong", "Kaizhao Liang", "Bo Li", "D.A. Forsyth" ], "title": "Unrestricted adversarial examples via semantic manipulation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "J. Buckman", "Aurko Roy", "Colin Raffel", "Ian J. Goodfellow" ], "title": "Thermometer encoding: One hot way to resist adversarial examples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C Duchi", "Percy S Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alessandro Cennamo", "Ido Freeman", "Anton Kummert" ], "title": "A statistical defense approach for detecting adversarial examples", "venue": "In Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems,", "year": 2020 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Provable robustness against all adversarial lp-perturbations for p ≥ 1", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Reuben Feinman", "Ryan R Curtin", "Saurabh Shintre", "Andrew B Gardner" ], "title": "Detecting adversarial samples from artifacts", "venue": "arXiv preprint arXiv:1703.00410,", "year": 2017 }, { "authors": [ "Gil Fidel", "Ron Bitton", 
"Asaf Shabtai" ], "title": "When explainability meets adversarial learning: Detecting adversarial examples using shap signatures", "venue": null, "year": 1909 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Warren He", "James Wei", "Xinyun Chen", "Nicholas Carlini", "Dawn Song" ], "title": "Adversarial example defense: Ensembles of weak defenses are not strong", "venue": "In 11th {USENIX}Workshop on Offensive Technologies ({WOOT}", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Shengyuan Hu", "Tao Yu", "Chuan Guo", "Wei-Lun Chao", "Kilian Q Weinberger" ], "title": "A new defense against adversarial images: Turning a weakness into a strength", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt" ], "title": "Testing robustness against unforeseen adversaries", "venue": "arXiv preprint arXiv:1908.08016,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "ICLR Workshop,", "year": 2017 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. 
Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": null, "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J. Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "arXiv preprint arXiv:2002.11569,", "year": 2020 }, { "authors": [ "Jérôme Rony", "Luiz G Hafemann", "Luiz S Oliveira", "Ismail Ben Ayed", "Robert Sabourin", "Eric Granger" ], "title": "Decoupling direction and norm for efficient gradient-based l2 adversarial attacks and defenses", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Leslie N Smith" ], "title": "A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay", "venue": "arXiv preprint arXiv:1803.09820,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "P. Tabacof", "E. 
Valle" ], "title": "Exploring the space of adversarial images", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jon Shlens", "Ekin Dogus Cubuk", "Justin Gilmer" ], "title": "A fourier perspective on model robustness in computer vision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xuwang Yin", "Soheil Kolouri", "Gustavo K Rohde" ], "title": "Adversarial example detection and classification with asymmetrical adversarial training", "venue": "arXiv preprint arXiv:1905.11475,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhang" ], "title": "2019) for our Mp models, and we train these models using their proposed TRADES loss. For CIFAR-10, we use the same training setup and model architecture", "venue": null, "year": 2019 }, { "authors": [ "Carmon" ], "title": "2019), which is based on a robust self-training algorithm that utilizes unlabeled data", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "There has been a long line of work studying the vulnerabilities of machine learning models to small changes in the input data. In particular, most existing works focus on `p bounded perturbations (Szegedy et al., 2013; Goodfellow et al., 2015). While majority of the prior work aims at achieving robustness against a single perturbation type (Madry et al., 2018; Kurakin et al., 2017; Tramèr et al., 2018; Dong et al., 2018; Zhang et al., 2019; Carmon et al., 2019), real-world deployment of machine learning models requires them to be robust against various imperceptible changes in the input, irrespective of the attack type. Prior work has shown that when models are trained to be robust against one perturbation type, such robustness typically does not transfer to attacks of a different type (Schott et al., 2018; Kang et al., 2019). As a result, recent works have proposed to develop models that are robust against the union of multiple perturbation types (Tramèr & Boneh, 2019; Maini et al., 2020). Specifically, these works consider adversaries limited by their `p distance from the original input for p ∈ {1, 2,∞}. While these methods improve the overall robustness against multiple perturbation types, when evaluating the robustness against each individual perturbation type, the robustness of models trained by these methods is still considerably worse than those trained on a single perturbation type. Further, these methods are found sensitive to small changes in hyperparameters.\nIn this work, we propose an alternative view that does not require a single predictor to be robust against a union of perturbation types. Instead, we propose to utilize a union of predictors to improve the\n1We will open-source the code, pre-trained models, and perturbation type datasets upon publication.\noverall robustness, where each predictor is specialized to defend against certain perturbation types. In particular, we introduce the problem of categorizing adversarial examples based on their perturbation types. Based on this idea, we propose PROTECTOR, a two-stage pipeline that performs Perturbation Type Categorization for Robustness against multiple perturbations. Specifically, first a perturbation type classifier predicts the type of the attack. Then, among the second-level predictors, PROTECTOR selects the one that is the most robust to the predicted perturbation type to make final prediction.\nWe validate our approach from both theoretical and empirical aspects. First, we present theoretical analysis to show that for benign samples with the same ground truth label, their distributions become highly distinct when added with different types of perturbations, and thus can be separated. Further, we show that there exists a natural tension between attacking the top-level perturbation classifier and the second-level predictors – strong attacks against the second-level predictors make it easier for the perturbation classifier to predict the adversarial perturbation type, and fooling the perturbation classifier requires planting weaker (or less representative) attacks against the second-level predictors. As a result, even an imperfect perturbation classifier is sufficient to significantly improve the overall robustness of the model to multiple perturbation types.\nEmpirically, we show that the perturbation type classifier generalizes well on classifying adversarial examples against different adversarially trained models. 
Then we further compare PROTECTOR to the state-of-the-art defenses against multiple perturbations on MNIST and CIFAR-10. PROTECTOR outperforms prior approaches by over 5% against the union of the ℓ1, ℓ2 and ℓ∞ attacks. While past work has focused on the worst-case metric against all attacks, on average these methods suffer significant trade-offs against individual attacks. From the suite of 25 different attacks tested, the average improvement for PROTECTOR over all the attacks w.r.t. the state-of-the-art baseline defense is ∼15% on both MNIST and CIFAR10. In particular, by adding random noise to the model input at test time, we further increase the tension between attacking the top-level and second-level components, bringing an additional improvement in robustness against adaptive attackers. Additionally, PROTECTOR provides a modular way to integrate and update defenses against a single perturbation type." }, { "heading": "2 RELATED WORK", "text": "Adversarial examples. The realization of the existence of adversarial examples in deep neural networks has spurred active research on attack algorithms and defense proposals (Szegedy et al., 2013). Among different types of attacks (Madry et al., 2018; Hendrycks et al., 2019; Hendrycks & Dietterich, 2019; Bhattad et al., 2020), the most commonly studied ones constrain the adversarial perturbation within an ℓp region of radius εp around the original input. To improve the model robustness in the presence of such adversaries, the majority of existing defenses utilize adversarial training (Goodfellow et al., 2015), which augments the training dataset with adversarial images. To date, different variants of the original adversarial training algorithm remain the most successful defenses against adversarial attacks (Carmon et al., 2019; Zhang et al., 2019; Wong et al., 2020; Rice et al., 2020). Other types of defenses include input transformation (Guo et al., 2018; Buckman et al., 2018) and network distillation (Papernot et al., 2016), but these were rendered ineffective under stronger adversaries (He et al., 2017; Carlini & Wagner, 2017a; Athalye et al., 2018; Tramer et al., 2020). Other works have explored the relation between randomizing the inputs and adversarial examples. Tabacof & Valle (2016) analyzed the change in adversarial robustness with varying levels of noise. Hu et al. (2019) evaluated the robustness of a data point to random noise to detect adversarial examples, whereas Cohen et al. (2019) utilized randomized smoothing for certified robustness to adversarial attacks.\nDefenses against multiple perturbation types. Recent research has been drawn towards the goal of universal adversarial robustness. Since ℓp-norm bounded attacks are amongst the strongest attacks in the adversarial examples literature, defending against a union of such attacks is an important step towards this end goal. Schott et al. (2018); Kang et al. (2019) showed that models trained for a given ℓp-norm bounded attack are not robust against attacks in a different ℓq region. Succeeding work has aimed at developing a single model that is robust against the union of multiple perturbation types. Schott et al. (2018) proposed the use of multiple variational autoencoders to achieve robustness to multiple ℓp attacks on the MNIST dataset. Tramèr & Boneh (2019) used simple aggregations of multiple adversaries to achieve non-trivial robust accuracy against the union of the ℓ1, ℓ2, ℓ∞ regions. Maini et al.
(2020) proposed the MSD algorithm that takes gradient steps in the union of multiple ℓp regions to improve multiple perturbation robustness. In a related line of work, Croce & Hein (2020a) proposed a method for provable robustness against all ℓp regions for p ≥ 1. Instead of presenting empirical results, they study the upper and lower bounds of certified robust test error on much smaller perturbation radii. Therefore, their work has a different focus, and is not directly comparable to the empirical defenses studied in our work.\nDetection of adversarial examples. Multiple prior works have focused on detecting adversarial examples (Feinman et al., 2017; Lee et al., 2018; Ma et al., 2018; Cennamo et al., 2020; Fidel et al., 2019; Yin et al., 2019a;b). However, most of these defenses have been shown to be vulnerable in the presence of adaptive adversaries (Carlini & Wagner, 2017a; Tramer et al., 2020). In comparison, our work focuses on the more challenging problem of categorizing different perturbation types. However, we show that by establishing a trade-off between fooling the perturbation classifier and the individual ℓp-robust models, even an imperfect perturbation classifier is sufficient to make our pipeline robust." }, { "heading": "3 PROTECTOR: PERTURBATION TYPE CATEGORIZATION FOR ROBUSTNESS", "text": "In this section, we discuss our proposed PROTECTOR approach, which performs perturbation type categorization to improve the model robustness against multiple perturbation types. We first illustrate the PROTECTOR pipeline in Figure 1, then discuss the details of each component.\nAt a high level, PROTECTOR performs the classification task as a two-stage process. Given an input x, PROTECTOR first utilizes a perturbation classifier Cadv to predict its adversarial perturbation type. Then, based on the ℓp attack type predicted by Cadv, PROTECTOR uses the corresponding second-level predictor Mp to provide the final prediction, where Mp is specially trained to be robust against the ℓp attack. Formally, let f_θ be the PROTECTOR model; then the final prediction is:\nf_θ(x) = Mp(x), s.t. p = argmax Cadv(x)   (1)\nNote that when the input is a benign image, it could be classified as any perturbation type by Cadv, since all second-level predictors should achieve a high test accuracy on benign images.\nAs shown in Figure 1, although we consider the robustness against three attack types, i.e., ℓ1, ℓ2, ℓ∞ perturbations, unless otherwise specified, our perturbation classifier performs binary classification between p = {{1, 2}, ∞}. As will be discussed in Section 6, using two second-level predictors achieves better overall robustness than using three second-level predictors. We hypothesize that compared to the ℓ∞ adversarial examples, ℓ1 and ℓ2 attacks are harder to separate, especially when facing an adaptive adversary which aims to attack the entire pipeline. To provide an intuitive illustration, we randomly sample 10K adversarial examples generated with PGD attacks on MNIST, and visualize the results of the Principal Component Analysis (PCA) in Figure 2. We observe that the first two principal components for ℓ1 and ℓ2 adversarial examples are largely overlapping, while those for ℓ∞ are clearly from a different distribution. Note that this simple visualization by no means suggests that ℓ1 and ℓ2 adversarial examples are not separable; it merely serves as a motivation. We sketch the routing rule of Equation 1 below." },
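Assuming PyTorch modules that output logits, the hard routing of Equation 1 can be written as follows; the class name and constructor arguments are hypothetical, and the perturbation classifier here is the binary ℓ1/ℓ2-vs-ℓ∞ variant described above.

```python
import torch
import torch.nn as nn

class Protector(nn.Module):
    """Two-stage pipeline of Equation (1): a perturbation-type classifier
    routes each input to the matching specialized robust predictor."""

    def __init__(self, c_adv: nn.Module, m_l1l2: nn.Module, m_linf: nn.Module):
        super().__init__()
        self.c_adv = c_adv                        # binary: {l1/l2, linf}
        self.predictors = nn.ModuleList([m_l1l2, m_linf])

    def forward(self, x):
        p = self.c_adv(x).argmax(dim=1)           # predicted perturbation type
        outs = torch.stack([m(x) for m in self.predictors], dim=1)
        idx = torch.arange(x.shape[0], device=x.device)
        return outs[idx, p]                       # per-example hard routing
```

Because the argmax routing is non-differentiable, gradients do not flow through the type classifier to the final prediction, which is exactly the property the adaptive attack of Section 5.2 tries to work around.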
{ "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we provide a theoretical justification of our PROTECTOR framework design. First, we formally illustrate the setup of robust classification against multiple ℓp perturbation types, and we consider models trained for a binary classification task. Based on this problem setting, in Theorem 1, we show the existence of a classifier that can separate adversarial examples belonging to different perturbation types. Moreover, in Theorem 2, we show that our PROTECTOR framework naturally offers a trade-off between fooling the perturbation classifier Cadv and the individual robust models Mp, making it extremely difficult for adversaries to stage attacks against the entire pipeline. Note that we focus on the simplified binary classification task for the convenience of theoretical analysis, but our PROTECTOR framework could improve the robustness of models trained on real-world image classification benchmarks as well, and we will discuss the empirical examination in Section 6." }, { "heading": "4.1 PROBLEM SETTING", "text": "Data distribution. We consider a dataset of inputs sampled from the union of two multi-variate Gaussian distributions D, such that the input-label pairs (x, y) can be described as:\ny ∼ u.a.r. {−1, +1}; x_0 ∼ N(yα, σ²), x_1, . . . , x_d ∼ i.i.d. N(yη, σ²)   (2)\nwhere x = [x_0, x_1, . . . , x_d] ∈ R^{d+1} and η = α/√d, such that the absolute value of the mean for any dimension is equal for inputs sampled from both the positive and the negative labels. This setting demonstrates the distinction between a feature x_0 that is strongly correlated with the input label, and d weakly correlated features that are (independently) normally distributed with mean yη and variance σ². For the purposes of this work, we assume that α/σ > 10 (x_0 is strongly correlated) and d > 100 (the remaining d features are weakly correlated, but together represent a strongly correlated feature). We adapt this problem setting from Ilyas et al. (2019), where they used a stochastic feature x_0 = y with probability p, as opposed to a normally distributed input feature as in our case. Our results hold in their setting as well. However, our setting better represents the true data distribution, where input features are seldom stochastically flipped. More discussion can be found in Appendix A.\nPerturbation types. We focus our discussion on adversaries constrained within a fixed ℓp region of radius εp around the original input, for p ∈ S = {1, 2, ∞}. Such adversaries are frequently studied in existing work, primarily for finding the optimal first-order adversaries for different perturbation types. We define ∆_{p,ε} as the ℓp threat model of radius ε, and ∆_S = ∪_{p∈S} ∆_{p,ε}. For a model f_θ parametrized over θ, the objective of the adversary is to find the optimal perturbation δ*, such that:\nδ* = argmax_{δ∈∆_S} ℓ(f_θ(x + δ), y)   (3)\nwhere ℓ(·, ·) is the cross-entropy loss. Based on the model design in Section 3, we focus on discussing the separation of ℓ1 and ℓ∞ in the following theorems, but our proofs can also naturally be adapted to analyze the separability of other perturbation types." }, { "heading": "4.2 SEPARABILITY OF ADVERSARIAL PERTURBATIONS", "text": "Consider a standard classifier M trained with the objective of correctly classifying the label of inputs x ∈ D. Since the original distribution of the input data for each label is known to us, we first aim to examine how adversaries confined within different perturbation regions modify the input. The goal of the adversary is to fool the label predictor M by finding the optimal perturbation δ_p ∀p ∈ S.
The theorem below shows that the distributions of adversarial inputs within different ℓp regions can be separated with a high accuracy; we present the formal proof in Appendix B.\nTheorem 1 (Separability of perturbation types). Given a binary Gaussian classifier M trained on D, consider D^y_p to be the distribution of optimal adversarial inputs (for a class y) against M, within ℓp regions of radius εp, where ε₁ = α and ε∞ = α/√d. The distributions D^y_p (p ∈ {1, ∞}) can be accurately separated by a binary Gaussian classifier Cadv with a misclassification probability P_e ≤ 10^{−24}.\nThe proof sketch is as follows. We first calculate the optimal weights of a binary Gaussian classifier M trained on D. Accordingly, for any input x ∈ D, we find the optimal adversarial perturbation δ_p ∀p ∈ {1, ∞} against M. We discuss how these perturbed inputs x + δ_p also follow a normal distribution, with shifted means. Finally, for data points belonging to a given classification label, we show that Cadv is able to predict the correct perturbation type with a very low classification error. We present the formal proof in Appendix B." }, { "heading": "4.3 ADVERSARIAL TRADE-OFF", "text": "In Section 4.2, we showed that the optimal perturbations corresponding to different perturbation types belong to distinct data distributions, and it is fairly easy to separate them using a simple classifier. However, in the white-box setting, the adversary has knowledge of both the perturbation classifier Cadv and the specialized robust models Mp at test time. Therefore, the adversary can adapt the attack to fool the entire pipeline, instead of the individual models Mp alone.\nNote that there are some overlapping regions among different ℓp perturbation regions. For example, every adversary could set δ_p = 0 as a valid perturbation, and thus it is clearly not possible for the attack classifier Cadv to correctly classify the attack (∀p ∈ {1, 2, ∞}) in such a scenario. However, such a perturbation is not useful, because all the base models can correctly classify unperturbed inputs with a high probability. In the following theorem, we examine the robustness of our PROTECTOR pipeline in the presence of such strong dynamic adversaries.\nTheorem 2 (Adversarial trade-off). Given a data distribution D, adversarially trained models M_{p,εp}, and an attack classifier Cadv that distinguishes perturbations of different ℓp attack types for p ∈ {1, ∞}, the probability of a successful attack for the worst-case adversary over the entire PROTECTOR pipeline is P_e < 0.01 for ε₁ = α + 2σ and ε∞ = (α + 2σ)/√d.\nHere, the worst-case adversary refers to an adaptive adversary that has full knowledge of the defense strategy, and makes the strongest adversarial decision given the perturbation constraints. In Appendix C.2, we discuss how ε₁ and ε∞ are set so that the ℓ1 and ℓ∞ adversaries can fool the M_{∞,ε∞} and M_{1,ε₁} models respectively with a high success rate. Our proof sketch is as follows. We first show that when trained on D, an adversarially robust model Mp can achieve robust accuracy of greater than 99% against the attack type it was trained for. On the contrary, when subjected to attacks outside the trained perturbation region, such robust accuracy reduces to under 2%. Then, we analyze the modified distributions of the inputs perturbed by different ℓp attacks. Based on this analysis, we construct a simple decision rule for the perturbation classifier Cadv. Finally, we compute the perturbation induced by the worst-case adversary.
We show that there exists a trade-off between fooling the perturbation classifier Cadv (to allow the alternate M_{p,εp} model to make the final prediction for an ℓq attack, ∀p, q ∈ {1, ∞} with p ≠ q) and fooling the alternate M_{p,εp} model itself. Here, by "alternate" we mean that for an ℓq attack, the prediction is made by the M_{p,εp} model, where p, q ∈ {1, ∞} and p ≠ q. We provide an illustration of the trade-off in Figure 1b, and present the formal proof in Appendix C." }, { "heading": "5 TRAINING AND INFERENCE", "text": "Having motivated PROTECTOR through a toy task in Section 4, we now scale the approach to deep neural networks for common image classification benchmarks. Specifically, following prior work on defending against multiple perturbation types, we evaluate on the MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky, 2012) datasets. Below, we discuss the training details, a strong adaptive white-box attack against PROTECTOR, and our inference procedure against such attacks." }, { "heading": "5.1 TRAINING", "text": "To train our perturbation classifier Cadv, we create a dataset that includes adversarial examples of different perturbation types. Specifically, we perform ℓ1, ℓ2, ℓ∞ PGD attacks (Madry et al., 2018) against each of the two individual Mp models used in PROTECTOR. Thus the size of our dataset is 6 times that of the original MNIST and CIFAR10 datasets, respectively. For the MNIST dataset, we use the M2, M∞ models in PROTECTOR, and we use the M1, M∞ models for CIFAR10. The choice is made based on the robustness of the {M2, M1} models against the {ℓ1, ℓ2} attacks respectively, as will be depicted in Table 2. As discussed in Section 3, to assign the ground-truth label for training the perturbation classifier Cadv, we find that it is sufficient to assign the same label to ℓ1 and ℓ2 attacks. In other words, Cadv performs a binary classification between ℓ1/ℓ2 attacks and ℓ∞ attacks.\nIn contrast with prior defenses against multiple perturbation types (Tramèr & Boneh, 2019; Maini et al., 2020), which require adversarial training, we find that it is sufficient to train our PROTECTOR pipeline on a static dataset (constructed as mentioned above) to achieve high robustness. Therefore, the training of our perturbation classifier is fast and stable. Specifically, using a single P100 GPU, our perturbation classifier can be trained within 5 minutes on MNIST, and within an hour on CIFAR-10. On the other hand, training state-of-the-art models robust to a single perturbation type requires up to 2 days on the same amount of GPU power, and existing defenses against multiple perturbation types take thrice as long as the training time for a model robust to a single perturbation type.\nA key advantage of PROTECTOR's design is that it can build upon existing defenses against individual perturbation types. Specifically, we leverage the adversarially trained models developed in prior work (Zhang et al., 2019; Carmon et al., 2019) as Mp models in our pipeline, and the CNN architecture of Cadv is also similar to a single Mp model. More details are deferred to Appendix D." }, { "heading": "5.2 ADAPTIVE ATTACKS AGAINST THE PROTECTOR PIPELINE", "text": "To generate adversarial examples against PROTECTOR, the most straightforward approach is to generate the adversarial perturbation to optimize Equation 3 using existing attack algorithms.
Since the final prediction of the pipeline only depends on a single Mp model, the pipeline does not allow gradient flow across the two levels, and this makes it difficult for gradient-based adversaries to attack PROTECTOR. Therefore, besides this standard adaptive attack, in our evaluation we also consider a stronger adaptive adversary, which utilizes a combination of the predictions from the individual second-level Mp models, rather than only utilizing the prediction from the single Mp model with p = argmax Cadv(x). Specifically, we modify f_θ(x) in Equation 3 as follows:\nc = softmax(Cadv(x)); f_θ(x) = Σ_{p∈S} c_p · Mp(x)   (4)\nwhere c_p denotes the probability of the input x being classified as the perturbation type p by Cadv. We also experiment with other strategies of aggregating the predictions of different components, e.g., tuning hyper-parameters to balance between attacking Cadv and each Mp model, but these alternative methods do not perform better. Note that Equation 4 is only used for the purpose of generating adversarial examples and performing gradient-based attack optimization. For consistency throughout the paper, we still use Equation 1 to compute the model prediction at inference (the final forward propagation). We do not see any significant performance advantage for either choice at inference time, and briefly report a comparison on two attacks in Appendix H.4. A sketch of this attack surrogate, together with the test-time randomization of Section 5.3, is given below." },
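Both pieces below are our own illustration; the radius ε₂ used for the noise is dataset-dependent (ε₂ = 2 on MNIST and 0.5 on CIFAR-10, following the radii in Section 6.1).

```python
import torch
import torch.nn.functional as F

def adaptive_logits(x, c_adv, predictors):
    """Equation (4): a differentiable surrogate used only when *crafting*
    adaptive attacks; inference still uses the hard routing of Eq. (1)."""
    c = F.softmax(c_adv(x), dim=1)                         # (B, |S|)
    outs = torch.stack([m(x) for m in predictors], dim=1)  # (B, |S|, K)
    return (c.unsqueeze(-1) * outs).sum(dim=1)             # (B, K)

def add_test_time_noise(x, eps2):
    # Section 5.3: z ~ N(0, I), rescaled to the l2 sphere of radius eps2
    z = torch.randn_like(x)
    norms = z.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    return x + eps2 * z / norms
```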
In our implementation, we sample random noise z ∼ N (0, I), and add ẑ = 2 · z/|z|2 to the model input." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we present our experiments on MNIST and CIFAR-10 datasets. We will discuss the results for both the perturbation classifier Cadv alone, and the entire PROTECTOR pipeline." }, { "heading": "6.1 EXPERIMENTAL SETUP", "text": "Baselines. We compare PROTECTOR with the state-of-the-art defenses against multiple perturbation types, which consider the union of `1, `2, `∞ adversaries (Tramèr & Boneh, 2019; Maini et al., 2020). For Tramèr & Boneh (2019), we compare two variants of adversarial training: (1) the MAX approach, where for each image, among different perturbation types, the adversarial sample that leads to the maximum increase of the model loss is augmented into the training set; (2) the AVG approach, where adversarial examples for all perturbation types are included for training. We also evaluate the MSD algorithm proposed by Maini et al. (2020), which modifies the standard PGD attack to incorporate the union of multiple perturbation types within the steepest decent itself. In addition, we also evaluate M1,M2,M∞ models trained with `1, `2, `∞ perturbations separately, as described in Appendix D.\nAttack evaluation. We evaluate our methods with the strongest attacks in the adversarial learning literature, and with adaptive attacks specifically designed for PROTECTOR (Section 5.2). First, we utilize a comprehensive suite of both gradient-based and gradient-free attacks from the Foolbox library (Rauber et al., 2017). Further, we also evaluate our method against the AutoAttack library from Croce & Hein (2020c), which achieves the state-of-art adversarial error rates against multiple recently published models. In line with prior work (Tramèr & Boneh, 2019; Maini et al., 2020), the radius of the {`1, `2, `∞} perturbation regions is {10, 2, 0.3} for the MNIST dataset and {12, 0.5, 0.03} for the CIFAR10 dataset. We present the full details of attack algorithms in Appendix F.\nFollowing prior work (Tramèr & Boneh, 2019; Maini et al., 2020), for both MNIST and CIFAR-10, we evaluate the models on adversarial examples generated from the first 1000 images of the test set. Our main evaluation metric is the accuracy on all attacks, which means that for an input image, if any of the attack algorithm in our suite could successfully fool the model, then the input is a failure case.\n6.2 EMPIRICAL PERTURBATION OVERLAP AND CHOICE OF p\nWhile we justify the choice of perturbation sizes in our theoretical proofs in Appendix B.4 and C.2, in this section we demonstrate the empirical agreement of the choices of perturbation sizes we make\nfor our results on MNIST and CIFAR10 datasets. To measure how often adversarial perturbations of different attacks overlap, we empirically quantify the overlapping regions by attacking a benign model with PGD attacks. In Table 1 we report the range of the norm of perturbations in the alternate perturbation region for any given attack type. The observed overlap is exactly 0% in all cases and the observation is consistent across MNIST and CIFAR10 datasets. A similar analysis on attacking PROTECTOR can be found in Appendix G.\n6.3 PERTURBATION TYPE CLASSIFICATION RESULTS OF Cadv\nTo examine the performance of the perturbation type classification, we evaluate Cadv on a dataset of adversarial examples, which are generated against the six models we use as the baseline defenses in our experiments. 
Note that Cadv is only trained on adversarial examples against the two Mp models that are part of PROTECTOR. We observe that Cadv transfers well across the board. First, Cadv generalizes to adversarial examples against new models; i.e., it preserves a high accuracy, even if the adversarial examples are generated against models that are unseen for Cadv during training. Further, Cadv also generalizes to new attack algorithms. As discussed in Section 5.1, we only include PGD adversarial examples in our training set for Cadv . However, on adversarial examples generated by the AutoAttack library, the classification accuracy of Cadv still holds up. In particular, the accuracy is > 95% across all the individual test sets created. These results suggest two important findings that validate our results in Theorem 1. That is, independent of (a) the model to be attacked; and (b) the algorithm for generating the optimal adversarial perturbation, the optimal adversarial images for a given `p region follow similar distributions. We present the full results in Appendix H.1." }, { "heading": "6.4 RESULTS OF THE PROTECTOR PIPELINE", "text": "Overall results. In Table 2, we summarize the worst-case performance against all attacks within a given perturbation type for MNIST and CIFAR-10 datasets. In particular, ‘Ours’ denotes the robustness of PROTECTOR against the adaptive attacks described in Section 5.2, and ‘Ours*’ denotes the robustness of PROTECTOR against standard attacks based on Equation 1. The adaptive strategy\neffectively reduces the overall accuracy of PROTECTOR by 2-5%, showing that incorporating the gradient and prediction information of all second-level predictors results in a stronger attack.\nDespite that we evaluate PROTECTOR against a stronger adaptive adversary, in terms of the all attacks accuracy, PROTECTOR still outperforms all baselines by 6.9% on MNIST, and 8.9% on CIFAR-10. Compared to the previous state-of-the-art defense against multiple perturbation types (MSD), the accuracy gain on `∞ attacks is especially notable, i.e., greater than 15%. In particular, if we compare the performance gain on each individual attack algorithm, as shown in Appendix H.2 and H.3 for MNIST and CIFAR-10 respectively, the improvement is also significant, with an average accuracy increase of 15.5% on MNIST, and 14.2% on CIFAR-10. These results demonstrate that PROTECTOR considerably mitigates the trade-off in accuracy against individual attack types.\nPROTECTOR retains a high accuracy on benign images, as opposed to past defenses that have to sacrifice the benign accuracy for the robustness on multiple perturbation types. In particular, the clean accuracy of PROTECTOR is over 6% higher than such baselines on CIFAR-10, and the accuracy is similar to that of Mp models trained for a single perturbation type.\nThe effect of noise. As discussed in Section 5.3, though adding random noise is not required to defend against standard attacks, it is helpful in defending against the stronger adaptive adversary against our pipeline. Specifically, in Table 3, we present the results on adversarial examples generated by PGD-based algorithms, which are amongst the strongest gradient-based attacks in the literature. We observe a consistent improvement among all attacks, increasing the accuracy by up to 10%.\nDifferent number of second-level Mp predictors. 
We also evaluate our PROTECTOR approach with three second-level predictors, i.e., M1, M2 and M∞, and we present the results in Table 3, this alternative design considerably reduces the overall accuracy of the pipeline model. We hypothesize that this happens because the M1 model is already reasonably robust against the `2 attacks, as shown in Table 2b. However, having both M1 and M2 models allows adaptive adversaries to find larger regions for fooling both Cadv andMp, thus hurts the overall performance against adaptive adversaries." }, { "heading": "7 CONCLUSION", "text": "In this work, we propose PROTECTOR, which performs perturbation type categorization towards achieving robustness against the union of multiple perturbation types. Based on a simplified problem setup, theoretically, we demonstrate that adversarial inputs of different attack types naturally have different distributions and can be separated. We further elaborate the existence of a natural tension for any adversary trying to fool our model – between fooling the attack classifier and the specialized robust predictors. Our empirical results on MNIST and CIFAR-10 datasets complement our theoretical analysis. In particular, by posing another adversarial trade-off through the effect of random noise, our PROTECTOR pipeline outperforms existing defenses against multiple `p attacks by over 5%.\nOur work serves as a stepping stone towards the goal of universal adversarial robustness, by dissecting various adversarial objectives into individually solvable pieces and combing them via PROTECTOR. Our study opens up various exciting future directions, including the new problem of perturbation categorization, extending our approach to defend attacks beyond `p adversarial examples, and defining sub-classes of perturbation types to further improve the overall adversarial robustness." }, { "heading": "A PROBLEM SETTING: THEORETICAL ANALYSIS", "text": "In this section, we formally define the problem setting and motivate the distinctions made with respect to the problem studied by Ilyas et al. (2019). The classification problem consists of two tasks: (1) Predicting the correct class label of an adversarially perturbed (or benign) image using adversarially robust classifier Mp; and (2) Predicting the type of adversarial perturbation that the input image was subjected to, using attack classifier Cadv .\nSetup We consider the data to consist of inputs to be sampled from two multi-variate Gaussian distributions such that the input-label pairs (x,y) can be described as:\ny u.a.r∼ {−1,+1},\nx0∼N (yα, σ2), x1, . . . , xd i.i.d∼ N (yη, σ2)\n(5)\nwhere the input x ∼ N (yµ,Σ) ∈ R(d+1); η = α/ √ d for some positive constant α; µ = [α, η, . . . , η] ∈ R+(d+1) and Σ = σ2I ∈ R+(d+1)×(d+1). We can assume without loss of generality, that the mean for the two distributions has the same absolute value, since for any two distributions with mean µ1,µ2, we can translate the origin to µ1+µ22 . This setting demonstrates the distinction between an input feature x0 that is strongly correlated with the input label and d weakly correlated features that are normally distributed (independently) with mean yη and variance σ2 each. We adapt this setting from Ilyas et al. (2019) who used a stochastic feature x0 = y with probability p, as opposed to a normally distributed input feature as in our case. 
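For concreteness, the distribution in Equation (5) can be sampled as follows (a minimal NumPy sketch; the function name and default values are illustrative, chosen to be consistent with the assumptions d >= 100 and alpha/sigma >= 10 used later):

import numpy as np

def sample_synthetic(n, d, alpha=10.0, sigma=1.0, rng=None):
    # y ~ Uniform{-1, +1}; x0 ~ N(y*alpha, sigma^2); x1..xd ~ N(y*eta, sigma^2).
    rng = np.random.default_rng() if rng is None else rng
    eta = alpha / np.sqrt(d)                  # eta = alpha / sqrt(d), Equation (5)
    y = rng.choice([-1.0, 1.0], size=n)       # labels drawn uniformly at random
    x0 = rng.normal(y * alpha, sigma)         # strongly correlated feature
    xr = rng.normal(y[:, None] * eta, sigma, size=(n, d))  # d weak features
    return np.concatenate([x0[:, None], xr], axis=1), y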
All our findings hold in the other setting as well, however, the chosen setting better represents true data distribution, with some features that are strongly correlated to the input label, while others that have only a weak correlation." }, { "heading": "B SEPARABILITY OF PERTURBATION TYPES (THEOREM 1)", "text": "In this section, our goal is to evaluate whether the optimal perturbation confined within different `p balls have different distributions and whether they are separable. We do so by developing an error bound on the maximum error in classification of the perturbation types. The goal of the adversary is to fool a standard (non-robust) classifier M . Cadv aims to predict the perturbation type based on only viewing the adversarial image, and not the delta perturbation.\nFirst, in Appendix B.1 we define a binary Gaussian classifier that is trained on the given task. Given the weights of the binary classifier, we then identify the optimal adversarial perturbation for each of the `1, `2, `∞ attack types in Appendix B.2. In Appendix B.3 we define the difference between the adversarial input distribution for different `p balls. Finally, we calculate the error in classification of these adversarial input types in Appendix B.4 to conclude the proof of Theorem 1.\nB.1 BINARY GAUSSIAN CLASSIFIER\nWe assume for the purposes of this work that we have enough input data to be able to empirically estimate the parameters µ, σ of the input distribution via sustained sampling. The multivariate Gaussian representing the input data is given by:\np(x|y = yi) = 1√\n(2π)d|Σ| exp\n( −1\n2 (x− yi.µ)TΣ−1(x− yi.µ)\n) , ∀yi ∈ {−1, 1} (6)\nWe want to find p(y = yi|x) ∀yi ∈ {−1,+1}. From Bayesian Decision Theory, the optimal decision rule for separating the two distributions is given by:\np(y = 1)p(x|y = 1) y=1 > p(y = −1)p(x|y = −1)\np(y = 1)p(x|y = 1) y=−1 < p(y = −1)p(x|y = −1)\n(7)\nTherefore, for two Gaussian Distributions N (µ1, Σ1), N (µ2, Σ2), we have:\n0 y=1 < x>Ax− 2b>x+ c\nA = Σ−11 −Σ −1 2\nb = Σ−11 µ1 −Σ −1 2 µ2\nc = µ>1 Σ −1 1 µ1 − µ>2 Σ −1 2 µ2 + log ‖Σ1‖ ‖Σ2‖ − 2 log p(y = 1) p(y = −1)\n(8)\nSubstituting (6) and (7) in (8), we find that the optimal Bayesian decision rule for our problem is given by:\nx>µ y=1 > 0 (9)\nwhich means that the label for the input can be predicted with the information of the sign of x>µ alone. We can define the parameters W ∈ Rd+1 of the optimal binary Gaussian classifier MW , such that ‖W‖2 = 1 as:\nW0 = α√ 2 , Wi = α√ 2d ∀i ∈ {1, . . . , d}\nMW (x) = x>W\n(10)\nB.2 OPTIMAL ADVERSARIAL PERTURBATION AGAINST MW\nNow, we calculate the optimal perturbation δ that is added to an input by an adversary in order to fool our model. For the purpose of this analysis, we only aim to fool a model trained on the standard classification metric as discussed in Section 4 (and not an adversarially robust model). The parameters of our model are defined in (10).\nThe objective of any adversary δ ∈ ∆ is to maximize the loss of the label classifier MW . We assume that the classification loss is given by −y ×MW (x+ δ). The object of the adversary is to find δ∗ such that:\n`(x+ δ, y;MW ) = −y ×MW (x+ δ) = −yx>W δ∗ = arg max\nδ∈∆ `(x+ δ, y;MW )\n= arg max δ∈∆ −y(x+ δ)>W = arg max δ∈∆ −yδ>W\n(11)\nWe will now calculate the optimal perturbation in the `p balls ∀p ∈ {1, 2,∞}. For the following analyses, we restrict the perturbation region ∆ to the corresponding `p ball of radius { 1, 2, ∞} respectively. We also note that the optimal perturbation exists at the boundary of the respective `p balls. 
Therefore, the constraint can be re-written as :\nδ∗ = arg max ‖δ‖p= p −yδ>W (12)\nWe use the following properties in the individual treatment of `p balls:\n‖δ‖p = (∑ i |δi|p ) 1 p\n∂j‖δ‖p = 1\np (∑ i |δi|p ) 1 p−1 · p|δj |p−1 sgn(δj) = ( |δj | ‖δ‖p )p−1 sgn(δj)\n(13)\np = 2 Making use of langrange multipliers to solve (12), we have:\n∇δ(−δ>Σ−1µ) = λ∇δ(‖δ‖2p − 2p)\n−W = λ ′ ‖δ‖p∇δ(‖δ‖p)\n(14)\nCombining the results from (13) and replacing δ with δ2 we obtain :\n−W = λ ′ ‖δ2‖2 ( |δ2| ‖δ2‖2 ) sgn(δ2)\nδ2 = − 2 ( W\n‖W‖2\n) = − 2W\n(15)\np =∞ Recall that the optimal perturbation is given by :\nδ∗ = arg max ‖δ‖∞= ∞ −yδ>W\n= arg max ‖δ‖∞= ∞ −y d∑ i=0 δiWi\n(16)\nSince ‖δ‖∞ = ∞, we know that maxi |δi| = ∞. Therefore (16) is maximized when each δi = −y ∞ sgn Wi ∀i ∈ {0, . . . , d}. Further, since the weight matrix only contains non-negative elements (α is a positive constant), we can conclude that the optimal perturbation is given by:\nδ∞ = −y ∞1 (17)\np = 1 We attempt an analytical solution for the optimal perturbation δ1. Recall that the optimal perturbation is given by :\nδ∗ = arg max ‖δ‖1= 1 −y d∑ i=1 δiWi\n= arg max ‖δ‖1= 1 −yδ0W0 − y d∑ i=1 δiWi\n= arg max ‖δ‖1= 1 −yδ0 α√ 2 − y d∑ i=1 δi α√ 2d\n(18)\nSince ‖δ‖1 = 1, (18) is maximized when:\nδ0 = −y 1 sgn(α) = −y 1, δi = 0 ∀i ∈ {1 . . . d} (19)\nCombining the results From the preceding discussion, it may be noted that the new distribution of inputs within a given label changes by a different amount δ depending on the perturbation type. Moreover, if the mean and variance of the distribution of a given label are known (which implies that the corresponding true data label is also known), the optimal perturbation is independent of the input itself, and only dependent on the respective class statistics (Note that the input is still important in order to understand the true class).\nB.3 PERTURBATION CLASSIFICATION BY Cadv\nIn this section, we aim to verify if it is possible to accurately separate the optimal adversarial inputs crafted within different `p balls. For the purposes of this discussion, we only consider the problem of classifying perturbation types into `1 and `∞, but the same analysis may also be extended more generally to any number of perturbation types.\nWe will consider the problem of classifying the correct attack label for inputs from true class y = 1 for this discussion. Note that the original distribution:\nXtrue ∼ N (y.µ, Σ)\nSince the perturbation value δp is fixed for all inputs corresponding to a particular label, the new distribution of perturbed inputs X1 and X∞ in case of `1 and `∞ attacks respectively (for y = 1) is given by:\nX1 ∼ N (µ+ δ1, Σ) X∞ ∼ N (µ+ δ∞, Σ)\n(20)\nWe now try to evaluate the conditions under which we can separate the two Gaussian distributions with an acceptable worst-case error.\nB.4 CALCULATING A BOUND ON THE ERROR\nClassification Error A classification error occurs if a data vector x belongs to one class but falls in the decision region of the other class. That is in (7) the decision rule indicates the incorrect class. (This can be understood through the existence of outliers)\nPe =\n∫ P (error|x)p(x)dx\n= ∫ min [p(y = `1|x)p(x), p(y = `∞|x)p(x)] dx\n(21)\nPerturbation Size We set the radius of the `∞ ball, ∞ = η and the radius of the `1 ball, 1 = α. We further extend the discussion about suitable perturbation sizes in Appendix C.2. 
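Given these radii, the closed-form perturbations in Equations (15), (17) and (19) can be computed directly. The sketch below is our own code, written for a general weight vector w with the label sign y included throughout; the generalization of Equation (19) to the largest-magnitude coordinate is ours (in the paper's setting that coordinate is x0):

import numpy as np

def optimal_linear_perturbations(w, y, eps1, eps2, eps_inf):
    # l_2: move opposite to w (Equation (15)).
    delta2 = -y * eps2 * w / np.linalg.norm(w, 2)
    # l_inf: flip every coordinate by eps_inf (Equation (17); sign(w_i) = +1 here).
    delta_inf = -y * eps_inf * np.sign(w)
    # l_1: spend the whole budget on the largest-weight coordinate (Equation (19)).
    j = int(np.argmax(np.abs(w)))
    delta1 = np.zeros_like(w)
    delta1[j] = -y * eps1 * np.sign(w[j])
    return delta1, delta2, delta_inf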
These values ensure that the `∞ adversary can make all the weakly correlated labels meaningless by changing the expected value of the adversarial input to less than 0 (E[xi + δ∞(i)] ∀i > 0), while the `1 adversary can make the strongly correlated feature x0 meaningless by changing its expected value to less than 0 (E[x0 + δ1(0)]). However, neither of the two adversaries can flip all the features together. Translating the axes We can translate the axis of reference by ( −µ− ( δ1+δ∞\n2\n)) and define\nµadv = ( δ1−δ∞\n2\n) , such that :\nX1 ∼ N (µadv, Σ) X∞ ∼ N (−µadv, Σ)\n(22)\nWe can once again combine this with the simplified Bayesian model in (9) to obtain the classification rule given by:\nx>µadv p=1 > 0 (23)\nCombining the optimal perturbation definitions in (17) and (19) that µadv = ( δ1−δ∞\n2\n) =\n1 2 [− 1 + ∞, ∞, . . . , ∞]. We can further substitute 1 = α and ∞ = η = α√ d . Notice that µadv(i) > 0 ∀i > 0. Without loss of generality, to simplify further discussion we can flip the coordinates of x0, since all dimensions are independent of each other. Therefore, µadv =\nα 2 √ d\n[√ d− 1, 1, . . . , 1 ] . Consider a new variable xz such that:\nxz = x0 · (\n1− 1√ d\n) +\n1√ d d∑ i=1 xi = 2 α ( x>µadv ) (24)\nsince each xi∀i ≥ 0 is independently distributed, the new feature xz ∼ N (µz, σ2z), where\nµz = α\n( 1− 1√\nd\n) +\n1√ d d∑ i=1 α√ d\n= 2α− α√ d\nσ2z = σ 2\n( 1 + 1\nd − 2 1√ d + d∑ i=1 1 d\n)\n= σ2 ( 2 + 1\nd − 2 1√\nd\n) (25)\nTherefore, the problem simplifies to calculating the probability that the meta-variable xz > 0.\nFor ασ > 10 and d > 1, we have in the z-table, z > 10:\nPe ≤ 10−24 (26)\nwhich suggests that the distributions are significantly distinct and can be easily separated. This concludes the proof for Theorem 1.\nNote: We can extend the same analysis to other `p balls as well, but we consider the case of `1 and `∞ for simplicity." }, { "heading": "C ROBUSTNESS OF THE PROTECTOR PIPELINE (THEOREM 2)", "text": "In the previous section, we show that it is indeed possible to distinguish between the distribution of inputs of a given class that were subjected to `1 and `∞ perturbations over a standard classifier. Now, we aim to develop further understanding of the robustness of our two-stage pipeline in a dynamic attack setting with multiple labels to distinguish among. The first stage is a preliminary classifier Cadv that classifies the perturbation type and the second stage consists of multiple models Mp that were specifically trained to be robust to perturbations to the input within the corresponding `p norm.\nFirst, in Appendix C.1, we calculate the optimal weights for a binary Gaussian classifier Mp, trained on dataset D to be robust to adversaries within the `p ball ∀p ∈ {1,∞}. Based on the weights of the individual model, we fix the perturbation size p to be only as large, as is required to fool the alternate model with high probability. Here, by ‘alternate’ we mean that for an `q attack, the prediction should be made by the Mp, p model,where p, q ∈ {1,∞}; p 6= q. In Appendix C.3 we calculate the robustness of individual Mp models to `p adversaries, given the perturbation size p as defined in Appendix C.2. In Appendix C.4, we analyze the modified distributions of the perturbed inputs after different `p attacks. Based on this analysis, we construct a simple decision rule for the perturbation classifier Cadv . Finally, in Appendix C.5 we determine the perturbation induced by the worst-case adversary that has complete knowledge of both Cadv and Mp, p∀p ∈ {1,∞}. 
We show how there exists a trade-off between fooling the perturbation classifier (to allow the alternate Mp, p model to make the final prediction), and fooling the alternate Mp, p model itself.\nPerturbation Size We set the radius of the `∞ ball, ∞ = η + ζ∞ and the radius of the `1 ball, 1 = α+ ζ1, where ζp are some small positive constants that we calculate in Appendix C.2. These values ensure that the `∞ adversary can make all the weakly correlated labels meaningless by changing the expected value of the adversarial input to less than 0 (E[xi + δ∞(i)] ∀i > 0), while the `1 adversary can make the strongly correlated feature x0 meaningless by changing its expected value to less than 0 (E[x0 + δ1(0)]). However, neither of the two adversaries can flip all the features together. The exact values of ζp determine the exact success probability of the attacks. We defer this calculation to later when we have calculated the weights of the models Mp. For the following discussion, it may be assumed that ζp → 0 ∀p ∈ {1,∞}.\nC.1 BINARY GAUSSIAN CLASSIFIER Mp\nExtending the discussion in Appendix B.1, we now examine the learned weights of a binary Gaussian classifier Mp that is trained to be robust against perturbations within the corresponding `p ball of radius p. The optimization equation for the classifier can be formulated as follows:\nmin W\nE [ −yx>W ] + 1\n2 λ||W||22 (27)\nwhere λ is tuned in order to make the `2 norm of the optimal weight distribution, ||W∗||2,= 1. Following the symmetry argument in Lemma D.1 Tsipras et al. (2018) we extend for the binary Gaussian classifier that :\nW∗i = W ∗ j = WM ∀i, j ∈ {1, . . . , d} (28)\nWe deal with the cases pertaining to p ∈ {∞, 1} in this section. For both the cases, we consider existential solutions for the classifier Mp to simplify the discussion. This gives us lower bounds on the performance of the optimal robust classifier. The robust objective under adversarial training can be defined as:\nmin W max ‖δ‖p≤ p\nE [ W0 · (x0 + δ0) + WM ·\nd∑ i=1 (xi + δi)\n] + 1\n2 λ‖W‖22\nmin W\n{ −1 (\nW0α+ d×WM α√ d\n) + 1\n2 λ‖W‖22 + max‖δ‖p≤ p E\n[ −y ( W0δ0 + WM\nd∑ i=1 δi )]} (29)\nFurther, since the λ constraint only ensures that ||W∗||2 = 1, we can simplify the optimization equation by substituting W0 = √ 1− d ·WM2 as follows,\nmin WM\n{ −1 ( α √ 1− d ·WM2 + d×WM\nα√ d ) + max ‖δ‖p≤ p E [ −y ( δ0 √ 1− d ·WM2 + WM d∑ i=1 δi )]} (30)\np =∞ As discussed in (17) the optimal perturbation δ∞ is given by −y ∞1. The optimization equation is simplified to:\nmin WM\n{ ( ∞ − α) √ 1− d ·WM2 + d×WM ( ∞ −\nα√ d\n)} (31)\nRecall that ∞ = α√d + ζ∞. To simplify the following discussion we use the weights of a classifier trained to be robust against perturbations within the `∞ ball of radius ∞ = α√d . The optimal solution is then given by:\nlim ζ∞→0 WM = 0 (32)\nTherefore, the classifier weights are given by W = [W0,W1, . . . ,Wd] = [1, 0, . . . , 0]. We also show later in Appendix C.3 that the model achieves greater than 99% accuracy against `∞ adversaries for the chosen values of ζ∞.\np = 1 We consider an analytical solution to yield optimal weights for this case. Recall from (19) that the optimal perturbation δ1 depends on the weight distribution of the classifier. Therefore, if W0 > WM the optimization equation can be simplified to\nmin W\n{ W0( 1 − α)− d×WM\nα√ d + 1 2 λ‖W‖22\n} , (33)\nand if WM > W0\nmin W\n{ −W0α−WM (√ dα− 1 ) + 1\n2 λ‖W‖22\n} (34)\nRecall that 1 = α + ζ1. 
Once again to simplify the discussion that follows we will lower bound the robust accuracy of the classifier M1 by considering the optimal solution when zeta1 = 0. The optimal solution is then given by:\nlim ζ1→0 WM = 1 (35)\nFor the robust classifier M1, the weights W = [W0,W1, . . . ,Wd] = [0, 1√d , 1√ d , . . . , 1√ d ]. While this may not be the optimal solution for all values of ζ1, we are only interested in a lower bound on the final accuracy and the classifier described by weights W simplifies the discussion hereon. We also show later in Appendix C.3 that the model achieves greater than 99% accuracy against `1 adversaries for the chosen values of ζ1.\nC.2 PERTURBATION SIZES FOR FOOLING Mp MODELS\nNow that we exactly know the weights of the learned robust classifiers M1 and M∞, we can move towards calculating values ζ1 and ζ∞ for the exact radius of the perturbation regions for the `1 and `∞ metrics. We set the radii of these regions in such a way that an `1 adversary can fool the model M∞ with probability ∼ 98% (corresponding to z = 2 in the z-table for normal distributions), and similarly, the success of `∞ attacks against the M1 model is ∼ 98%. Let Pp1,p2 represent the probability that model Mp1 correctly classifies an adversarial input in the `p2 region. For p1 =∞ and p2 = 1,\nP∞,1 = Px∼N (yµ,Σ)[y ·M∞(x+ δ1) > 0] = Px∼N (yµ,Σ)[y · (x+ δ1)>W > 0] ≥ Px∼N (µ,Σ)[x0 > 1]\nz = 1 − α σ = α+ ζ1 − α σ = ζ1 σ = 2\nζ1 = 2σ\n1 = α+ 2σ\n(36)\nTo simplify the discussion for the M1 model, we define a meta-feature xM as:\nxM = 1√ d d∑ i=1 xi,\nwhich is distributed as :\nxM ∼ N (yη √ d, σ2) ∼ N (yα, σ2)\nFor p1 = 1 and p2 =∞, P1,∞ = Px∼N (yµ,Σ)[y ·M1(x+ δ∞) > 0]\n= Px∼N (yµ,Σ)[y · (x+ δ∞)>W > 0]\n= Px∼N (yµ,Σ)[y · 1√ d d∑ i=1 (xi + δ∞(i)) > 0] = Px∼N (yµ,Σ)[y · (xM − √ d · ∞) > 0]\n≥ Px∼N (µ,Σ) [ xM > √ d · ∞ ] z = √ d · ∞ − α\nσ = α+ √ d · ζ∞ − α σ = √ d · ζ∞ σ = 2\nζ∞ = 2σ√ d\n∞ = α+ 2σ√\nd\n(37)\nC.3 ROBUSTNESS OF INDIVIDUAL Mp MODELS\nAdditional assumptions We add the following assumptions: (1) the dimensionality parameter d of input data is larger than 100; and (2) the ratio of the mean and variance for feature x0 is greater than 10.\nd ≥ 100, α σ ≥ 10 (38)\nWe define Pp as the probability that for any given input x ∼ N (yµ,Σ), the classifier Mp outputs the correct label y for the input x+ δp.\np =∞ P∞,∞ = Px∼N (yµ,Σ)[y ·M∞(x+ δ∞) > 0]\n= Px∼N (yµ,Σ)[y · (x+ δ∞)>W > 0] = Px∼N (yµ,Σ)[y · (x0 + δ∞(0)) > 0] ≥ Px∼N (µ,Σ)[x0 > ∞]\nz = ∞ − α σ = α σ ( 1√ d − 1 ) + 2√ d\n(39)\nusing the assumptions in (38), P∞,∞ ≥ 0.999 (40)\np = 1 P1,1 = Px∼N (yµ,Σ)[y ·M1(x+ δ1) > 0]\n= Px∼N (yµ,Σ)[y · (x+ δ1)>W > 0]\n= Px∼N (yµ,Σ)[y · 1√ d d∑ i=1 (xi + δ1(i)) > 0]\n= Px∼N (yµ,Σ)[y · (xM + δM ) > 0] ≥ Px∼N (µ,Σ) [ xM >\n1√ d ] z =\n1√ d − α σ = α σ ( 1√ d − 1 ) + 2√ d\n(41)\nusing the assumptions in (38), P1,1 ≥ 0.999 (42)\nC.4 DECISION RULE FOR Cadv\nWe aim to provide a lower bound on the worst-case accuracy of the entire pipeline, through the existence of a simple decision tree Cadv. For given perturbation budgets 1 and ∞, we aim to understand the range of values that can be taken by the adversarial input. Consider the scenarios described in Table 4 below:\nTable 4: The table shows the range of the values that the mean can take depending on the decision taken by the adversary. 
µadv0 and µ adv\nM represent the new mean of the distribution of features x0 and\nxM after the adversarial perturbation.\nAttack Type\nµadv0 µ adv M\ny = 1 y = -1 y = 1 y = -1\nNone α −α η √ d −η √ d `∞ {α− ∞, α+ ∞} {−α− ∞,−α+ ∞} {η √ d+ ∞d, η √ d− ∞d} {−η √ d+ ∞d,−η √ d− ∞d} `1 {α− 1, α+ 1} {−α− 1,−α+ 1} {η √ d+ 1, η √ d− 1} {−η √ d+ 1,−η √ d− 1}\nNote that any adversary that moves the perturbation away from the y-axis is uninteresting for our comparison, since irrespective of a correct perturbation type prediction by Cadv, either of the two second level models naturally obtain a high accuracy on such inputs. Hence, we define the following decision rule with all the remaining cases mapped to `1 perturbation type.\nCadv(x) = { 1, if ||x0| − α| < ∞ + α2 0, otherwise\n(43)\nwhere the output 1 corresponds to the classifier predicting the presence of `∞ perturbation in the input, while an output of 0 suggests that the classifier predicts the input to contain perturbations of the `1 type.\nIf we consider a black-box setting where the adversary has no knowledge of the classifier Cadv , and can only attack Mp it is easy to see that the proposed pipeline obtains a high adversarial accuracy against the union of `1 and `∞ perturbations:\nNote: (1) There exists a single model that can also achieve robustness against the union of `1 and `∞ perturbations, however, learning this model may be more challenging in real data settings. (2) The classifier need not be perfect.\nC.5 TRADE-OFF BETWEEN ATTACKING Mp AND Cadv\nTo obtain true robustness it is important that the entire pipeline is robust against adversarial attacks. More specifically, in this section we demonstrate the natural tension that exists between fooling the top level attack classifier (by making an adversarial attack less representative of its natural distribution) and fooling the bottom level adversarially robust models (requiring stronger attacks leading to a return to the attack’s natural distribution).\nThe accuracy of the pipelined model f against any input-label pair (x, y) sampled through some distribution N (yµadv,Σ) (where µadv incorporates the change in the input distribution owing to the adversarial perturbation) is given by:\nP [f(x) = y] = Px∼N (yµadv,Σ) [Cadv(x)]Px∼N (yµadv,Σ) [y ·M∞(x) > 0|Cadv(x)] + (1− Px∼N (yµadv,Σ) [Cadv(x)])Px∼N (yµadv,Σ) [y ·M1(x) > 0|¬Cadv(x)]\n= Px∼N (µadv,Σ) [Cadv(x)]Px∼N (µadv,Σ) [M∞(x) > 0|Cadv(x)] + (1− Px∼N (µadv,Σ) [Cadv(x)])Px∼N (µadv,Σ) [M1(x) > 0|¬Cadv(x)]\n(44)\n`∞ adversary: To simplify the analysis, we consider loose lower bounds on the accuracy of the model f against the `∞ adversary. Recall that the decision of the attack classifier is only dependent of the input x0. Irrespective of the input features xi∀i > 0, it is always beneficial for the adversary to perturb the input by µi = − ∞. However, the same does not apply for the input x0. Analyzing for the\nscenario when the true label y = 1, if the input x0 lies between α2 − ∞ of the mean α, irrespective of the perturbation, the output of the attack classifier Cadv = 1. The M∞ model then always correctly classifies these inputs. The overall robustness of the pipeline requires analysis for the case when input lies outside α2 − ∞ of the mean as well. 
However, we consider that the adversary always succeeds in such a case in order to only obtain a loose lower bound on the robust accuracy of the pipeline model f against `∞ attacks.\nP [f(x) = y] = Px∼N (µadv,Σ) [Cadv(x)]Px∼N (µadv,Σ) [M∞(x) > 0|Cadv(x)] + (1− Px∼N (µadv,Σ) [Cadv(x)])Px∼N (µadv,Σ) [M1(x) > 0|¬Cadv(x)]\n≥ Px∼N (µadv,Σ) [Cadv(x)]Px∼N (µadv,Σ) [M∞(x) > 0|Cadv(x)] ≥ Px∼N (µ,Σ) [ |x0 − α| ≤ α\n2 − ∞ ] ≥ 2Px∼N (µ,Σ) [ x0 ≤ α− α\n2 + ∞ ] z =\n(α− α2 + ∞)− α σ = − α 2σ + 3σ 2σ √ d\n(45)\nusing the assumptions in (38),\nP [f(x) = y] ∼ 0.99 (46)\n`1 adversary: It may be noted that a trivial way for the `1 adversary to fool the attack classifier is to return a perturbation δ1 = 0. In such a scenario, the classifier predicts that the adversarial image was subjected to an `∞ attack. The label prediction is hence made by the M∞ model. But we know from (40) that the M∞ model predicts benign inputs correctly with a probability P∞,∞ > 0.99, hence defeating the adversarial objective of misclassification. To achieve misclassification over the entire pipeline the optimal perturbation decision for the `1 adversary when x0 ∈ [ −α− α2 − 1,−α+ α 2 + 1 ] the adversary can fool the pipeline by ensuring that the Cadv(x) = 1. However, in all the other cases irrespective of the perturbation, either Cadv = 0 or the input features x0 has the same sign as the label y. Since, P1,1 > 0.99 for the M1 model, for all the remaining inputs x0 the model correctly predicts the label with probability greater than 0.99 (approximate lower bound). We formulate this trade-off to elaborate upon the robustness of the proposed pipeline.\nP [f(x) = y] = Px∼N (µadv,Σ) [Cadv(x)]Px∼N (µadv,Σ) [M∞(x) > 0|Cadv(x)] + (1− Px∼N (µadv,Σ) [Cadv(x)])Px∼N (µadv,Σ) [M1(x) > 0|¬Cadv(x)]\n≥ Px∼N (µ,Σ) [ −α− α\n2 − 1 ≤ x0 ≤ −α+\nα 2 + 1 ] + 0.999(Px∼N (µ,Σ) [ x0 < −α− α\n2 − 1 or x0 > −α+\nα 2 + 1\n] )\n≥ 0.999(Px∼N (µ,Σ) [ x0 < −α− α\n2 − 1 or x0 > −α+\nα 2 + 1\n] )\n(47)\nusing the assumptions in (38),\nP [f(x) = y] ∼ 0.99 (48)\nThis concludes the proof for Theorem 2, showing that an adversary can hardly stage successful attacks on the entire pipeline and faces a natural tension between attacking the label predictor and the attack classifier. Finally, we emphasize that the shown accuracies are lower bounds on the actual robust accuracy, and the objective of this analysis is not to find the optimal solution to the problem of multiple perturbation adversarial training, but to expose the existence of the trade-off between attacking the two stages of the pipeline." }, { "heading": "D MODEL ARCHITECTURE", "text": "Second-level Mp models. A key advantage of our PROTECTOR design is that we can build upon existing defenses against individual perturbation type. Specifically, for MNIST, we use the same CNN architecture as Zhang et al. (2019) for our Mp models, and we train these models using their proposed TRADES loss. For CIFAR-10, we use the same training setup and model architecture as Carmon et al. (2019), which is based on a robust self-training algorithm that utilizes unlabeled data to improve the model robustness.\nPerturbation classifier Cadv . For both MNIST and CIFAR-10 datasets, the architecture of the perturbation classifier Cadv is similar to the individual Mp models. Specifically, for MNIST, we use the CNN architecture in Zhang et al. (2019) with four convolutional layers, followed by two fully-connected layers. 
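A minimal sketch of such a four-convolution, two-fully-connected network is given below (the channel widths, hidden size, and output dimension are our illustrative assumptions and may differ from the exact configuration of Zhang et al. (2019)):

import torch.nn as nn

class SmallCNN(nn.Module):
    # Four conv layers followed by two fully-connected layers; num_types is the
    # number of perturbation types C_adv must distinguish (e.g. l_1, l_2, l_inf).
    def __init__(self, num_types=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 28x28 MNIST inputs the feature map is 64 x 4 x 4 at this point.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 200), nn.ReLU(),
            nn.Linear(200, num_types),
        )

    def forward(self, x):
        return self.classifier(self.features(x))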
For CIFAR-10, Cadv is a WideResNet (Zagoruyko & Komodakis, 2016) model with depth 28 and widening factor of 10 (WRN-28-10)." }, { "heading": "E TRAINING DETAILS", "text": "E.1 SPECIALIZED ROBUST PREDICTORS Mp\nMNIST. We use the Adam optimizer (Kingma & Ba, 2015) to train our models along with a piece-wise linearly varying learning rate schedule (Smith, 2018) to train our models with maximum learning rate of 10−3. The base models M1,M2,M∞ are trained using the TRADES algorithm for 20 iterations, and step sizes α1 = 2.0, α2 = 0.3, and α∞ = 0.05 for the `1, `2, `∞ attack types within perturbation radii 1 = 10.0, 2 = 2.0, and ∞ = 0.3 respectively.2\nCIFAR10. The individual Mp models are trained to be robust against {`∞, `1, `2} perturbations of { ∞, 1, 2} = {0.003, 12.0, 0.05} respectively. For CIFAR10, the attack step sizes {α∞, α1, α2} = {0.005, 2.0, 0.1} respectively. The training of the individual Mp models is directly based on the work of Carmon et al. (2019).\nE.2 PERTURBATION CLASSIFIER Cadv\nMNIST. We use a learning rate of 0.01 and Adam optimizer for 10 epochs, with linear rate decay to 0.001 between the fourth epoch and the tenth epoch. The batch size is set to 100 for all experiments.\nCIFAR10. We use a learning rate of 0.01 and SGD optimizer for 5 epochs, with linear rate decay to 0.001 between the fourth epoch and the tenth epoch. The batch size is set to 100 for all experiments.\nCreating the Adversarial Perturbation Dataset. We create a static dataset of adversarially perturbed images and their corresponding attack label for training the perturbation classifier Cadv. For generating adversarial images, we perform weak adversarial attacks that are faster to compute. In particular, we perform 10 iterations of the PGD attack. For MNIST, the attack step sizes {α∞, α1, α2} = {0.05, 2.0, 0.3} respectively. For CIFAR10, the attack step sizes {α∞, α1, α2} = {0.005, 2.0, 0.1} respectively. Note that we perform the Sparse-`1 or the top-k PGD attack for the `1 perturbation ball, as introduced by Tramèr & Boneh (2019). We set the value of k to 10, that is we move by a step size α1k in each of the top 10 directions with respect to the magnitude of the gradient." }, { "heading": "F ATTACKS USED FOR EVALUATION", "text": "A description of all the attacks used for evaluation of the models is presented here. From the Foolbox library(Rauber et al., 2017), apart from `1, `2 and `∞ PGD adversaries, we also evaluate the following attacks for different perturbation types.\n(1) For `1 perturbations, we include the Salt & Pepper Attack (SAPA) (Rauber et al., 2017) and Pointwise Attack (PA) (Schott et al., 2018).\n2We use the Sparse `1 descent Tramèr & Boneh (2019) for the PGD attack in the `1 constraint.\n(2) For `2 perturbations, we include the Gaussian noise attack (Rauber et al., 2017), Boundary Attack (Brendel et al., 2018), DeepFool (Moosavi-Dezfooli et al., 2016), DDN attack (Rony et al., 2019), and C&W attack (Carlini & Wagner, 2017b).\n(3) For `∞ perturbations, we include FGSM attack (Goodfellow et al., 2015) and the Momentum Iterative Method (Dong et al., 2018).\nFrom the AutoAttack library from Croce & Hein (2020c), we make use of all the three variants of the Adaptive PGD attack (APGD-CE, APGD-DLR, APGD-T) (Croce & Hein, 2020c) along with the targeted and standard version of Fast Adaptive Boundary Attack (FAB, FAB-T) (Croce & Hein, 2020b) and the Square Attack (Andriushchenko et al., 2020). 
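For reference, a typical way to run this suite with the public autoattack package is sketched below (a usage sketch assuming the package's documented interface, where version='plus' selects the AA+ suite; the wrapper function and its defaults are ours):

import torch
from autoattack import AutoAttack  # https://github.com/fra31/auto-attack

def evaluate_autoattack(model, x_test, y_test, eps=0.3, norm='Linf'):
    # Runs the AA+ suite (APGD-CE/DLR/T, FAB, FAB-T, Square) and reports
    # accuracy on the resulting adversarial examples.
    adversary = AutoAttack(model, norm=norm, eps=eps, version='plus')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=100)
    with torch.no_grad():
        acc = (model(x_adv).argmax(1) == y_test).float().mean().item()
    return acc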
We utilize the AA+ version for strong attacks.\nAttack Hyperparameters For the attacks in the Foolbox and AutoAtack libraries we use the default parameter setting in the strongest available mode (such as AA+). For the custom PGD attacks, we evaluate the models with 10 restarts and 200 iterations of the PGD attack. The step size of the {`∞, `1, `2} PGD attacks are set as follows: For MNIST, the attack step sizes {α∞, α1, α2} = {0.01, 1.0, 0.1} respectively. For CIFAR10, the attack step sizes {α∞, α1, α2} = {0.003, 1.0, 0.02} respectively.\nFurther, similar to Tramèr & Boneh (2019); Maini et al. (2020) we evaluate our models on the first 1000 images of the test set of MNIST and CIFAR-10, since many of the attacks employed are extremely computationally expensive and slow to run. Specifically, on a single GPU, the entire evaluation for a single model against all the attacks discussed with multiple restarts will take nearly 1 month, and is not feasible." }, { "heading": "G EMPIRICAL PERTURBATION OVERLAP", "text": "Following Section 6.2, we also present results on the perturbation overlap when we attack PROTECTOR with PGD attacks. To contrast the results with that of attacking a vanilla model, we also present the table in the main paper for convenience. It is noteworthy that the presence of a perturbation classifier forces the adversaries to generate such attacks that increase the norm of the perturbations in alternate `q region. Secondly, we also observe that in the case of CIFAR10, the `2 PGD attack has a large overlap with the `1 norm of radius 10. However, recall that in case of `2 attacks for CIFAR10, both the base models M1 and M∞ were satisfactorily robust. Hence, the attacker has no incentive to reduce the perturbation radius for an `q norm since the perturbation classifier only performs a binary classification between `1 and `∞ attacks. The results can be observed in Tables 5 and 6." }, { "heading": "H BREAKDOWN OF COMPLETE EVALUATION", "text": "In this section, we present the results of the perturbation type classifier Cadv against transfer adversaries. We also present the breakdown results of the adversarial robustness of baseline approaches and our PROTECTOR pipeline against all the attacks that we tried, and also report the worst case performance against the union of all attacks.\nH.1 ROBUSTNESS OF Cadv\nThe results for the robustness of the perturbation classifierCadv in the presence of adaptive adversaries is presented in Table 7. Note thatCadv transfers well across the board, even if the adversarial examples are generated against new models that are unseen for Cadv during training, achieving extremely high test accuracy. Further, even if the adversarial attack was generated by a different algorithm such as from the AutoAttack library, the transfer success of Cadv still holds up. In particular, the obtained accuracy is > 95% across all the individual test sets created. The attack classification accuracy is in general highest against those generated by attacking M1 or M∞ for CIFAR10, and M2 or M∞ for MNIST. This is an expected consequence of the nature of generation of the static dataset for training the perturbation classifier Cadv as described in Section 5.1.\nH.2 MNIST\nIn Table 8, we provide a breakdown of the adversarial accuracy of all the baselines, individual Mp models and the PROTECTOR method, with both the adaptive and standard attack variants on the MNIST dataset. PROTECTOR outperforms prior baselines by 6.9% on the MNIST dataset. 
It is important to note that PROTECTOR shows significant improvements against most attacks in the suite. Compared to the previous state-of-the-art defense against multiple perturbation types (MSD), if we compare the performance gain on each individual attack algorithm, the improvement is also significant, with an average accuracy increase of 15.5% on MNIST dataset. These results demonstrate that PROTECTOR considerably mitigates the trade-off in accuracy against individual attack types.\nH.3 CIFAR-10\nIn Table 9, we provide a breakdown of the adversarial accuracy of all the baselines, individual Mp models and the PROTECTOR method, with both the adaptive and standard attack variants on the CIFAR10 dataset. PROTECTOR outperforms prior baselines by 8.9%. Once again, note that PROTECTOR shows significant improvements against most attacks in the suite. Compared to the previous state-of-the-art defense against multiple perturbation types (MSD), if we compare the performance gain on each individual attack algorithm, the improvement is significant, with an average accuracy increase of 14.2% on. These results demonstrate that PROTECTOR considerably mitigates the trade-off in accuracy against individual attack types.\nFurther, PROTECTOR also retains a higher accuracy on benign images, as opposed to past defenses that have to sacrifice the benign accuracy for the robustness on multiple perturbation types. In particular, the clean accuracy of PROTECTOR is over 6% higher than such existing defenses on CIFAR-10, and the accuracy is close to Mp models trained for a single perturbation type.\nH.4 AGGREGATING PREDICTIONS FROM DIFFERENT Mp AT INFERENCE\nIn all our experiments in this work the adversary constructs adversarial examples using the softmax based adaptive strategy for aggregating predictions from different Mp models, as described in Equation 4 for the column ‘Ours’ and using the ‘max’ strategy (Equation 1) for results described in the column ‘Ours*’\nHowever, for consistency of our defense strategy irrespective of the attacker’s strategy, the defender only utilizes predictions from the specialized model Mp corresponding to the most-likely attack (Equation 1) to provide the final prediction (only forward propagation) for generated adversarial examples. In our evaluation, we found a negligible impact of changing this aggregation to the ‘softmax’ strategy for aggregating the predictions. For example, we show representative results in case of the APGD (`∞, `2) attacks on the CIFAR10 dataset in Table 10." } ]
2020
null
SP:8051813e72f10269c587a17450be5f23973595de
[ "This paper proposed a new graph convolutional network. It considers not only the original graph structure information but also the latent correlations between features, resulting in a graph neural network as a bi-directional low-pass filter. The new filter is derived using the alternating direction method of multipliers (ADMM) algorithm. Experiments show the new model's denoising performance is better than previous models." ]
Graph convolutional networks have achieved great success on graph-structured data. Many graph convolutional networks can be regarded as low-pass filters for graph signals. In this paper, we propose a new model, BiGCN, which represents a graph neural network as a bi-directional low-pass filter. Specifically, we consider not only the original graph structure information but also the latent correlations between features, so BiGCN can filter signals along both the original graph and a latent feature-connection graph. Our model outperforms previous graph neural networks in the tasks of node classification and link prediction on most of the benchmark datasets, especially when we add noise to the node features.
[]
[ { "authors": [ "R Bartels" ], "title": "Algorithm 432, solution of the matrix equation ax+ xb= c. Comm, Ass", "venue": "Computer Machinery,", "year": 1972 }, { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Cesare Alippi", "Lorenzo Livi" ], "title": "Graph neural networks with convolutional arma filters", "venue": "arXiv preprint arXiv:1901.01343,", "year": 2019 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann Lecun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "arXiv preprint arXiv:1801.10247,", "year": 2018 }, { "authors": [ "Siheng Chen", "Aliaksei Sandryhaila", "José MF Moura", "Jelena Kovacevic" ], "title": "Signal denoising on graphs via graph filtering", "venue": "In 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP),", "year": 2014 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Gene Golub", "Stephen Nash", "Charles Van Loan" ], "title": "A hessenberg-schur method for the problem ax+ xb=", "venue": "c. IEEE Transactions on Automatic Control,", "year": 1979 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Elvin Isufi", "Andreas Loukas", "Andrea Simonetto", "Geert Leus" ], "title": "Autoregressive moving average graph filtering", "venue": "IEEE Transactions on Signal Processing,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Stefan Weißenberger", "Stephan Günnemann" ], "title": "Diffusion improves graph learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Julian McAuley", "Christopher Targett", "Qinfeng Shi", "Anton Van Den Hengel" ], "title": "Image-based recommendations on styles and substitutes", "venue": "In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2015 }, { "authors": [ "Galileo Namata", "Ben London", "Lise Getoor", "Bert Huang", "UMD EDU" ], "title": "Query-driven active surveying for collective classification", "venue": "In 10th International Workshop on Mining and Learning with Graphs, pp", "year": 2012 }, { "authors": [ "Hoang NT", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": 
"arXiv preprint arXiv:1905.09550,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Antonio Ortega", "Pascal Frossard", "Jelena Kovačević", "José MF Moura", "Pierre Vandergheynst" ], "title": "Graph signal processing: Overview, challenges, and applications", "venue": "Proceedings of the IEEE,", "year": 2018 }, { "authors": [ "Jiahao Pang", "Gene Cheung", "Antonio Ortega", "Oscar C Au" ], "title": "Optimal graph laplacian regularization for natural image denoising", "venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Ryan Rossi", "Nesreen Ahmed" ], "title": "The network data repository with interactive graph analytics and visualization", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "WOK Asiri Suranga Wijesinghe", "Qing Wang" ], "title": "Dfnets: Spectral cnns for graphs with feedbacklooped filters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": null, "year": 1902 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xiaofan Zhu", "Michael Rabbat" ], "title": "Approximating signals supported on graphs", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2012 }, { "authors": [ "Daniel Zügner", "Stephan Günnemann" ], "title": "Certifiable robustness and robust training for graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs are important research objects in the field of machine learning as they are good carriers for structural data such as social networks and citation networks. Recently, graph neural networks (GNNs) received extensive attention due to their great performances in graph representation learning. A graph neural network takes node features and graph structure (e.g. adjacency matrix) as input, and embeds the graph into a lower-dimensional space. With the success of GNNs (Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Chen et al., 2018) in various domains, more and more efforts are focused on the reasons why GNNs are so powerful (Xu et al., 2019).\nLi et al (Li et al., 2018) re-examined graph convolutional networks (GCNs) and connected it with Laplacian smoothing. NT and Maehara et al (NT & Maehara, 2019) revisited GCNs in terms of graph signal processing and explained that many graph convolutions can be considered as low-pass filters (e.g.(Kipf & Welling, 2017; Wu et al., 2019)) which can capture low-frequency components and remove some feature noise by making connective nodes more similar. In fact, these findings are not new. Since its first appearance in Bruna et al. (2014), spectral GCNs have been closely related to graph signal processing and denoising. The spectral graph convolutional operation is derived from Graph Fourier Transform, and the filter can be formulated as a function with respect to the graph Laplacian matrix, denoted as g(L). In general spectral GCNs, the forward function is: H(l+1) = σ(g(L)H(l)).\nKipf and Welling (Kipf & Welling, 2017) approximated g(L) using first-order Chebyshev polynomials, which can be simplified as multiplying the augmented normalized adjacency matrix to the feature matrix. Despite the efficiency, this first-order graph filter is found sensitive to changes in the graph signals and the underlying graph structure (Isufi et al., 2016; Bianchi et al., 2019). For instance, on isolated nodes or small single components of the graph, their denoising effect is quite limited due to the lack of reliable neighbors. The potential incorrect structure information will also constrain the power of GCNs and cause more negative impacts with deeper layers. As noisy/incorrect information is inevitable in real-world graph data, more powerful and robust GCNs are needed to solve this problem. In this work, we propose a new graph neural network with more powerful denoising effects from the perspective of graph signal processing and higher fault tolerance to the graph structure.\nDifferent from image data, graph data usually has high dimensional features, and there may be some latent connection/correlation between each dimensions. Noting this, we take this connection information into account to offset the efforts of certain unreliable structure information, and remove extra noise by applying a smoothness assumption on such a ”feature graph”. Derived from the additional Laplacian smoothing regularization in this feature graph, we obtain a novel variant of\nspectral GCNs, named BiGCN, which contains low-pass graph filters for both the original graph and a latent feature connection graph in each convolution layer. Our model can extract low-frequency components from both the graphs, so it is more expressive than the original spectral GCN; and it removes the noise from two directions, so it is also more robust.\nWe evaluate our model on two tasks: node classification and link prediction. 
In addition to the original graph data, in order to demonstrate the effectiveness of our model with respect to graph signal denoising and fault tolerance, we design three cases with noise/structure mistakes: randomly adding Gaussian noise with different variances to a certain percentage of nodes; adding different levels of Gaussian noise to the whole graph feature; and changing a certain percentage of connections. The remarkable performances of our model in these experiments verify our power and robustness on both clean data and noisy data.\nThe main contributions of this work are summarized below.\n• We propose a new framework for the representation learning of graphs with node features. Instead of only considering the signals in the original graph, we take into account the feature correlations and make the model more robust.\n• We formulate our graph neural network based on Laplacian smoothing and derive a bidirectional low-pass graph filter using the Alternating Direction Method of Multipliers (ADMM) algorithm.\n• We set three cases to demonstrate the powerful denoising capacity and high fault tolerance of our model in tasks of node classification and link prediction." }, { "heading": "2 RELATED WORK", "text": "We summarize the related work in the field of graph signal processing and denoising and recent work on spectral graph convolutional networks as follows." }, { "heading": "2.1 GRAPH SIGNAL PROCESSING AND DENOISING", "text": "Graph-structured data is ubiquitous in the world. Graph signal processing (GSP) (Ortega et al., 2018) is intended for analyzing and processing the graph signals whose values are defined on the set of graph vertices. It can be seen as a bridge between classical signal processing and spectral graph theory. One line of the research in this area is the generalization of the Fourier transform to the graph domain and the development of powerful graph filters (Zhu & Rabbat, 2012; Isufi et al., 2016). It can be applied to various tasks, such as representation learning and denoising (Chen et al., 2014). More recently, the tools of GSP have been successfully used for the definition of spectral graph neural networks, making a strong connection between GSP and deep learning. In this work, we restart with the concepts from graph signal processing and define a new smoothing model for deep graph learning and graph denoising. It is worth mentioning that the concept of denoising/robustness in GSP is different from the defense/robustness against adversarial attacks (e.g. (Zügner & Günnemann, 2019)), so we do not make comparisons with those models." }, { "heading": "2.2 SPECTRAL GRAPH CONVOLUTIONAL NETWORKS", "text": "Inspired by the success of convolutional neural networks in images and other Euclidean domains, the researcher also started to extend the power of deep learning to graphs. One of the earliest trends for defining the convolutional operation on graphs is the use of the Graph Fourier Transform and its definition in the spectral domain instead of the original spatial domain (Bruna et al., 2014). Defferrard et al (Defferrard et al., 2016) proposed ChebyNet which defines a filter as Chebyshev polynomials of the diagonal matrix of eigenvalues, which can be exactly localized in the k-hop neighborhood. Later on, Kipf and Welling (Kipf & Welling, 2017) simplified the Chebyshev filters using the first-order polynomial filter, which led to the well-known graph convolutional network. Recently, many new spectral graph filters have been developed. 
For example, the rational auto-regressive moving average graph filters (ARMA) (Isufi et al., 2016; Bianchi et al., 2019) are proposed to enhance the modeling capacity of GNNs. Compared to the polynomial ones, ARMA filters are more robust and provide a more flexible graph frequency response. Feedback-looped filters (Wijesinghe & Wang, 2019) further\nimproved localization and computational efficiency. There is also another type of graph convolutional networks that defines convolutional operations in the spatial domain by aggregating information from neighbors. The spatial types are not closely related to our work, so it is beyond the scope of our discussion. As we will discuss later, our model is closely related to spectral graph convolutional networks. We define our graph filter from the perspective of Laplacian smoothing, and then extend it not only to the original graph but also to a latent feature graph in order to improve the capacity and robustness of the model." }, { "heading": "3 BACKGROUND: GRAPH SIGNAL PROCESSING", "text": "In this section, we will briefly introduce some concepts of graph signal processing (GSP), including graphs smoothness, graph Fourier Transform and graph filters, which will be used in later sections.\nGraph Laplacian and Smoothness. A graph can be represented as G = (V,E), which consists of a set of n nodes V = {1, . . . , n} and a set of edges E ⊆ V × V . In this paper, we only consider undirected attributed graphs. We denote the adjacency matrix of G as A = (aij) ∈ Rn×n and the degree matrix of G as D = diag(d(1), . . . , d(n)) ∈ Rn×n. In the degree matrix, d(i) represents the degree of vertex i ∈ V . We consider that each vertex i ∈ V associates a scalar x(i) ∈ R which is also called a graph signal. All graph signals can be represented by x ∈ Rn. Some variants of graph Laplacian can be defined on graph G. We denote the graph Laplacian of G as L = D − A ∈ Rn×n. It should be noted that the sum of rows of graph Laplacian L is zero. The smoothness of a graph signal x can be measure through the quadratic form of graph Laplacian: ∆(x) = xTLx = Σi,j 1 2aij(x(i) − x(j))\n2. Due to the fact that xTLx ≥ 0, L is a semi-positive definite and symmetric matrix.\nGraph Fourier Transform and Graph Filters. Decomposing the Laplacian matrix with L = UΛUT , we can get the orthogonal eigenvectors U as Fourier basis and eigenvalues Λ as graph frequencies. The Graph Fourier Transform F : Rn → Rn is defined by Fx = x̂ := UTx. The inverse Graph Fourier Transform is defined by F−1x̂ = x := Ux̂. It enables us to transfer the graph signal to the spectral domain, and then define a graph filter g in the spectral domain for filtering the graph signal x: g(L)x = Ug(Λ)UTx = Ug(Λ)F(x) where g(Λ) = diag(g(λ1), ...g(λN )) controls how the graph frequencies can be altered." }, { "heading": "4 BIGCN", "text": "The Graph Fourier Transform has been successfully used to define various low-pass filters on graph signals (column vectors of feature matrix) and derive spectral graph convolutional networks (Defferrard et al., 2016; Bianchi et al., 2019; Wijesinghe & Wang, 2019). A spectral graph convolutional operation can be formulated as a function g with respect to the Laplacian matrix L. Although it can smooth the graph and remove certain feature-wise noise by assimilating neighbor nodes, it is sensitive to node-wise noise and unreliable structure information. 
" }, { "heading": "4 BIGCN", "text": "The graph Fourier transform has been successfully used to define various low-pass filters on graph signals (the column vectors of the feature matrix) and to derive spectral graph convolutional networks (Defferrard et al., 2016; Bianchi et al., 2019; Wijesinghe & Wang, 2019). A spectral graph convolutional operation can be formulated as a function $g$ of the Laplacian matrix $L$. Although such an operation can smooth the graph and remove certain feature-wise noise by assimilating neighboring nodes, it is sensitive to node-wise noise and unreliable structure information. Notice that when the node features contain rich information, there may exist correlations between different feature dimensions, which can be exploited to address this low fault tolerance. It is therefore natural to define filters on “feature signals” (the row vectors of the graph feature matrix) based on the feature correlations. Inspired by this, we propose a bi-directional spectral GCN, named BiGCN, with column filters and row filters derived from the Laplacian smoothness assumption, as shown in Fig 1. In this way, we enhance the denoising capacity and the fault tolerance to graph structure of spectral graph convolutions. To explain it better, we start with the following simple case." }, { "heading": "4.1 FROM LAPLACIAN SMOOTHING TO GRAPH CONVOLUTION", "text": "Assuming that $f = y_0 + \eta$ is an observation with noise $\eta$, to recover the true graph signal $y_0$, a natural optimization problem is

$$\min_y \; \|y - f\|_2^2 + \lambda\, y^T L y,$$

where $\lambda$ is a hyper-parameter and $L$ is the (normalized) Laplacian matrix. The optimal solution of this problem, taken as the estimate of the true graph signal, is

$$y = (I + \lambda L)^{-1} f. \quad (1)$$

If we generalize the noisy graph signal $f$ to a noisy feature matrix $F = Y_0 + N$, then the true graph feature matrix $Y_0$ can be estimated as

$$\hat{Y}_0 = \arg\min_Y \|Y - F\|_F^2 + \lambda\, \mathrm{trace}(Y^T L Y) = (I + \lambda L)^{-1} F. \quad (2)$$

The Laplacian regularization term $\mathrm{trace}(Y^T L Y)$ encodes a smoothness assumption on the feature matrix, and $(I + \lambda L)^{-1}$ is equivalent to a low-pass filter in the graph spectral domain which can remove feature-wise (column-wise) noise and can be used to define a new graph convolutional operation. Specifically, by multiplying with a learnable matrix $W$ (i.e. adding a linear layer for node feature transformation beforehand, similar to (Wu et al., 2019; NT & Maehara, 2019)), we obtain a new graph convolutional layer:

$$H^{(l+1)} = \sigma\big((I + \lambda L)^{-1} H^{(l)} W^{(l)}\big). \quad (3)$$

In order to reduce the computational complexity, we can simplify the propagation formulation by approximating $(I + \lambda L)^{-1}$ with its first-order Taylor expansion $I - \lambda L$.
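As a reference for the reader, here is a minimal NumPy sketch (ours; the function name and the ReLU choice of $\sigma$ are illustrative assumptions) of the smoothing layer of Eq. (3) and its Taylor-approximated variant.

```python
import numpy as np

def smoothing_gcn_layer(H, L, W, lam=1.0, exact=False):
    """One graph convolutional layer H' = sigma((I + lam*L)^-1 H W), as in Eq. (3).

    If exact=False, (I + lam*L)^-1 is replaced by its first-order Taylor
    approximation I - lam*L to avoid the matrix inversion.
    """
    n = L.shape[0]
    if exact:
        P = np.linalg.inv(np.eye(n) + lam * L)
    else:
        P = np.eye(n) - lam * L
    Z = P @ H @ W
    return np.maximum(Z, 0.0)  # ReLU as an example choice of sigma
```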
" }, { "heading": "4.2 BI-DIRECTIONAL SMOOTHING AND FILTERING", "text": "Considering the latent correlation between different feature dimensions, and analogously to the graph adjacency matrix, we can define a “feature adjacency matrix” $A'$ to indicate such feature connections. For instance, if the $i$-th, $j$-th and $k$-th feature dimensions refer to “height”, “weight” and “age” respectively, then “weight” may be strongly correlated with “height” but weakly correlated with “age”, so it is reasonable to assign $A'_{ji} = 1$ and $A'_{jk} = 0$ (if we assume $A'$ is a 0–1 matrix). With a given feature adjacency matrix, we can construct a corresponding “feature graph” in which nodes represent feature dimensions and edges represent correlation relationships. In addition, if $Y \in \mathbb{R}^{n \times d}$ is the feature matrix of graph $G$, then $Y^T \in \mathbb{R}^{d \times n}$ is the feature matrix of the feature graph. That is, the column vectors of $Y$ are the feature vectors of the original nodes, while its row vectors are exactly the feature vectors of the “feature nodes”. Analogously, we can derive the Laplacian matrix $L'$ of this feature graph.\nWhen noise is not only feature-wise but also node-wise, or when the graph structure information is not completely reliable, it is beneficial to consider feature correlation information in order to better recover the clean feature matrix. We thus add a Laplacian smoothness regularization on the feature graph to the optimization problem above:

$$\mathcal{L} = \min_Y \|Y - F\|_F^2 + \lambda_1 \mathrm{trace}(Y^T L_1 Y) + \lambda_2 \mathrm{trace}(Y L_2 Y^T). \quad (4)$$

Here $L_1$ and $L_2$ are the normalized Laplacian matrices of the original graph and of the feature graph, and $\lambda_1$, $\lambda_2$ are the hyper-parameters of the two Laplacian regularizers; $\mathrm{trace}(Y L_2 Y^T)$ is the Laplacian regularization on the feature graph, i.e., on the row vectors of the original feature matrix. The solution of this optimization problem satisfies the stationarity condition

$$\frac{\partial \mathcal{L}}{\partial Y} = 2Y - 2F + 2\lambda_1 L_1 Y + 2\lambda_2 Y L_2 = 0. \quad (5)$$

This equation, equivalent to $\lambda_1 L_1 Y + \lambda_2 Y L_2 = F - Y$, is a Sylvester equation. The numerical solution of Sylvester equations can be computed with classical algorithms such as the Bartels–Stewart algorithm (Bartels, 1972), the Hessenberg–Schur method (Golub et al., 1979) and the LAPACK algorithm (Anderson et al., 1999). However, all of them require a Schur decomposition, which involves Householder transformations and QR iterations with $O(n^3)$ computational cost. Consequently, instead of solving the Sylvester equation directly, we transform the original problem into a bi-criteria optimization problem with an equality constraint:

$$\mathcal{L} = \min_{Y_1} f(Y_1) + \min_{Y_2} g(Y_2) \quad \text{s.t.} \quad Y_2 - Y_1 = 0,$$
$$f(Y_1) = \frac{1}{2}\|Y_1 - F\|_F^2 + \lambda_1 \mathrm{trace}(Y_1^T L_1 Y_1),$$
$$g(Y_2) = \frac{1}{2}\|Y_2 - F\|_F^2 + \lambda_2 \mathrm{trace}(Y_2 L_2 Y_2^T). \quad (6)$$

We adopt the ADMM algorithm (Boyd et al., 2011) to solve this constrained convex optimization problem. The augmented Lagrangian of $\mathcal{L}$ is

$$\mathcal{L}_p(Y_1, Y_2, Z) = f(Y_1) + g(Y_2) + \mathrm{trace}\big(Z^T (Y_2 - Y_1)\big) + \frac{p}{2}\|Y_2 - Y_1\|_F^2. \quad (7)$$

The ADMM update iterations are

$$Y_1^{(k+1)} := \arg\min_{Y_1} \mathcal{L}_p(Y_1, Y_2^{(k)}, Z^{(k)}) = \arg\min_{Y_1} \frac{1}{2}\|Y_1 - F\|_F^2 + \lambda_1 \mathrm{trace}(Y_1^T L_1 Y_1) + \mathrm{trace}\big(Z^{(k)T}(Y_2^{(k)} - Y_1)\big) + \frac{p}{2}\|Y_2^{(k)} - Y_1\|_F^2,$$
$$Y_2^{(k+1)} := \arg\min_{Y_2} \mathcal{L}_p(Y_1^{(k+1)}, Y_2, Z^{(k)}) = \arg\min_{Y_2} \frac{1}{2}\|Y_2 - F\|_F^2 + \lambda_2 \mathrm{trace}(Y_2 L_2 Y_2^T) + \mathrm{trace}\big(Z^{(k)T}(Y_2 - Y_1^{(k+1)})\big) + \frac{p}{2}\|Y_2 - Y_1^{(k+1)}\|_F^2,$$
$$Z^{(k+1)} = Z^{(k)} + p\,(Y_2^{(k+1)} - Y_1^{(k+1)}). \quad (8)$$

Computing the stationary points of $\mathcal{L}_p(Y_1, Y_2^{(k)}, Z^{(k)})$ and $\mathcal{L}_p(Y_1^{(k+1)}, Y_2, Z^{(k)})$ yields the closed-form iterations

$$Y_1^{(k+1)} = \frac{1}{1+p}\Big(I + \frac{2\lambda_1}{1+p} L_1\Big)^{-1}\big(F + p Y_2^{(k)} + Z^{(k)}\big),$$
$$Y_2^{(k+1)} = \frac{1}{1+p}\big(F + p Y_1^{(k+1)} - Z^{(k)}\big)\Big(I + \frac{2\lambda_2}{1+p} L_2\Big)^{-1}. \quad (9)$$

To decrease the computational cost, we can use a first-order Taylor approximation to simplify the iterations, choosing hyper-parameters $p$, $\lambda_1$ and $\lambda_2$ such that the eigenvalues of $\frac{2\lambda_1}{1+p} L_1$ and $\frac{2\lambda_2}{1+p} L_2$ all fall into $[-1, 1]$:

$$Y_1^{(k+1)} = \frac{1}{1+p}\Big(I - \frac{2\lambda_1}{1+p} L_1\Big)\big(F + p Y_2^{(k)} + Z^{(k)}\big),$$
$$Y_2^{(k+1)} = \frac{1}{1+p}\big(F + p Y_1^{(k+1)} - Z^{(k)}\big)\Big(I - \frac{2\lambda_2}{1+p} L_2\Big),$$
$$Z^{(k+1)} = Z^{(k)} + p\,(Y_2^{(k+1)} - Y_1^{(k+1)}). \quad (10)$$

In each iteration, as shown in Fig 1, we update $Y_1$ by applying the column low-pass filter $I - \frac{2\lambda_1}{1+p} L_1$ to the previous $Y_2$, and then update $Y_2$ by applying the row low-pass filter $I - \frac{2\lambda_2}{1+p} L_2$ to the new $Y_1$. To some extent, the new $Y_1$ contains the low-frequency column components of the previous $Y_2$, and the new $Y_2$ contains the low-frequency row components of the new $Y_1$. After $k$ iterations (in our experiments, $k = 2$), we take the mean of $Y_1^{(k)}$ and $Y_2^{(k)}$ as the approximate solution $Y$, and denote it $Y = \mathrm{ADMM}(F, L_1, L_2)$. In this way, the output of ADMM contains both kinds of low-frequency components. Moreover, we can generalize $L_2$ to a learnable symmetric matrix based on the original feature matrix $F$ (or on prior knowledge), since it is hard to give a quantitative description of feature correlations.\nIn the $(l+1)$-th propagation layer, $F = H^{(l)}$ is the output of the $l$-th layer and $L_2$ is a learnable symmetric matrix depending on $H^{(l)}$, which we denote $L_2^{(l)}$. The entire formulation is

$$H^{(l+1)} = \sigma\big(\mathrm{ADMM}(H^{(l)}, L_1, L_2^{(l)})\, W^{(l)}\big). \quad (11)$$
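The Taylor-approximated iterations of Eq. (10) translate directly into a few lines of NumPy. The sketch below is our illustration (names and default values are assumptions); in the full layer of Eq. (11), its output would be multiplied by $W^{(l)}$ and passed through $\sigma$.

```python
import numpy as np

def bigcn_admm(F, L1, L2, lam1=1.0, lam2=1.0, p=1.0, n_iter=2):
    """Approximate bidirectional smoothing Y = ADMM(F, L1, L2), Eqs. (6)-(10).

    L1 is the n x n Laplacian of the original graph, L2 the d x d Laplacian
    of the feature graph. Uses the first-order Taylor approximations
    (I - 2*lam/(1+p) * L) in place of the exact inverses, as in Eq. (10).
    """
    n, d = F.shape
    col_filter = np.eye(n) - (2.0 * lam1 / (1.0 + p)) * L1   # acts on columns
    row_filter = np.eye(d) - (2.0 * lam2 / (1.0 + p)) * L2   # acts on rows
    Y1, Y2, Z = F.copy(), F.copy(), np.zeros_like(F)
    for _ in range(n_iter):
        Y1 = col_filter @ (F + p * Y2 + Z) / (1.0 + p)
        Y2 = ((F + p * Y1 - Z) @ row_filter) / (1.0 + p)
        Z = Z + p * (Y2 - Y1)
    return 0.5 * (Y1 + Y2)  # mean of the two low-frequency components
```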
Discussion about over-smoothing. Since our algorithm is derived from a bidirectional smoothing, one may worry about the over-smoothing problem. The over-smoothing issue of GCN is explored in (Li et al., 2018; Oono & Suzuki, 2020), whose main claim is that when a GCN model goes very deep, it encounters the over-smoothing problem and loses its expressive power. From this perspective, our model would face the same problem if we stacked many layers. However, a single BiGCN layer is simply a more expressive and robust filter than a normal GCN layer. Indeed, compared with a single-direction low-pass filtering GCN with the general forward function $H^{(l+1)} = \sigma(g(L_1) H^{(l)} W^{(l)})$, the term $\mathrm{ADMM}(H^{(l)}, L_1, L_2^{(l)})$, which combines low-frequency components of both the column and row vectors of $H^{(l)}$, is more informative than $g(L_1) H^{(l)}$, since the latter can be regarded, to some extent, as one part of the former. This also shows that BiGCN is more expressive than single-direction low-pass filtering GCNs. Furthermore, when we take $L_2$ to be an identity matrix in Equation (5), BiGCN degenerates to a single-directional GCN with low-pass filter $((1+\lambda_2) I + \lambda_1 L_1)^{-1}$, which illustrates that BiGCN has a more general model capacity. More technical details are given in the Appendix.\nIn practice, we can also mix BiGCN layers with original GCN layers or use jumping knowledge (Xu et al., 2018) to alleviate the over-smoothing problem: for example, we can use BiGCN at the bottom and stack other GCN layers above. As we will show in the experiments, the added smoothing term in the BiGCN layers does not lead to over-smoothing; instead, it improves performance on various datasets." }, { "heading": "5 EXPERIMENT", "text": "We test BiGCN on two graph-based tasks, semi-supervised node classification and link prediction, on several benchmarks. As these datasets are usually carefully collected through a rigid screening, their noise is negligible. In many real-world data, however, noise is everywhere and cannot be ignored. To highlight the denoising capacity of the bi-directional filters, we design three cases and conduct extensive experiments on artificially noised data. In the noise level case, we add different levels of noise to the whole graph. In the noise rate case, we randomly add noise to a portion of the nodes. Considering potentially unreliable connections in the graph, and to fully verify the fault tolerance to structure information, we set up the structure mistakes case, in which we alter the graph structure. We compare our performance with several baselines including the original GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017), GIN (Xu et al., 2019), and GDC (Klicpera et al., 2019)." }, { "heading": "5.1 BENCHMARK DATASETS", "text": "We conduct link prediction experiments on citation networks, and node classification experiments on both citation and co-purchase networks.\nCitation. A citation network dataset consists of documents as nodes and citation links as directed edges. We use three undirected citation graph datasets: Cora (Sen et al., 2008), CiteSeer (Rossi & Ahmed, 2015), and PubMed (Namata et al., 2012) for both node classification and link prediction tasks, as they are common to all baseline approaches. In addition, we add another citation network, DBLP (Pang et al., 2015), for the link prediction tasks.
Co-purchase. We also use two co-purchase networks, Amazon Computers (McAuley et al., 2015) and Amazon Photos (Shchur et al., 2018), which take goods as nodes; the task is to predict the product category of each good. The features are bag-of-words node features, and an edge indicates that two goods are frequently bought together." }, { "heading": "5.2 EXPERIMENTAL SETUP", "text": "We train a two-layer BiGCN, the same depth as the other baselines. Details of the hyper-parameter settings and of the noise case settings are given in the appendix.\nLearnable $L_2$. We introduce a completely learnable $L_2$ in our experiments. In detail, we define $L_2 = I - D_2^{-1/2} A_2 D_2^{-1/2}$ with $A_2 = W_2 + W_2^T$, where $W_2 = \mathrm{sigmoid}(W)$ and $W$ is an upper-triangular matrix parameter to be optimized. To make it sparse, we also add $\ell_1$ regularization on $L_2$. $L_2$ is defined separately for each layer. Note that our framework is general, and in practice there may be other reasonable choices for $L_2$ (e.g. as discussed in the Appendix)." }, { "heading": "5.3 BASELINE MODELS", "text": "We compare our BiGCN with several state-of-the-art GNN models: GCN (Kipf & Welling, 2017); GraphSAGE (Hamilton et al., 2017); GAT (Veličković et al., 2017); GIN (Xu et al., 2019), the Graph Isomorphism Network; and GDC (Klicpera et al., 2019), graph diffusion convolution based on generalized graph diffusion. For GDC, we compare against the variant that leverages personalized PageRank graph diffusion to improve the original GCN. The way we adapt GCN to link prediction tasks is consistent with the implementation in P-GNN." }, { "heading": "5.4 RESULTS", "text": "We set up three types of noise cases, in terms of noise level, noise rate and structure mistakes, to evaluate each model on node classification and link prediction tasks (structure mistakes are evaluated on node classification only). “Noise level” and “noise rate” add different types of noise to the node features, while “structure mistakes” means that we randomly remove or add edges in the original graph. For noise on node features, we expect BiGCN to show its ability as a graph filter; for structural errors, we expect the latent feature graph to help correct the errors in the original graph. The detailed settings of these cases, as well as some additional experimental results, can be found in the Appendix.\nNoise level case. In this case, we add Gaussian noise with a fixed variance (from 0.1 to 0.9, called the noise level) to the feature matrix. As Fig 2 shows, BiGCN outperforms the other baselines and shows flatter declines with increasing noise levels, demonstrating better robustness on both node classification and link prediction tasks.\nNoise rate case. Here, we randomly choose a portion of the nodes at a fixed percentage (from 0.1 to 0.9, called the noise rate) and add Gaussian noise to them. From Fig 3 we can see that, on the two tasks, BiGCN performs much better than the baselines on all benchmarks apart from Cora. In particular, on the PubMed dataset, BiGCN improves node classification accuracy by more than 10%.\nStructure mistakes case. Structure mistakes refer to incorrect interaction relationships among nodes. In this setting, we artificially remove or add a certain percentage of edges at random and conduct experiments on node classification. Fig 4 illustrates the outstanding robustness of BiGCN, superior to all baselines, demonstrating that our bi-directional filters can effectively utilize information from the latent feature graph and drastically reduce the negative impact of incorrect structural information.
Finally, we would like to mention that our model also outperforms the other models in most cases on clean data without noise. This can be attributed to BiGCN's ability to efficiently extract graph features through its bidirectional filters. The detailed values behind the figures are listed in the Appendix." }, { "heading": "6 CONCLUSION", "text": "We proposed the bidirectional low-pass filtering GCN, a more powerful and robust network than general spectral GCNs. The bidirectional filter of BiGCN can capture more informative graph signal components than a single-directional one. With the help of latent feature correlations, BiGCN also enhances the network's tolerance to noisy graph signals and unreliable edge connections. Extensive experiments show that our model achieves remarkable performance improvements on noisy graphs." }, { "heading": "A MODEL EXPRESSIVENESS", "text": "In this section, we add more details about our discussion of over-smoothing in Section 4.\nAs a bi-directional low-pass filter, our model can extract more informative features in the spectral domain. To simplify the analysis, let us take just one step of ADMM ($k = 1$). Since $Z^{(0)} = 0$ and $Y_1^{(0)} = Y_2^{(0)} = F$, the solution from Equation (10) is

$$Y_1 = \Big(I - \frac{2\lambda_1}{1+p} L_1\Big) F,$$
$$Y_2 = \Big(I - \frac{2p\lambda_1}{(1+p)^2} L_1\Big) F \Big(I - \frac{2\lambda_2}{1+p} L_2\Big) = \bigg(\Big(I - \frac{2\lambda_2}{1+p} L_2\Big) F^T \Big(I - \frac{2p\lambda_1}{(1+p)^2} L_1\Big)\bigg)^T.$$

From this solution, we can see that $Y_1$ is a low-pass filter output extracting low-frequency features from the original graph via $L_1$, while $Y_2$ extracts low-frequency features from the feature graph via $L_2$ and then applies a transformation. Since we take the average of $Y_1$ and $Y_2$ as the output of $\mathrm{ADMM}(H, L_1, L_2)$, the BiGCN layer extracts low-frequency features from both graphs. In other words, our model adds new information from the latent feature graph without losing any features of the original graph. Compared to the original single-directional GCN, our model therefore has more informative features and more representational power.\nWhen we take more than one step of ADMM, Equation (10) shows that the additive component $(I - \frac{2\lambda_1}{1+p} L_1) F$ always appears in $Y_1$ (up to a scaling coefficient), and the component $F (I - \frac{2\lambda_2}{1+p} L_2)$ always appears in $Y_2$. The output of the BiGCN layer thus always contains the low-frequency features of both the original graph and the feature graph, plus some additional transformed features, which leads to the same conclusion as in the one-step case." }, { "heading": "B SENSITIVITY ANALYSIS", "text": "To demonstrate how the hyper-parameters (the number of ADMM iterations, $\lambda_2$, $p$ and $\lambda$) influence BiGCN, we take Cora as an example and present node classification results under certain settings of artificial noise.\nFirst, we investigate the influence of the number of iterations and of $\lambda_2$ on clean data and on three noise cases with 0.2 noise rate, 0.2 noise level and 0.1% structure mistakes, respectively. Fig 5(a) shows that ADMM with 2 iterations is good enough and that the choice of $\lambda_2$ has very little impact on the results, since it can be absorbed into the learnable $L_2$. We then take the particular case with noise rate equal to 0.2 as an example to illustrate how much the performance of BiGCN depends on $p$ and $\lambda$. Fig 5(b) shows that $p$ guarantees relatively stable performance over a wide range of values, and only $\lambda$ has a comparatively larger impact." }, { "heading": "C FLEXIBLE SELECTION OF L2", "text": "In our paper, we treat the latent feature graph $L_2$ as a learnable matrix and optimize it automatically, as sketched below.
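The following NumPy sketch (ours) shows one concrete reading of the construction from Section 5.2. Masking the sigmoid output to the upper triangle is our assumption, since the text only states that $W$ is upper-triangular; the $\ell_1$ penalty and gradient-based optimization of $W$ are omitted.

```python
import numpy as np

def learnable_L2(W):
    """Build L2 = I - D2^{-1/2} A2 D2^{-1/2} from a free parameter matrix W.

    A2 = W2 + W2^T with W2 = sigmoid(W) restricted to the upper triangle
    (including the diagonal). In training, W would be optimized jointly with
    the rest of the network, with an l1 penalty on L2 to encourage sparsity.
    """
    d = W.shape[0]
    W2 = np.triu(1.0 / (1.0 + np.exp(-W)))       # sigmoid, upper triangle kept
    A2 = W2 + W2.T                                # symmetric feature adjacency
    d_inv_sqrt = 1.0 / np.sqrt(A2.sum(axis=1))
    return np.eye(d) - (A2 * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
```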
In practice, however, $L_2$ can also be given other, fixed forms. For example, a common way to deal with latent correlations is to use a correlation graph (Li et al., 2017). Another special case: if we define $L_2$ as an identity matrix, our model degenerates to a normal (single-directional) low-pass filtering GCN. Indeed, taking $L_2 = I$ in Equation (5), the solution becomes

$$Y = \big((1+\lambda_2) I + \lambda_1 L_1\big)^{-1} F,$$

which is similar to the single-directional low-pass filter of Equation (2). The BiGCN layer then degenerates to the GCN layer

$$H^{(l+1)} = \sigma\big(((1+\lambda_2) I + \lambda_1 L_1)^{-1} H^{(l)} W^{(l)}\big).$$

To show the difference between different definitions of $L_2$, we design a simple approach using a thresholded correlation matrix for $L_2$, to compare with the method used in our main paper. In particular, we define an edge weight $A_{ij}$ as follows:

$$(P_{ij})_{j \in \mathcal{N}(i) \cup \{i\}} = \mathrm{softmax}\Big(\Big[\frac{x_i^T x_j}{\|x_i\|\,\|x_j\|}\Big]_{j \in \mathcal{N}(i) \cup \{i\}}\Big), \qquad A_{ij} = \begin{cases} 0, & P_{ij} \le \mathrm{mean}(P) \\ 1, & P_{ij} > \mathrm{mean}(P). \end{cases}$$

We then compute $L_2$ as the normalized Laplacian obtained from $A$, i.e. $L_2 = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$. For a simple demonstration, we only compare the two models on Cora with node feature noise. From Table 1 and Table 2, we can see that our learnable $L_2$ is better overall. However, a fixed $L_2$ can still give decent results, and when the node feature dimension is large, fixing $L_2$ may be more efficient.
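As a companion to the learnable variant above, this NumPy sketch (ours) builds the fixed, thresholded-correlation alternative just described. It follows the printed formula literally, with $x_i$ taken as row vectors indexed by the neighborhoods $\mathcal{N}(i)$; symmetrization, the mean taken over nonzero entries, and self-loops are our assumptions to obtain a well-defined normalized operator, and the dense loop is for clarity, not efficiency.

```python
import numpy as np

def thresholded_correlation_L2(X, A):
    """Fixed L2 from thresholded cosine similarities over graph neighborhoods.

    X : feature matrix (rows are the x_i); A : 0-1 adjacency giving N(i).
    P holds softmax-normalized cosine similarities over N(i) plus self; edges
    are kept where P exceeds its mean, then the normalized operator
    D^{-1/2} A2 D^{-1/2} is returned, with self-loops added (the tilde
    matrices of Appendix C).
    """
    n = X.shape[0]
    norms = np.linalg.norm(X, axis=1) + 1e-8
    P = np.zeros((n, n))
    for i in range(n):
        nbrs = np.flatnonzero(A[i]).tolist() + [i]
        cos = X[nbrs] @ X[i] / (norms[nbrs] * norms[i])
        e = np.exp(cos - cos.max())                  # numerically stable softmax
        P[i, nbrs] = e / e.sum()
    A2 = (P > P[P > 0].mean()).astype(float)
    A2 = np.maximum(A2, A2.T) + np.eye(n)            # symmetrize, add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A2.sum(axis=1))
    return (A2 * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
```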
" }, { "heading": "D EXPERIMENTAL DETAILS", "text": "We train a two-layer BiGCN, the same as the other baselines, using Adam as the optimizer with a 0.01 learning rate, $5 \times 10^{-4}$ weight decay, and a 0.5 dropout rate for all benchmarks and baselines. In the node classification task, we use early stopping with patience 100 and select the best-performing models based on validation set accuracy. In the link prediction task, we train each classifier for at most 100 epochs and report the test ROC-AUC selected according to the best validation ROC-AUC, evaluated every 10 epochs. In addition, we follow the experimental setting of P-GNN (position-aware GNN), and the way we adapt GCN to link prediction tasks is consistent with the P-GNN implementation. We set a random seed for each run and report the mean test results over 10 runs.\nAll the experimental datasets are taken from PyTorch Geometric. We test BiGCN and the other baselines on the whole graph, whereas in GDC only the largest connected component of the graph is selected; thus, the experimental results we report for GDC may not be completely consistent with those reported in the GDC paper. We also found that the citation datasets in PyTorch Geometric differ slightly from those used in GCN, GraphSAGE, and GAT, which may be why their accuracy results on Citeseer and Pubmed in node classification are a little lower than reported in the original papers.\nTo highlight the denoising capacity of the bi-directional filters, we design the following three cases and conduct extensive experiments on artificially noised data. The noise level and noise rate cases add noise to node features, while the structure mistakes case perturbs the graph structure; a sketch of the three corruption protocols is given below.\nNoise level case. In this case, we add zero-mean Gaussian noise to all the node features in the graph, i.e. to the feature matrix, and use the variance of the Gaussian (from 0.1 to 0.9) as the quantitative index of the noise level.\nNoise rate case. In this case, we add Gaussian noise with the same distribution to different proportions of the nodes, i.e. to some rows of the feature matrix, chosen at random, and quantitatively study how the percentage (from 10% to 100%) of nodes with noisy features impacts model performance.\nStructure mistakes case. In practice, it is common and inevitable to observe wrong or spurious link information in real-world data, especially in large-scale networks such as social networks. Therefore, we artificially make random changes to the graph structure, removing edges or adding false edges by directly reversing values of the original adjacency matrix (from 0 to 1 or from 1 to 0) symmetrically, to obtain an erroneous adjacency matrix. We choose different error scales to decide how many values are reversed at random. For example, assigning a 0.01% error rate to a graph of 300 vertices means that $0.01 \times 10^{-2} \times 300^2 = 9$ values, symmetrically distributed in the adjacency matrix, will be changed.\nWe conduct all of the above cases on five benchmarks for the node classification tasks, and the first two cases on four benchmarks for the link prediction tasks.
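The following minimal NumPy sketch (ours; the function names, the default noise scale, and the pair-flip convention are illustrative assumptions) implements the three corruption cases above.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_level(X, sigma):
    """Noise level case: zero-mean Gaussian noise on the whole feature matrix."""
    return X + rng.normal(0.0, sigma, size=X.shape)

def noise_rate(X, rate, sigma=0.3):
    """Noise rate case: corrupt a random fraction `rate` of the rows of X."""
    X = X.copy()
    idx = rng.choice(X.shape[0], size=int(rate * X.shape[0]), replace=False)
    X[idx] += rng.normal(0.0, sigma, size=(len(idx), X.shape[1]))
    return X

def structure_mistakes(A, error_rate_percent):
    """Structure mistakes case: symmetrically flip adjacency entries.

    For n = 300 and error_rate_percent = 0.01, roughly
    0.01 * 1e-2 * 300**2 = 9 entries are reversed (up to pair rounding).
    """
    A = A.copy()
    n = A.shape[0]
    n_values = int(round(error_rate_percent * 1e-2 * n * n))
    for _ in range(max(1, n_values // 2)):           # each flip changes 2 values
        i, j = rng.integers(n), rng.integers(n)
        A[i, j] = 1 - A[i, j]
        A[j, i] = A[i, j]
    return A
```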
For more experimental details, please refer to our code: https://anonymous.4open.science/r/4fefefed-4d59-4214-a324-832ac0ef1e96/." }, { "heading": "D.1 DATASETS", "text": "We use three citation networks (Cora, Citeseer, and Pubmed) and two co-purchase networks for the node classification tasks, and all the citation datasets for link prediction." }, { "heading": "D.2 EXPERIMENTAL RESULTS ON CLEAN DATA", "text": "The performances of the models on clean benchmarks in node classification and link prediction are shown in Table 4 and Table 5, respectively. These results correspond to the values with noise level 0 in the figures of Section 5." }, { "heading": "D.3 EXPERIMENTAL RESULTS ON AMZ COMP", "text": "The node classification performances of the models on the AMZ Comp dataset are shown in Fig 6." }, { "heading": "E NUMERICAL RESULTS AND HYPERPARAMETERS", "text": "In order to facilitate future research comparing with our results, we share the exact numeric results here, in addition to the curves shown in the figures of the experimental section. We also share the experimental environment and the optimal hyperparameters used to obtain the results in E.2." }, { "heading": "E.1 NUMERICAL RESULTS", "text": "" }, { "heading": "E.1.1 NOISE RATE (NR)", "text": "Node Classification (NC)

Table 6: Cora - NR - NC

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.751 0.706 0.662 0.631 0.606
SAGE 0.768 0.717 0.685 0.656 0.645
GAT 0.713 0.668 0.626 0.605 0.603
GIN 0.712 0.654 0.621 0.607 0.601
GDC 0.814 0.806 0.799 0.784 0.783
BiGCN 0.802 0.785 0.770 0.745 0.734

Table 7: Citeseer - NR - NC

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.597 0.553 0.483 0.442 0.404
SAGE 0.612 0.543 0.497 0.450 0.427
GAT 0.564 0.457 0.405 0.371 0.346
GIN 0.535 0.468 0.432 0.405 0.401
GDC 0.617 0.575 0.548 0.520 0.511
BiGCN 0.626 0.580 0.561 0.531 0.516

Table 8: PubMed - NR - NC

Table 9: Computers - NR - NC

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.913 0.908 0.907 0.905 0.894
SAGE 0.910 0.903 0.900 0.901 0.904
GAT 0.873 0.874 0.867 0.848 0.855
GIN 0.342 0.315 0.333 0.304 0.306
GDC 0.901 0.896 0.890 0.883 0.881
BiGCN 0.922 0.921 0.920 0.917 0.916

Link Prediction (LP)

Table 11: Cora - NR - LP

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.850 0.817 0.795 0.792 0.785
SAGE 0.846 0.826 0.786 0.785 0.774
GAT 0.848 0.817 0.781 0.785 0.767
GIN 0.827 0.799 0.799 0.785 0.780
GDC 0.872 0.860 0.853 0.847 0.840
BiGCN 0.887 0.875 0.851 0.845 0.843

Table 12: Citeseer - NR - LP

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.812 0.773 0.754 0.739 0.726
SAGE 0.824 0.787 0.749 0.740 0.732
GAT 0.807 0.765 0.747 0.738 0.741
GIN 0.819 0.772 0.758 0.757 0.747
GDC 0.808 0.779 0.758 0.764 0.756
BiGCN 0.867 0.836 0.812 0.800 0.804

Table 13: Pubmed - NR - LP

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.838 0.767 0.745 0.743 0.741
SAGE 0.844 0.797 0.770 0.763 0.755
GAT 0.840 0.789 0.775 0.777 0.778
GIN 0.802 0.771 0.766 0.769 0.771
GDC 0.839 0.801 0.780 0.769 0.760
BiGCN 0.875 0.846 0.825 0.811 0.803

Table 14: DBLP - NR - LP

Noise rate 0.200 0.400 0.600 0.800 1.000

GCN 0.901 0.879 0.868 0.860 0.854
SAGE 0.899 0.879 0.868 0.857 0.856
GAT 0.897 0.877 0.865 0.862 0.857
GIN 0.890 0.879 0.875 0.872 0.872
GDC 0.898 0.885 0.873 0.866 0.862
BiGCN 0.914 0.902 0.895 0.890 0.884" }, { "heading": "E.1.2 NOISE LEVEL (NL)", "text": "Node Classification (NC)

Table 19: Photos - NL - NC

Noise level 0.100 0.200 0.300 0.400 0.500 0.600 0.700 0.800 0.900

GCN 0.915 0.907 0.904 0.903 0.906 0.901 0.899 0.894 0.893
SAGE 0.903 0.904 0.909 0.904 0.905 0.901 0.902 0.901 0.896
GAT 0.879 0.881 0.871 0.868 0.849 0.845 0.840 0.770 0.740
GIN 0.336 0.337 0.327 0.332 0.317 0.345 0.351 0.314 0.330
GDC 0.901 0.896 0.896 0.887 0.885 0.877 0.876 0.863 0.859
BiGCN 0.925 0.923 0.920 0.919 0.919 0.916 0.908 0.906 0.902

Link Prediction (LP)" }, { "heading": "E.1.3 STRUCTURE MISTAKES (SM)", "text": "Node Classification (NC)" }, { "heading": "E.2 ADDITIONAL IMPLEMENTATION DETAILS AND HYPER-PARAMETER SETTING", "text": "All implementations for both node classification and link prediction are based on PyTorch 1.2.0 and PyTorch Geometric. All experiments based on PyTorch are run on one NVIDIA GeForce RTX 2080 Ti GPU using CUDA. The experimental datasets are taken from the PyTorch Geometric platform. We tune the hyperparameters for each model using validation data and list the final optimal settings in the tables below. To accelerate the tedious process of hyper-parameter tuning, we set $\frac{2\lambda_1}{1+p} = \frac{2\lambda_2}{1+p} = \lambda$ and choose a different hyper-parameter $p$ for each dataset." }, { "heading": "E.2.1 NODE CLASSIFICATION", "text": "" }, { "heading": "E.2.2 LINK PREDICTION", "text": "" } ]
2020
BIGCN: A BI-DIRECTIONAL LOW-PASS FILTERING GRAPH NEURAL NETWORK
SP:4a7558123aa3ce672415a3e07eb3077d3ff92730
[ "This paper proposed a new actor-critic framework with adversary guide for deep reinforcement learning (RL), and introduced new Kullback-Leiblier divergence bonus term based on the difference between actor network and adversary network to deal with the exploration in RL. The experimental results showed the merit of this method for exploration. Some comments are provided as follows." ]
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
[ { "affiliations": [], "name": "Yannis Flet-Berliac" }, { "affiliations": [], "name": "Johan Ferret" }, { "affiliations": [], "name": "Philippe Preux" } ]
[ { "authors": [ "Zafarali Ahmed", "Nicolas Le Roux", "Mohammad Norouzi", "Dale Schuurmans" ], "title": "Understanding the impact of entropy on policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou" ], "title": "Towards principled methods for training generative adversarial networks", "venue": "In International Conference on Representation Learning,", "year": 2017 }, { "authors": [ "Dzmitry Bahdanau", "Felix Hill", "Jan Leike", "Edward Hughes", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Learning to understand goal specifications by modelling reward", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andrew Barto", "Richard Sutton", "Charles Anderson" ], "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "venue": "IEEE transactions on systems, man, and cybernetics,", "year": 1983 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Andres Campero", "Roberta Raileanu", "Heinrich Küttler", "Joshua B. 
Tenenbaum", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "Learning with amigo: Adversarially motivated intrinsic goals", "venue": "International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "In International Conference on Representation Learning,", "year": 2016 }, { "authors": [ "Karl Cobbe", "Chris Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Vlad Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jesse Farebrother", "Marlos C Machado", "Michael Bowling" ], "title": "Generalization and regularization in dqn", "venue": "arXiv preprint arXiv:1810.00123,", "year": 2018 }, { "authors": [ "Johan Ferret", "Raphaël Marinier", "Matthieu Geist", "Olivier Pietquin" ], "title": "Self-attentional credit assignment for transfer in reinforcement learning", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Johan Ferret", "Olivier Pietquin", "Matthieu Geist" ], "title": "Self-imitation advantage learning", "venue": "In International Conference on Autonomous Agents and Multiagent Systems,", "year": 2021 }, { "authors": [ "Yannis Flet-Berliac", "Philippe Preux" ], "title": "Only relevant information matters: Filtering out noisy samples to boost rl", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yannis Flet-Berliac", "Reda Ouhamma", "Odalric-Ambrym Maillard", "Philippe Preux" ], "title": "Learning value functions in deep policy gradients using residual variance", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Ian Osband", "Alexander Graves", "Vlad Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin", "Charles Blundell", "Shane Legg" ], "title": "Noisy networks for exploration", "venue": "In International Conference on Representation Learning,", "year": 2018 }, { "authors": [ "Matthieu Geist", "Bruno Scherrer", "Olivier Pietquin" ], 
"title": "A theory of regularized markov decision processes", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Audrunas Gruslys", "Will Dabney", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Marc Bellemare", "Remi Munos" ], "title": "The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Seungyul Han", "Youngchul Sung" ], "title": "Diversity actor-critic: Sample-aware entropy regularization for sample-efficient exploration", "venue": "arXiv preprint arXiv:2006.01419,", "year": 2020 }, { "authors": [ "Matthew Hausknecht", "Peter Stone" ], "title": "Deep recurrent q-learning for partially observable mdps", "venue": "In AAAI Fall Symposium on Sequential Decision Making for Intelligent Agents,", "year": 2015 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Maximilian Igl", "Kamil Ciosek", "Yingzhen Li", "Sebastian Tschiatschek", "Cheng Zhang", "Sam Devlin", "Katja Hofmann" ], "title": "Generalization in reinforcement learning with selective noise injection and information bottleneck", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Niels Justesen", "Ruben Rodriguez Torrado", "Philip Bontrager", "Ahmed Khalifa", "Julian Togelius", "Sebastian Risi" ], "title": "Illuminating generalization in deep reinforcement learning through procedural level generation", "venue": "In NeurIPS Workshop on Deep Reinforcement Learning,", "year": 2018 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "In IEEE Conference on Computational Intelligence and Games,", "year": 2016 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Representation Learning,", "year": 2015 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Jinwoo Shin", "Honglak Lee" ], "title": "Network randomization: A simple technique for generalization in deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Timothy Lillicrap", "Jonathan Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Ken Nakae", 
"Shin Ishii" ], "title": "Distributional smoothing with virtual adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Yijie Guo", "Satinder Singh", "Honglak Lee" ], "title": "Self-imitation learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "David Pfau", "Oriol Vinyals" ], "title": "Connecting generative adversarial networks and actor-critic methods", "venue": "arXiv preprint arXiv:1610.01945,", "year": 2016 }, { "authors": [ "Roberta Raileanu", "Tim Rocktäschel" ], "title": "Ride: Rewarding impact-driven exploration for procedurallygenerated environments", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Xingyou Song", "Yiding Jiang", "Yilun Du", "Behnam Neyshabur" ], "title": "Observational overfitting in reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Richard Stuart Sutton" ], "title": "Temporal Credit Assignment in Reinforcement Learning", "venue": "PhD thesis, University of Massachusetts Amherst,", "year": 1984 }, { "authors": [ "Nino Vieillard", "Tadashi Kozuno", "Bruno Scherrer", "Olivier Pietquin", "Rémi Munos", "Matthieu Geist" ], "title": "Leverage the average: an analysis of regularization in rl", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Nino Vieillard", 
"Olivier Pietquin", "Matthieu Geist" ], "title": "Munchausen reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev", "Junhyuk Oh", "Dan Horgan", "Manuel Kroiss", "Ivo Danihelka", "Aja Huang", "Laurent Sifre", "Trevor Cai", "John Agapiou", "Max Jaderberg", "David Silver" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": "Nature, 575,", "year": 2019 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Remi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience replay", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yuhuai Wu", "Elman Mansimov", "Roger B Grosse", "Shun Liao", "Jimmy Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Amy Zhang", "Nicolas Ballas", "Joelle Pineau" ], "title": "A dissection of overfitting and generalization in continuous reinforcement learning", "venue": "arXiv preprint arXiv:1806.07937,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Oriol Vinyals", "Remi Munos", "Samy Bengio" ], "title": "A study on overfitting in deep reinforcement learning", "venue": "arXiv preprint arXiv:1804.06893,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Research in deep reinforcement learning (RL) has proven to be successful across a wide range of problems (Silver et al., 2014; Schulman et al., 2016; Lillicrap et al., 2016; Mnih et al., 2016). Nevertheless, generalization and exploration in RL still represent key challenges that leave most current methods ineffective. First, a battery of recent studies (Farebrother et al., 2018; Zhang et al., 2018a; Song et al., 2020; Cobbe et al., 2020) indicates that current RL methods fail to generalize correctly even when agents have been trained in a diverse set of environments. Second, exploration has been extensively studied in RL; however, most hard-exploration problems use the same environment for training and evaluation. Hence, since a well-designed exploration strategy should maximize the information received from a trajectory about an environment, the exploration capabilities may not be appropriately assessed if that information is memorized. In this line of research, we choose to study the exploration capabilities of our method and its ability to generalize to new scenarios. Our evaluation domains will, therefore, be tasks with sparse reward in procedurally-generated environments.\nIn this work, we propose Adversarially Guided Actor-Critic (AGAC), which reconsiders the actor-critic framework by introducing a third protagonist: the adversary. Its role is to predict the actor’s actions correctly. Meanwhile, the actor must not only find the optimal actions to maximize the sum of expected returns, but also counteract the predictions of the adversary. This formulation is lightly inspired by adversarial methods, specifically generative adversarial networks (GANs) (Goodfellow et al., 2014). Such a link between GANs and actor-critic methods has been formalized by Pfau & Vinyals (2016); however, in the context of a third protagonist, we draw a different analogy. The adversary can be interpreted as playing the role of a discriminator that must predict the actions of the actor, and the actor can be considered as playing the role of a generator that behaves to deceive the predictions of the adversary. This approach has the advantage, as with GANs, that the optimization procedure generates a diversity of meaningful data, corresponding to sequences of actions in AGAC.\n∗Equal contribution.\nThis paper analyses and explores how AGAC explicitly drives diversity in the behaviors of the agent while remaining reward-focused, and to which extent this approach allows to adapt to the evolving state space of procedurally-generated environments where the map is constructed differently with each new episode. Moreover, because stability is a legitimate concern since specific instances of adversarial networks were shown to be prone to hyperparameter sensitivity issues (Arjovsky & Bottou, 2017), we also examine this aspect in our experiments.\nThe contributions of this work are as follow: (i) we propose a novel actor-critic formulation inspired from adversarial learning (AGAC), (ii) we analyse empirically AGAC on key reinforcement learning aspects such as diversity, exploration and stability, (iii) we demonstrate significant gains in performance on several sparse-reward hard-exploration tasks including procedurally-generated tasks." }, { "heading": "2 RELATED WORK", "text": "Actor-critic methods (Barto et al., 1983; Sutton, 1984) have been extended to the deep learning setting by Mnih et al. 
(2016), who combined deep neural networks and multiple distributed actors with an actor-critic setting, with strong results on Atari. Since then, many additions have been proposed, be it architectural improvements (Vinyals et al., 2019), better advantage or value estimation (Schulman et al., 2016; Flet-Berliac et al., 2021), or the incorporation of off-policy elements (Wang et al., 2017; Oh et al., 2018; Flet-Berliac & Preux, 2020). Regularization was shown to improve actor-critic methods, either by enforcing trust regions (Schulman et al., 2015; 2017; Wu et al., 2017) or by correcting for off-policiness (Munos et al., 2016; Gruslys et al., 2018), and recent works analyzed its impact from a theoretical standpoint (Geist et al., 2019; Ahmed et al., 2019; Vieillard et al., 2020a;b). Related to our work, Han & Sung (2020) use the entropy of the mixture between the policy induced from a replay buffer and the current policy as a regularizer. To the best of our knowledge, none of these methods explored the use of an adversarial objective to drive exploration.\nAlthough introduced in supervised learning, adversarial learning (Goodfellow et al., 2015; Miyato et al., 2016; Kurakin et al., 2017) has been leveraged in several RL works. Ho & Ermon (2016) propose an imitation learning method that uses a discriminator whose task is to distinguish between expert trajectories and those of the agent, while the agent tries to match expert behavior to fool the discriminator. Bahdanau et al. (2019) use a discriminator to distinguish goal states from non-goal states based on a textual instruction, and use the resulting model as a reward function. Florensa et al. (2018) use a GAN to produce sub-goals at the right level of difficulty for the current agent, inducing a form of curriculum. Additionally, Pfau & Vinyals (2016) provide a parallel between GANs and the actor-critic framework.\nWhile exploration is driven in part by the core RL algorithms (Fortunato et al., 2018; Han & Sung, 2020; Ferret et al., 2021), it is often necessary to resort to exploration-specific techniques. For instance, intrinsic motivation encourages exploratory behavior from the agent. Some works use state-visitation counts or pseudo-counts to promote exhaustive exploration (Bellemare et al., 2016a), while others use curiosity rewards, expressed as the magnitude of the agent's prediction error, to push it towards unfamiliar areas of the state space (Burda et al., 2018). Ecoffet et al. (2019) propose a technique akin to tree traversal to explore while learning to come back to promising areas. Eysenbach et al. (2018) show that encouraging diversity helps with exploration, even in the absence of reward.\nLast but not least, generalization is a key challenge in RL. Zhang et al. (2018b) showed that, even when the environment is not deterministic, agents can overfit to their training distribution and that it is difficult to distinguish agents likely to generalize to new environments from those that will not. In the same vein, recent work has advocated using procedurally-generated environments, in which a new instance of the environment is sampled when a new episode starts, to better assess generalization capabilities (Justesen et al., 2018; Cobbe et al., 2020). Finally, methods based on network randomization (Igl et al., 2019), noise injection (Lee et al., 2020), and credit assignment (Ferret et al., 2020) have been proposed to reduce the generalization gap for RL agents.
}, { "heading": "3 BACKGROUND AND NOTATIONS", "text": "We place ourselves in the Markov Decision Processes (Puterman, 1994) framework. A Markov Decision Process (MDP) is a tuple M = {S,A,P, R, γ}, where S is the state space, A is the action\nspace, P is the transition kernel,R is the bounded reward function and γ ∈ [0, 1) is the discount factor. Let π denote a stochastic policy mapping states to distributions over actions. We place ourselves in the infinite-horizon setting, i.e., we seek a policy that optimizes J(π) = Eπ[ ∑∞ t=0 γ\ntr (st, at)]. The value of a state is the quantity V π(s) = Eπ[ ∑∞ t=0 γ\ntr (st, at) |s0 = s] and the value of a state-action pair Qπ(s, a) of performing action a in state s and then following policy π is defined as: Qπ(s, a) = Eπ [ ∑∞ t=0 γ\ntr (st, at) |s0 = s, a0 = a]. The advantage function, which quantifies how an action a is better than the average action in state s, is Aπ(s, a) = Qπ(s, a)− V π(s). Finally, the entropyHπ of a policy is calculated as: Hπ(s) = Eπ(·|s) [− log π(·|s)] .\nActor-Critic and Deep Policy Gradients. An actor-critic algorithm is composed of two main components: a policy and a value predictor. In deep RL, both the policy and the value function are obtained via parametric estimators; we denote θ and φ their respective parameters. The policy is updated via policy gradient, while the value is usually updated via temporal difference or Monte Carlo rollouts. In practice, for a sequence of transitions {st, at, rt, st+1}t∈[0,N ], we use the following policy gradient loss (including the commonly used entropic penalty):\nLPG = − 1\nN t+N∑ t′=t (At′ log π (at′ |st′ , θ) + αHπ(st′ , θ)),\nwhere α is the entropy coefficient and At is the generalized advantage estimator (Schulman et al., 2016) defined as: At = ∑t+N t′=t (γλ)\nt′−t(rt′ + γVφold(st′+1) − Vφold(st′)), with λ a fixed hyperparameter and Vφold the value function estimator at the previous optimization iteration. To estimate the value function, we solve the non-linear regression problem minimizeφ ∑t+N t′=t (Vφ(st′)− V̂t′)2 where V̂t = At + Vφold(st′)." }, { "heading": "4 ADVERSARIALLY GUIDED ACTOR-CRITIC", "text": "To foster diversified behavior in its trajectories, AGAC introduces a third protagonist to the actor-critic framework: the adversary. The role of the adversary is to accurately predict the actor’s actions, by minimizing the discrepancy between its action distribution πadv and the distribution induced by the policy π. Meanwhile, in addition to finding the optimal actions to maximize the sum of expected returns, the actor must also counteract the adversary’s predictions by maximizing the discrepancy between π and πadv (see Appendix B for an illustration). This discrepancy, used as a form of exploration bonus, is defined as the difference of action log-probabilities (see Eq. (1)), whose expectation is the Kullback–Leibler divergence:\nDKL(π(·|s)‖πadv(·|s)) = Eπ(·|s) [log π(·|s)− log πadv(·|s)] .\nFormally, for each state-action pair (st, at) in a trajectory, an action-dependent bonus log π(at|st)− log πadv(at|st) is added to the advantage. In addition, the value target of the critic is modified to include the action-independent equivalent, which is the KL-divergence DKL(π(·|st)‖πadv(·|st)). We discuss the role of these mirrored terms below, and the implications of AGAC’s modified objective from a more theoretical standpoint in the next section. In addition to the parameters θ (resp. θold the parameter of the policy at the previous iteration) and φ defined above (resp. 
" }, { "heading": "4 ADVERSARIALLY GUIDED ACTOR-CRITIC", "text": "To foster diversified behavior in its trajectories, AGAC introduces a third protagonist to the actor-critic framework: the adversary. The role of the adversary is to accurately predict the actor's actions, by minimizing the discrepancy between its action distribution $\pi_{adv}$ and the distribution induced by the policy $\pi$. Meanwhile, in addition to finding the optimal actions to maximize the sum of expected returns, the actor must also counteract the adversary's predictions by maximizing the discrepancy between $\pi$ and $\pi_{adv}$ (see Appendix B for an illustration). This discrepancy, used as a form of exploration bonus, is defined as the difference of action log-probabilities (see Eq. (1)), whose expectation is the Kullback-Leibler divergence:

$$D_{KL}(\pi(\cdot|s) \,\|\, \pi_{adv}(\cdot|s)) = \mathbb{E}_{\pi(\cdot|s)}[\log \pi(\cdot|s) - \log \pi_{adv}(\cdot|s)].$$

Formally, for each state-action pair $(s_t, a_t)$ in a trajectory, an action-dependent bonus $\log \pi(a_t|s_t) - \log \pi_{adv}(a_t|s_t)$ is added to the advantage. In addition, the value target of the critic is modified to include the action-independent equivalent, which is the KL-divergence $D_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))$. We discuss the role of these mirrored terms below, and the implications of AGAC's modified objective from a more theoretical standpoint in the next section. In addition to the parameters $\theta$ (resp. $\theta_{\text{old}}$, the parameters of the policy at the previous iteration) and $\phi$ defined above (resp. $\phi_{\text{old}}$, those of the critic), we denote $\psi$ (resp. $\psi_{\text{old}}$) those of the adversary.\nAGAC minimizes the following loss:

$$\mathcal{L}_{AGAC} = \mathcal{L}_{PG} + \beta_V \mathcal{L}_V + \beta_{adv} \mathcal{L}_{adv}.$$

In the new objective $\mathcal{L}_{PG} = -\frac{1}{N}\sum_{t=0}^{N}\big(A^{AGAC}_t \log \pi(a_t|s_t, \theta) + \alpha \mathcal{H}^\pi(s_t, \theta)\big)$, AGAC modifies $A_t$ as:

$$A^{AGAC}_t = A_t + c\,\big(\log \pi(a_t|s_t, \theta_{\text{old}}) - \log \pi_{adv}(a_t|s_t, \psi_{\text{old}})\big), \quad (1)$$

with $c$ a varying hyper-parameter that controls the dependence on the action log-probability difference. To encourage exploration without preventing asymptotic stability, $c$ is linearly annealed during the course of training. $\mathcal{L}_V$ is the objective function of the critic, defined as:

$$\mathcal{L}_V = \frac{1}{N}\sum_{t=0}^{N}\Big(V_\phi(s_t) - \big(\hat{V}_t + c\, D_{KL}(\pi(\cdot|s_t, \theta_{\text{old}}) \,\|\, \pi_{adv}(\cdot|s_t, \psi_{\text{old}}))\big)\Big)^2. \quad (2)$$

Finally, $\mathcal{L}_{adv}$ is the objective function of the adversary:

$$\mathcal{L}_{adv} = \frac{1}{N}\sum_{t=0}^{N} D_{KL}(\pi(\cdot|s_t, \theta_{\text{old}}) \,\|\, \pi_{adv}(\cdot|s_t, \psi)). \quad (3)$$

Eqs. (1), (2) and (3) are the three equations that our method modifies in the traditional actor-critic framework (the terms involving $\pi_{adv}$ are the additions). The terms $\beta_V$ and $\beta_{adv}$ are fixed hyper-parameters.\nUnder the proposed actor-critic formulation, the probability of sampling an action is increased if the modified advantage is positive, i.e. (i) the corresponding return is larger than the predicted value and/or (ii) the action log-probability difference is large. More precisely, our method favors transitions whose actions were less accurately predicted than the average action, i.e. $\log \pi(a|s) - \log \pi_{adv}(a|s) \ge D_{KL}(\pi(\cdot|s) \,\|\, \pi_{adv}(\cdot|s))$. This is particularly visible for $\lambda \to 1$, in which case the generalized advantage is $A_t = G_t - V_{\phi_{\text{old}}}(s_t)$, resulting in the appearance of both aforementioned mirrored terms in the modified advantage:

$$A^{AGAC}_t = G_t - \hat{V}^{\phi_{\text{old}}}_t + c\,\big(\log \pi(a_t|s_t) - \log \pi_{adv}(a_t|s_t) - \hat{D}^{\phi_{\text{old}}}_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))\big),$$

with $G_t$ the observed return, $\hat{V}^{\phi_{\text{old}}}_t$ the estimated return and $\hat{D}^{\phi_{\text{old}}}_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))$ the estimated KL-divergence (the estimated components of $V_{\phi_{\text{old}}}(s_t)$ from Eq. (2)).\nTo avoid instability, in practice the adversary is a separate estimator, updated with a smaller learning rate than the actor. This way, it represents a delayed and steadier version of the actor's policy, which prevents the agent from having to constantly adapt or from focusing solely on fooling the adversary.
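To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch (ours; names and shapes are illustrative assumptions) that computes the modified advantage, the critic-target bonus, and the KL-divergence over discrete action distributions.

```python
import numpy as np

def kl_discrete(p, q, eps=1e-8):
    """KL(p || q) for batched discrete action distributions of shape [B, A]."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def agac_targets(adv, logp, logp_adv, kl, c):
    """Modified advantage (Eq. 1) and critic-target bonus (Eq. 2).

    adv      : GAE advantages A_t
    logp     : log pi(a_t | s_t) under the (old) actor
    logp_adv : log pi_adv(a_t | s_t) under the (old) adversary
    kl       : KL(pi(.|s_t) || pi_adv(.|s_t)) for each visited state
    c        : bonus coefficient, linearly annealed during training
    """
    adv_agac = adv + c * (logp - logp_adv)   # Eq. (1)
    v_target_bonus = c * kl                  # added to V_hat_t inside Eq. (2)
    return adv_agac, v_target_bonus
```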
" }, { "heading": "4.1 BUILDING MOTIVATION", "text": "In the following, we provide an interpretation of AGAC by studying the dynamics of attraction and repulsion between the actor and the adversary. To simplify, we study the equivalent of AGAC in a policy iteration (PI) scheme. PI being the dynamic programming scheme underlying the standard actor-critic, we have reasons to think that some of our findings translate to the original AGAC algorithm. In PI, the quantity of interest is the action-value, which AGAC would modify as

$$Q^{AGAC}_{\pi_k} = Q_{\pi_k} + c\,(\log \pi_k - \log \pi_{adv}),$$

with $\pi_k$ the policy at iteration $k$. Incorporating the entropic penalty, the new policy $\pi_{k+1}$ verifies:

$$\pi_{k+1} = \arg\max_\pi J_{PI}(\pi) = \arg\max_\pi \mathbb{E}_s \mathbb{E}_{a \sim \pi(\cdot|s)}\big[Q^{AGAC}_{\pi_k}(s, a) - \alpha \log \pi(a|s)\big].$$

We can rewrite this objective:

$$J_{PI}(\pi) = \mathbb{E}_s \mathbb{E}_{a \sim \pi(\cdot|s)}\big[Q_{\pi_k}(s, a) + c\,(\log \pi_k(a|s) - \log \pi_{adv}(a|s)) - \alpha \log \pi(a|s)\big]$$
$$= \mathbb{E}_s \mathbb{E}_{a \sim \pi(\cdot|s)}\big[Q_{\pi_k}(s, a) + c\,(\log \pi_k(a|s) - \log \pi(a|s) + \log \pi(a|s) - \log \pi_{adv}(a|s)) - \alpha \log \pi(a|s)\big]$$
$$= \mathbb{E}_s\Big[\mathbb{E}_{a \sim \pi(\cdot|s)}[Q_{\pi_k}(s, a)] \underbrace{-\, c\, D_{KL}(\pi(\cdot|s)\,\|\,\pi_k(\cdot|s))}_{\pi_k \text{ is attractive}} \underbrace{+\, c\, D_{KL}(\pi(\cdot|s)\,\|\,\pi_{adv}(\cdot|s))}_{\pi_{adv} \text{ is repulsive}} \underbrace{+\, \alpha \mathcal{H}(\pi(\cdot|s))}_{\text{enforces stochastic policies}}\Big].$$

Thus, in the PI scheme, AGAC finds a policy that maximizes Q-values, while at the same time remaining close to the current policy and far from a mixture of the previous policies (i.e., $\pi_{k-1}, \pi_{k-2}, \pi_{k-3}, \ldots$). Note that we experimentally observe (see Section 5.3) that our method performs better with a smaller learning rate for the adversarial network than for the other networks, which could imply that a stable repulsive term is beneficial.\nThis optimization problem is strongly concave in $\pi$ (thanks to the entropy term) and is, state-wise, a Legendre-Fenchel transform. Its solution is given by (see Appendix E for the full derivation):

$$\pi_{k+1} \propto \Big(\frac{\pi_k}{\pi_{adv}}\Big)^{\frac{c}{\alpha}} \exp\frac{Q_{\pi_k}}{\alpha}.$$

This result gives us some insight into the behavior of the objective function. Notably, in our example, if $\pi_{adv}$ is fixed and $c = \alpha$, we recover a KL-regularized PI scheme (Geist et al., 2019) with the modified reward $r - c \log \pi_{adv}$." }, { "heading": "4.2 IMPLEMENTATION", "text": "In all of the experiments, we use PPO (Schulman et al., 2017) as the base algorithm and build on it to incorporate our method. Hence,

$$\mathcal{L}_{PG} = -\frac{1}{N} \sum_{t'=t}^{t+N} \min\Big(\frac{\pi(a_{t'}|s_{t'}, \theta)}{\pi(a_{t'}|s_{t'}, \theta_{\text{old}})} A^{AGAC}_{t'},\ \mathrm{clip}\Big(\frac{\pi(a_{t'}|s_{t'}, \theta)}{\pi(a_{t'}|s_{t'}, \theta_{\text{old}})}, 1-\epsilon, 1+\epsilon\Big) A^{AGAC}_{t'}\Big),$$

with $A^{AGAC}_{t'}$ given in Eq. (1), $N$ the temporal length considered for one update of parameters and $\epsilon$ the clipping parameter. Similar to RIDE (Raileanu & Rocktäschel, 2019), we also discount PPO by episodic state visitation counts, except for VizDoom (cf. Section 5.1). The actor, critic and adversary use the convolutional architecture of the DQN Nature paper (Mnih et al., 2015) with different hidden sizes (see Appendix D for architecture details). The three neural networks are optimized using Adam (Kingma & Ba, 2015). Our method does not use RNNs in its architecture; instead, in all our experiments, we use frame stacking. Indeed, Hausknecht & Stone (2015) interestingly demonstrate that although recurrence is a reliable method for processing state observations, it does not confer any systematic advantage over stacking observations in the input layer of a CNN. Note that the parameters are not shared between the policy, the critic and the adversary, and that we did not observe any noticeable difference in computational complexity when using AGAC compared to PPO. We direct the reader to Appendix C for a list of hyperparameters; in particular, the $c$ coefficient of the adversarial bonus is linearly annealed.\nAt each training step, we perform a stochastic optimization step to minimize $\mathcal{L}_{AGAC}$ using stop-gradient:

$$\theta \leftarrow \mathrm{Adam}(\theta, \nabla_\theta \mathcal{L}_{PG}, \eta_1), \quad \phi \leftarrow \mathrm{Adam}(\phi, \nabla_\phi \mathcal{L}_V, \eta_1), \quad \psi \leftarrow \mathrm{Adam}(\psi, \nabla_\psi \mathcal{L}_{adv}, \eta_2).$$
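A minimal PyTorch sketch (ours) of this update step, assuming toy MLPs in place of the paper's convolutional networks, discrete actions, illustrative learning rates with $\eta_2 < \eta_1$ (consistent with the smaller adversary learning rate discussed in Section 4.1), and no minibatching or multiple epochs.

```python
import torch
from torch import nn, optim
from torch.distributions import Categorical, kl_divergence

obs_dim, n_actions = 16, 4  # toy sizes for illustration
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
adversary = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

opt_actor = optim.Adam(actor.parameters(), lr=3e-4)    # eta_1
opt_critic = optim.Adam(critic.parameters(), lr=3e-4)  # eta_1
opt_adv = optim.Adam(adversary.parameters(), lr=3e-5)  # eta_2 < eta_1

def agac_update(obs, actions, adv_agac, v_targets, old_logp,
                clip_eps=0.2, alpha=0.01):
    """One stochastic optimization step on L_PG, L_V and L_adv (stop-gradient)."""
    # PPO-clipped policy loss with the AGAC-modified advantage and entropy bonus.
    pi = Categorical(logits=actor(obs))
    ratio = torch.exp(pi.log_prob(actions) - old_logp)
    surr = torch.min(ratio * adv_agac,
                     torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv_agac)
    loss_pg = -(surr + alpha * pi.entropy()).mean()
    opt_actor.zero_grad(); loss_pg.backward(); opt_actor.step()

    # Critic regression towards the KL-augmented targets of Eq. (2).
    loss_v = ((critic(obs).squeeze(-1) - v_targets) ** 2).mean()
    opt_critic.zero_grad(); loss_v.backward(); opt_critic.step()

    # Adversary matches a detached copy of the actor's distribution, Eq. (3).
    with torch.no_grad():
        p = Categorical(logits=actor(obs))
    q = Categorical(logits=adversary(obs))
    loss_adv = kl_divergence(p, q).mean()
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()
```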
without episodic state visitation count) is sufficient to outperform other methods in VizDoom, a sparse-reward task with high-dimensional observations, (ii) whether AGAC succeeds in partially-observable and procedurally-generated environments with high sparsity in the rewards, compared to other methods, (iii) how well AGAC is capable of exploring in environments without extrinsic reward, (iv) the training stability of our method. In all of the experiments, lines are average performances and shaded areas represent one standard deviation. The code for our method is released at github.com/yfletberliac/adversarially-guided-actor-critic.\nEnvironments. To carefully evaluate the performance of our method, its ability to develop robust exploration strategies and its generalization to unseen states, we choose tasks that have been used in prior work: tasks with high-dimensional observations, sparse reward and procedurally-generated environments. In VizDoom (Kempka et al., 2016), the agent must learn to move along corridors and through rooms without any reward feedback from the 3-D environment. The MiniGrid environments (Chevalier-Boisvert et al., 2018) are a set of challenging partially-observable and sparse-reward gridworlds. In this type of procedurally-generated environment, memorization is impossible due to the huge size of the state space, so the agent must learn to generalize across the different layouts of the environment. Each gridworld has different characteristics: in the MultiRoom tasks, the agent is placed in the first room and should reach a goal placed in the most distant room. In the KeyCorridor tasks, the agent must navigate to pick up an object placed in a room locked by a door whose key is in another room. Finally, in the ObstructedMaze tasks, the agent must pick up a box that is placed in a corner of a 3x3 maze in which the doors are also locked, the keys are hidden in boxes and balls obstruct the doors. All considered environments (see Fig. 1 for some examples) are available as part of OpenAI Gym (Brockman et al., 2016).\nBaselines. For a fair assessment of our method, we compare to some of the most prominent methods specialized in hard-exploration tasks: RIDE (Raileanu & Rocktäschel, 2019), based on an intrinsic reward associated with the magnitude of change between two consecutive state representations and state visitation; Count, i.e. Count-Based Exploration (Bellemare et al., 2016b), which we couple with IMPALA (Espeholt et al., 2018); RND (Burda et al., 2018), in which an exploration bonus is positively correlated with the error of predicting features from the observations; and ICM (Pathak et al., 2017), where a module only predicts the changes in the environment that are produced by the actions of the agent. Finally, we compare to the most recent and best-performing method at the time of writing in procedurally-generated environments: AMIGo (Campero et al., 2021), in which a goal-generating teacher provides count-based intrinsic goals." }, { "heading": "5.1 ADVERSARIALLY-BASED EXPLORATION (NO EPISODIC COUNT)", "text": "In this section, we assess the benefits of using an adversarially-based exploration bonus and examine how AGAC performs without the help of count-based exploration. In order to provide a comparison to state-of-the-art methods, we choose VizDoom, a hard-exploration problem used in prior work. In this game, the map consists of 9 rooms connected by corridors where 270 steps separate the initial position of the agent and the goal under an optimal policy. 
Episodes are terminated either when the agent finds the goal or if the episode exceeds 2100 timesteps. Importantly, while other algorithms (Raileanu & Rocktäschel, 2019; Campero et al., 2021) benefit from count-based exploration, this study has been conducted with our method not benefiting from episodic count whatsoever. Results in Table 1 indicate that AGAC clearly outperforms other methods in sample-efficiency. Only ICM and RIDE succeed in matching the score of AGAC, and only with about twice as many transitions (∼3M vs. 6M). Interestingly, AMIGo performs similarly to Count and RND. We find this result surprising because AMIGo has proven to perform well in the MiniGrid environments. Nevertheless, it appears that works concurrent to ours experienced similar issues with the accompanying implementation (see the AMIGo implementation GitHub issue). The results of AGAC support the capabilities of the adversarial bonus and show that it can, on its own, achieve significant gains in performance. However, the VizDoom task is not procedurally-generated; hence we have not evaluated the generalization to new states yet. In the following section, we use MiniGrid to investigate this." }, { "heading": "5.2 HARD-EXPLORATION TASKS WITH PARTIALLY-OBSERVABLE ENVIRONMENTS", "text": "We now evaluate our method on multiple hard-exploration procedurally-generated tasks from MiniGrid. Details about MiniGrid can be found in Appendix C.1. Fig. 2 indicates that AGAC significantly outperforms other methods on these tasks in sample-efficiency and performance. AGAC also outperforms the current state-of-the-art method, AMIGo, despite the fact that AMIGo uses the fully-observable version of MiniGrid. Note that we find the same poor performance results when training AMIGo in MiniGrid, similar to the VizDoom results. For completeness, we also report in Table 2 of Appendix A.1 the performance results with the scores reported in the original papers Raileanu & Rocktäschel (2019) and Campero et al. (2021). We draw similar conclusions: AGAC clearly outperforms the state-of-the-art RIDE, AMIGo, Count, RND and ICM.\nIn all the considered tasks, the agent must learn to generalize across a very large state space because the layouts are generated procedurally. We consider three main arguments to explain why our method is successful: (i) our method makes use of partial observations: in this context, the adversary has a harder time predicting the actor’s actions; nevertheless, the mistakes of the former benefit the latter in the form of an exploration bonus, which pushes the agent to explore further in order to deceive the adversary, (ii) the exploration bonus (i.e. intrinsic reward) does not dissipate compared to most other methods, as observed in Fig. 9 in Appendix A.4, (iii) our method does not make assumptions about the environment dynamics (e.g., changes in the environment produced by an action as in Raileanu & Rocktäschel (2019)) since this can hinder learning when the space of state changes induced by an action is too large (such as the action of moving a block in ObstructedMaze).\nIn Appendix A.3, we also include experiments in two environments with extremely sparse reward signals: KeyCorridorS8R3 and ObstructedMazeFull. Despite the challenge, AGAC still manages to find rewards and can perform well by taking advantage of the diversified behaviour induced by our method. To the best of our knowledge, no other method has ever succeeded in performing well (> 0 average return) in those tasks. 
We think that given more computing time, AGAC’s score could go higher." }, { "heading": "5.3 TRAINING STABILITY", "text": "Here we want to analyse the stability of the method when changing hyperparameters. The most important parameters in AGAC are $c$, the coefficient for the adversarial bonus, and the learning-rate ratio $\nu = \eta_2/\eta_1$. We choose KeyCorridorS4R3 as the evaluation task because, among all the tasks considered, its difficulty is at a medium level. Fig. 3 shows the learning curves. For readability, we plot the average return only; the standard deviation is roughly the same for all curves. We observe that deviating from the hyperparameter values found using grid search results in slower training. Moreover, although reasonable, $c$ appears to have more sensitivity than $\nu$." }, { "heading": "(Figure legend: Random, AGAC, Count, RIDE, RND)", "text": "" }, { "heading": "5.4 EXPLORATION IN REWARD-FREE ENVIRONMENT", "text": "To better understand the effectiveness of our method and inspect how the agent collects rewards that would not otherwise be achievable by simple exploration heuristics or other methods, we analyze the performance of AGAC in another (procedurally-generated) challenging environment, MultiRoomN10S6, when there is no reward signal, i.e. no extrinsic reward. Beyond the good performance of our method when extrinsic rewards are given to the agent, Fig. 4 indicates that the exploration induced by our method makes the agent succeed in a significant proportion of the episodes: in the configuration “NoExtrinsicReward” the reward signal is not given (the goal is invisible to the agent) and the performance of AGAC stabilizes around an average return of ∼0.15. Since the return of an episode is either 0 or 1 (depending on whether the agent reached the goal state or not), and because this value is aggregated across several episodes, the results indicate that reward-free AGAC succeeds in ∼15% of the tasks. Comparatively, random agents have a zero average return. This poor performance is in accordance with the results in Raileanu & Rocktäschel (2019) and reflects the complexity of the task: in order to go from one room to another, an agent must perform a specific action to open a door and cross it within the time limit of 200 timesteps. In the following, we visually investigate how different methods explore the environments." }, { "heading": "5.5 VISUALIZING COVERAGE AND DIVERSITY", "text": "In this section, we first investigate how different methods explore environments without being guided by extrinsic rewards (the green goal is invisible to the agent) on both procedurally-generated and singleton environments. In singleton environments, an agent has to solve the same task in the same environment/maze in every episode. Fig. 5 shows the state visitation heatmaps (darker areas correspond to more visits) after training for 10M timesteps. We observe that most of the methods explore inefficiently in a singleton environment and that only RIDE succeeds in reaching the fifth room while AGAC reaches the last (tenth) room. After training the agents in procedurally-generated environments, the methods explore even less efficiently while AGAC succeeds in exploring all rooms.\nWe now qualitatively study the diversity of an agent’s behavior when trained with AGAC. Fig. 6 presents the state visitation heatmaps of the last ten episodes for an agent trained in procedurally-generated environments in the MultiRoomN10S6 task without extrinsic reward. 
The heatmaps correspond to the behavior of the resulting policy, which is still learning from the AGAC objective. Looking at the figure, we can see that the strategies vary at each update with, for example, back-and-forth and back-to-start behaviors. Although there is no extrinsic reward, the strategies seem to diversify from one update to the next. Finally, Fig. 7 in Appendix A.2 shows the state visitation heatmaps in a different configuration: when the agent has been trained on a singleton environment in the MultiRoomN10S6 task without extrinsic reward. Same as previously, the agent is updated between each episode. Looking at the figure, we can make essentially the same observations as previously, with a noteworthy behavior in the fourth heatmap of the bottom row where it appears the agent went to the fourth room to remain inside it. Those episodes indicate that, although the agent sees the same environment repeatedly, the successive adversarial updates force it to continuously adapt its behavior and try new strategies." }, { "heading": "6 DISCUSSION", "text": "This paper introduced AGAC, a modification to the traditional actor-critic framework: an adversary network is added as a third protagonist. The mechanics of AGAC have been discussed from a policy iteration point of view, and we provided theoretical insight into the inner workings of the proposed algorithm: the adversary forces the agent to remain close to the current policy while moving away from the previous ones. In a nutshell, the influence of the adversary makes the actor conservatively diversified.\nIn the experimental study, we have evaluated the adversarially-based bonus in VizDoom and empirically demonstrated its effectiveness and superiority compared to other relevant methods (some benefiting from count-based exploration). Then, we have conducted several performance experiments using AGAC and have shown a significant performance improvement over some of the most popular exploration methods (RIDE, AMIGo, Count, RND and ICM) on a set of various challenging tasks from MiniGrid. These procedurally-generated environments have served another purpose which is to validate the capacity of our method to generalize to unseen scenarios. In addition, the training stability of our method has been studied, showing a greater but acceptable sensitivity for c, the adversarial bonus coefficient. Finally, we have investigated the exploration capabilities of AGAC in a reward-free setting where the agent demonstrated exhaustive exploration through various strategic choices, confirming that the adversary successfully drives diversity in the behavior of the actor." }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "A.1 MINIGRID PERFORMANCE", "text": "In this section, we report the final performance of all methods considered in the MiniGrid experiments of Fig. 2 with the scores reported in Raileanu & Rocktäschel (2019) and Campero et al. (2021). All methods have a budget of 200M frames." }, { "heading": "A.2 STATE VISITATION HEATMAPS IN SINGLETON ENVIRONMENT WITH NO EXTRINSIC REWARD", "text": "In this section, we provide additional state visitation heatmaps. The agent has been trained on a singleton environment from the MultiRoomN10S6 task without extrinsic reward. The last ten episodes of the training suggest that although the agent experiences the same maze over and over again, the updates force it to change behavior and try new strategies."
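(For reference, the state-visitation heatmaps of Figs. 5-7 can be reproduced with a simple counting scheme; the sketch below is our illustration, not the authors' plotting code, and the trajectory format is an assumption.)
```python
import numpy as np

def visitation_heatmap(trajectories, grid_shape):
    """Accumulate agent (x, y) positions over episodes into a 2D grid.

    `trajectories` is assumed to be an iterable of episodes, each a list
    of (x, y) integer positions on the grid.
    """
    counts = np.zeros(grid_shape, dtype=np.float64)
    for episode in trajectories:
        for x, y in episode:
            counts[y, x] += 1.0
    # Log-scaling keeps rarely visited cells visible next to hot spots
    # (darker areas then correspond to more visits, as in the figures).
    log_counts = np.log1p(counts)
    return log_counts / max(log_counts.max(), 1e-8)
```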
}, { "heading": "A.3 (EXTREMELY) HARD-EXPLORATION TASKS WITH PARTIALLY-OBSERVABLE ENVIRONMENTS", "text": "In this section, we include additional experiments on one of the hardest tasks available in MiniGrid. The first is KeyCorridorS8R3, where the size of the rooms has been increased. In it, the agent has to pick up an object which is behind a locked door: the key is hidden in another room and the agent has to explore the environment to find it. The second, ObstructedMazeFull, is similar to ObstructedMaze4Q, where the agent has to pick up a box which is placed in one of the four corners of a 3x3 maze: the doors are locked, the keys are hidden in boxes and the doors are obstructed by balls. In those difficult tasks, only our method succeeds in exploring well enough to find rewards." }, { "heading": "A.4 MEAN INTRINSIC REWARD", "text": "In this section, we report the mean intrinsic reward computed for an agent trained in MultiRoomN12S10 to conveniently compare our results with that of Raileanu & Rocktäschel (2019). We observe in Fig. 9 that the intrinsic reward is consistently larger for our method and that, contrary to other methods, does not converge to low values. Please note that, in all considered experiments, the adversarial bonus coefficient c in Eq. 2 and 3 is linearly annealed throughout the training since it is mainly useful at the beginning of learning when the rewards have not yet been met. In the long run, this coefficient may prevent the agent from solving the task by forcing it to always favour exploration over exploitation.\nB ILLUSTRATION OF AGAC" }, { "heading": "C EXPERIMENTAL DETAILS AND HYPERPARAMETERS", "text": "" }, { "heading": "C.1 MINIGRID SETUP", "text": "Here, we describe in more details the experimental setup we used in our MiniGrid experiments.\nThere are several different MiniGrid scenarios that we consider in this paper. MultiRoom corresponds to a set of navigation tasks, where the goal is to go from a starting state to a goal state. The notation MultiRoom-N2S4 means that there are 2 rooms in total, and that each room has a maximal side of 4. In order to go from one room to another, the agent must perform a specific action to open a door. Episodes are terminated with zero reward after a maximum of 20 × N steps with N the number of rooms. In KeyCorridor, the agent also has to pick up a key, since the goal state is behind a door that only lets it in with the key. The notation KeyCorridor-S3R4 means that there are 4 side corridors, leading to rooms that have a maximal side of 3. The maximum number of steps is 270. In ObstructedMaze, keys are hidden in boxes, and doors are obstructed by balls the agent has to get out of its way. The notation ObstructedMaze-1Dl means that there are two connected rooms of maximal side 6 and 1 door (versus a 3x3 matrix and 2 doors if the leading characters are 2D), adding h as a suffix places keys in boxes, and adding b as a suffix adds balls in front of doors. Using Q as a suffix is equivalent to using lhb (that is, both hiding keys and placing balls to be moved). The maximum number of steps is 576. ObstructedMazeFull is the hardest configuration for this scenario, since it has the maximal number of keys, balls to move, and doors possible.\nIn each scenario, the agent has access to a partial view of the environment, a 7x7 square that includes itself and points in the direction of its previous movement." 
}, { "heading": "C.2 HYPERPARAMETERS", "text": "In all experiments, we train six different instances of our algorithm with different random seeds. In Table 3, we report the list of hyperparameters.\nD IMPLEMENTATION DETAILS\nIn Fig. 11 is depicted the architecture of our method." }, { "heading": "E PROOF OF SECTION 4.1 RESULTS", "text": "In this section, we provide a short proof for the result of the optimization problem in Section 4.1. We recall the result here:\nπk+1 = argmax π JPI(π) ∝ ( πk πadv ) c α exp Qπk α ,\nwith the objective function:\nJPI(π) = EsEa∼π(·|s)[Qπk(s, a) + c (log πk(a|s)− log πadv(a|s))− α log π(a|s)].\nProof. We first consider a simpler optimization problem: argmaxπ〈π,Qπk〉 + αH(π), whose solution is known (Vieillard et al., 2020a, Appendix A). The expression for the maximizer is the α-scaled softmax:\nπ∗ = exp(\nQπk α )\n〈1, exp(Qπkα )〉 .\nWe now turn towards the optimization problem of interest, which we can rewrite as:\nargmax π 〈π,Qπk + c (log πk − log πadv)〉+ αH(π).\nBy the simple change of variable Q̃πk = Qπk + c (log πk − log πadv), we can reuse the previous solution (replacing Qπk by Q̃πk ). With the simplification:\nexp Qπk + c (log πk − log πadv)\nα = ( πk πadv ) c α exp Qπk α ,\nwe obtain the result and conclude the proof." } ]
2021
Adversarially Guided Actor-Critic
SP:5964ce1b29c23bb9e4b9a83a466ca0bc3f869183
[ "This paper proposes a new regularizer that can be plugged in gradient-based learning algorithms, which aims at solving the problems induced by unobserved confounders. And the authors provide the upper bound for one specific kind of distributionally robust optimization problem, whose uncertainty set is defined as the affine combinations of training distributions. And based on this the algorithm is proposed to deal with the problem of unobserved confounders. Experiments on three medical datasets validate the effectiveness of the method. " ]
The ability to extrapolate, or generalize, from observed to new related environments is central to any form of reliable machine learning, yet most methods fail when moving beyond i.i.d. data. In some cases, the reason lies in a misappreciation of the causal structure that governs the data, and in particular in the influence of unobserved confounders that drive changes in observed distributions and distort correlations. In this paper, we argue for defining generalization with respect to a broader class of distribution shifts (defined as arising from interventions in the underlying causal model), including changes in observed, unobserved and target variable distributions. We propose a new robust learning principle that may be paired with any gradient-based learning algorithm. This learning principle has explicit generalization guarantees, and relates robustness with certain invariances in the causal model, clarifying why, in some cases, test performance lags training performance. We demonstrate the empirical performance of our approach on healthcare data from different modalities, including image and speech data.
[ { "affiliations": [], "name": "UNOBSERVED CONFOUNDING" } ]
[ { "authors": [ "Soroosh Shafieezadeh Abadeh", "Daniel Kuhn" ], "title": "Distributionally robust logistic regression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Steffen Bickel", "Michael Brückner", "Tobias Scheffer" ], "title": "Discriminative learning under covariate shift", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Federico Cabitza", "Raffaele Rasoini", "Gian Franco Gensini" ], "title": "Unintended consequences of machine learning in medicine", "venue": "Jama,", "year": 2017 }, { "authors": [ "Raymond J Carroll", "David Ruppert", "Leonard A Stefanski", "Ciprian M Crainiceanu" ], "title": "Measurement error in nonlinear models: a modern perspective", "venue": "CRC press,", "year": 2006 }, { "authors": [ "John Duchi", "Hongseok Namkoong" ], "title": "Learning models with uniform performance via distributionally robust optimization", "venue": "arXiv preprint arXiv:1810.08750,", "year": 2018 }, { "authors": [ "John Duchi", "Peter Glynn", "Hongseok Namkoong" ], "title": "Statistics of robust optimization: A generalized empirical likelihood approach", "venue": "arXiv preprint arXiv:1610.03425,", "year": 2016 }, { "authors": [ "John C Duchi", "Tatsunori Hashimoto", "Hongseok Namkoong" ], "title": "Distributionally robust losses against mixture covariate shifts", "venue": "Under review,", "year": 2019 }, { "authors": [ "Wayne A Fuller" ], "title": "Measurement error models, volume 305", "venue": null, "year": 2009 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "AmirEmad Ghassami", "Saber Salehkaleybar", "Negar Kiyavash", "Kun Zhang" ], "title": "Learning causal structures using regression invariance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jaime Roquero Gimenez", "James Zou" ], "title": "Identifying invariant factors across multiple environments with kl regression", "venue": "arXiv preprint arXiv:2002.08341,", "year": 2020 }, { "authors": [ "Daniel S Kermany", "Michael Goldbaum", "Wenjia Cai", "Carolina CS Valentim", "Huiying Liang", "Sally L Baxter", "Alex McKeown", "Ge Yang", "Xiaokang Wu", "Fangbing Yan" ], "title": "Identifying medical diagnoses and treatable diseases", "venue": "by image-based deep learning. 
Cell,", "year": 2018 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": "arXiv preprint arXiv:2003.00688,", "year": 2020 }, { "authors": [ "Daniel Kuhn", "Peyman Mohajerin Esfahani", "Viet Anh Nguyen", "Soroosh Shafieezadeh-Abadeh" ], "title": "Wasserstein distributionally robust optimization: Theory and applications in machine learning", "venue": "In Operations Research & Management Science in the Age of Analytics,", "year": 2019 }, { "authors": [ "Giovanni Leoni" ], "title": "A first course in Sobolev spaces", "venue": "American Mathematical Soc.,", "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Nicolai Meinshausen" ], "title": "Causality from a distributional robustness point of view", "venue": "IEEE Data Science Workshop (DSW),", "year": 2018 }, { "authors": [ "Judea Pearl" ], "title": "Why there is no statistical test for confounding, why many think there is, and why they are almost right", "venue": null, "year": 1998 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Dominik Rothenhäusler", "Nicolai Meinshausen", "Peter Bühlmann", "Jonas Peters" ], "title": "Anchor regression: heterogeneous data meets causality", "venue": "arXiv preprint arXiv:1801.06229,", "year": 2018 }, { "authors": [ "Dominik Rothenhäusler", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal dantzig: fast inference in linear structural equation models with hidden variables under additive interventions", "venue": "The Annals of Statistics,", "year": 2019 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": null, "year": 1911 }, { "authors": [ "Betul Erdogdu Sakar", "M Erdem Isenkul", "C Okan Sakar", "Ahmet Sertbas", "Fikret Gurgen", "Sakir Delil", "Hulya Apaydin", "Olcay Kursun" ], "title": "Collection and analysis of a parkinson speech dataset with multiple types of sound recordings", "venue": "IEEE Journal of Biomedical and Health Informatics,", "year": 2013 }, { "authors": [ "Shiv Shankar", "Vihari Piratla", "Soumen Chakrabarti", "Siddhartha Chaudhuri", "Preethi Jyothi", "Sunita Sarawagi" ], "title": "Generalizing across domains via cross-gradient training", "venue": "arXiv preprint arXiv:1804.10745,", "year": 2018 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "arXiv preprint arXiv:1710.10571,", "year": 2017 }, { "authors": [ "Amal Rannen Triki", "Maxim Berman", "Matthew B Blaschko" ], "title": "Function norms and regularization in deep networks", "venue": "arXiv preprint arXiv:1710.06703,", "year": 2017 }, { "authors": [ "Subhashini Venugopalan", "Arunachalam Narayanaswamy", "Samuel Yang", "Anton Gerashcenko", "Scott Lipnick", "Nina Makhortova", "James 
Hawrot", "Christine Marques", "Joao Pereira", "Michael Brenner" ], "title": "It’s easy to fool yourself: Case studies on identifying bias and confounding in bio-medical datasets", "venue": "arXiv preprint arXiv:1912.07661,", "year": 2019 }, { "authors": [ "Riccardo Volpi", "Hongseok Namkoong", "Ozan Sener", "John C Duchi", "Vittorio Murino", "Silvio Savarese" ], "title": "Generalizing to unseen domains via adversarial data augmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xiaosong Wang", "Yifan Peng", "Le Lu", "Zhiyong Lu", "Mohammadhadi Bagheri", "Ronald M Summers" ], "title": "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "David Wozabal" ], "title": "A framework for optimization under ambiguity", "venue": "Annals of Operations Research,", "year": 2012 }, { "authors": [ "John R Zech", "Marcus A Badgeley", "Manway Liu", "Anthony B Costa", "Joseph J Titano", "Eric K Oermann" ], "title": "Confounding variables can degrade generalization performance of radiological deep learning models", "venue": "arXiv preprint arXiv:1807.00431,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Prediction algorithms use data, necessarily sampled under specific conditions, to learn correlations that extrapolate to new or related data. If successful, the performance gap between these two domains is small, and we say that algorithms generalize beyond their training data. Doing so is difficult however, some form of uncertainty about the distribution of new data is unavoidable. The set of potential distributional changes that we may encounter is mostly unknown and in many cases may be large and varied. Some examples include covariate shifts (Bickel et al., 2009), interventions in the underlying causal system (Pearl, 2009), varying levels of noise (Fuller, 2009) and confounding (Pearl, 1998). All of these feature in modern applications, and while learning systems are increasingly deployed in practice, generalization of predictions and their reliability in a broad sense remains an open question.\nA common approach to formalize learning with uncertain data is, instead of optimizing for correlations in a fixed distribution, to do so simultaneously for a range of different distributions in an uncertainty set P (Ben-Tal et al., 2009).\nminimize f sup P∈P E(x,y)∼P [L(f(x), y)] (1)\nfor some measure of error L of the function f that relates input and output examples (x, y) ∼ P . Choosing different sets P leads to estimators with different properties. It includes as special cases, for instance, many approaches in domain adaptation, covariate shift, robust statistics and optimization (Kuhn et al., 2019; Bickel et al., 2009; Duchi et al., 2016; 2019; Sinha et al., 2017; Wozabal, 2012; Abadeh et al., 2015; Duchi & Namkoong, 2018). Robust solutions to problem (1) are said to generalize if potential shifted, test distributions are contained in P , but also larger sets P result in conservative solutions (i.e. with sub-optimal performance) on data sampled from distribution away from worst-case scenarios, in general.\nOne formulation of causality is in fact also a version of this problem, for P defined as any distribution arising from arbitrary interventions on observed covariates x leading to shifts in their distribution Px (see e.g. sections 3.2 and 3.3 in (Meinshausen, 2018)). The invariance to changes in covariate distributions of causal solutions is powerful for generalization, but implicitly assumes that all\ncovariates or other drivers of the outcome subject to change at test time are observed. Often shifts occur elsewhere, for example in the distribution of unobserved confounders, in which case also conditional distributions Py|x may shift. Perhaps surprisingly, in the presence of unobserved confounders, the goals of achieving robustness and learning a causal model can be different (and similar behaviour also occurs with varying measurement noise). There is in general an inherent trade-off in generalization performance. In the presence of unobserved confounders, causal and correlation-based solutions are both optimal in different regimes, depending on the shift in the underlying generating mechanism from which new data is generated.\nConsider a simple example, illustrated in Figure 1, to show this explicitly. We assume access to observations of variables (X1, X2, Y ) in two training datasets, each dataset sampled with differing variances (σ2 = 1 and σ2 = 2) from the following structural model F,\nX2 := −H + EX2 , Y := X2 + 3H + EY , X1 := Y +X2 + EX1 ,\nEX1 , EX2 ∼ N (0, σ2), EY ∼ N (0, 1) are exogenous variables. 
In a first scenario (leftmost panel) we consider all data (training and testing) to be generated without unobserved confounders, H := 0; and, in a second scenario (remaining panels) all data with unobserved confounders, H := EH ∼ N (0, 1). Each panel of Figure 1 shows performance on new data obtained after manipulating the underlying data generating system; the magnitude and type of intervention appears in the horizontal axis. We consider the following learning paradigms: Ordinary Least Squares (OLS) learns the linear mapping that minimizes average training risk, Domain Robust Optimization (DRO) minimizes the maximum training risk among the two available datasets, and the causal solution, assumed known, has fixed coefficients (0, 1) for (X1, X2). Two important observations motivate this paper.\nFirst, observe that Ordinary Least Squares (OLS) and Domain Robust Optimization (DRO) absorb spurious correlations (due toH , and the fact thatX1 is caused by Y ) with unstable performance under shifts in p(X1, X2) but as a consequence good performance under shifts in p(H). Causal solutions, by contrast, are robust to shifts in p(X1, X2), even on new data with large shifts, but underperform substantially under changes in the distribution of unobserved confounders p(H). Second, the presence of unobserved confounding hurts generalization performance in general with higher errors for all methods, e.g. contrast the middle and leftmost panel. To the best of our knowledge, the influence of unobserved confounders has been minimally explored in the context of generalization of learning algorithms, even though, as Figure 1 shows, in this context different shifts in distribution may have important consequences for predictive performance.\nOur Contributions. In this paper we provide a new choice of P and learning problem (1) that we show to be justified by certain statistical invariances across training and testing data, to be expected in the presence of unobserved confounders. This leads us to define a new differentiable, regularized objective for representation learning. Our proposal defines P as an affine combination of available training data distributions, and we show that solutions to this problem are robust to more general shifts in distribution than previously considered, spanning robustness to shifts in observed, unobserved, and target variables, depending on the properties of the available training data distributions. This approach has benefits for performance out-of-sample but also for tasks involving variable selection, where important features are consistently replicated across experiments with our objective." }, { "heading": "2 INVARIANCES IN THE PRESENCE OF UNOBSERVED CONFOUNDERS", "text": "This section formally introduces the problem of out-of-distribution generalization. We describe in greater detail the reasons that popular learning principles, such as Empirical Risk Minimization (ERM), underperform in general, and define certain invariances to recover solutions that generalize.\nWe take the perspective that all potential distributions that may be observed over a system of variables arise from a causal modelM = (F,V,U), characterized by endogenous variables, V ∈ V , representing all variables determined by the system, either observed or not; exogenous variables, U ∈ U , in contrast imposed upon the model, and a sequence of structural equations F : U → V , describing how endogenous variables can be (deterministically) obtained from the exogenous variables (Pearl, 2009). 
An example is given in Figure 1: $V = (X_1, X_2, H, Y)$ are endogenous and $U = (E_{X_1}, E_{X_2}, E_H, E_Y)$ are exogenous variables.\nUnseen data is generated from such a system $\mathcal{M}$ after manipulating the distribution of exogenous variables $\mathbf{U}$, which propagates across the system shifting the joint distribution of all variables $\mathbf{V}$, whether observed or unobserved, but keeping the causal mechanisms $F$ unchanged. Representative examples include changes in data collection conditions, such as due to different measurement devices, or new data sources, such as patients in different hospitals or countries, among many others.\nOur goal is to learn a representation $Z = \phi(X)$ acting on a set of observed variables $X \subset \mathbf{V}$ with the ability to extrapolate to new unseen data, and doing so acknowledging that all relevant variables in $\mathbf{V}$ are likely not observed. Unobserved confounders (for the task at hand, say predicting $Y \in \mathbf{V}$) simultaneously cause $X$ and $Y$, confounding or biasing the causal association between $X$ and $Y$, giving rise to spurious correlations that do not reproduce in general (Pearl, 1998; 2009). We present a brief argument below highlighting the systematic bias due to unobserved confounders in ERM." }, { "heading": "2.1 THE BIASES OF UNOBSERVED CONFOUNDING", "text": "Consider the following structural equation for observed variables $(X, Y)$,\n$$Y := f \circ \phi(X) + E \quad (2)$$\nwhere $f := f(\cdot; \beta_0)$ is a predictor acting on a representation $Z := \phi(X)$ and $E$ stands for potential sources of misspecification and unexplained sources of variability. For a given sample of data $(x, y)$ and $z = \phi(x)$, the optimal prediction rule $\hat\beta$ is often taken to minimize squared residuals, with $\hat\beta$ the solution to the normal equations: $\nabla_\beta f(z; \hat\beta)\, y = \nabla_\beta f(z; \hat\beta)\, f(z; \hat\beta)$, where $\nabla_\beta f(z; \hat\beta)$ denotes the column vector of gradients of $f$ with respect to parameters $\beta$ evaluated at $\hat\beta$. Consider the Taylor expansion of $f(z; \beta_0)$ around an estimate $\hat\beta$ sufficiently close to $\beta_0$: $f(z; \beta_0) \approx f(z; \hat\beta) + \nabla_\beta f(z; \hat\beta)^T(\beta_0 - \hat\beta)$. Using this approximation in our first order optimality condition we find,\n$$\nabla_\beta f(z; \hat\beta)\nabla_\beta f(z; \hat\beta)^T (\beta_0 - \hat\beta) + v = \nabla_\beta f(z; \hat\beta)\,\epsilon \quad (3)$$\nwhere $v$ is a scaled disturbance term that includes the rest of the linear approximation of $f$ and is small asymptotically; $\epsilon := y - f(z; \hat\beta)$ is the residual. $\hat\beta$ is consistent for the true $\beta_0$ if and only if $\nabla_\beta f(z; \hat\beta)\,\epsilon \to 0$ in probability. This assumption is satisfied if $E$ (all sources of variation in $Y$ not captured by $X$) is independent of $X$ (i.e. exogenous) or, in other words, if all common causes or confounders of both $X$ and $Y$ have been observed. Otherwise, conventional regression may assign significant associations to variables that are neither directly nor indirectly related to the outcome, and in this case, we have no performance guarantees on new data with changes in the distribution of these variables. Omitted variables are a common source of unobserved confounding, but we note in Appendix B that similar biases also arise from other prevalent model misspecifications, such as measurement error (Carroll et al., 2006)." }, { "heading": "2.2 INVARIANCES WITH MULTIPLE ENVIRONMENTS", "text": "The underlying structural mechanism $F$, which also relates unobserved with observed variables, even if unknown, is stable irrespective of manipulations in exogenous variables that may give rise to heterogeneous data sources. 
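(To ground the bias argument of Section 2.1 numerically, here is a small simulation of the introductory structural model showing that least-squares coefficients for $(X_1, X_2)$ deviate from the causal $(0, 1)$ once $H$ is active. Function and variable names are our own; this is an illustrative sketch, not code from the paper.)
```python
import numpy as np

def sample_env(n, sigma, rng, confounded=True):
    """Simulate the introductory structural model; H is unobserved."""
    H = rng.normal(0.0, 1.0, n) if confounded else np.zeros(n)
    X2 = -H + rng.normal(0.0, sigma, n)
    Y = X2 + 3.0 * H + rng.normal(0.0, 1.0, n)
    X1 = Y + X2 + rng.normal(0.0, sigma, n)
    return np.column_stack([X1, X2]), Y

rng = np.random.default_rng(0)
X, Y = sample_env(100_000, sigma=1.0, rng=rng)
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
print(beta_ols)  # noticeably far from the causal coefficients (0, 1)
```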
Under certain conditions, statistical footprints emerge from this structural invariance across different data sources, properties testable from data that have been exploited recently, for example (Peters et al., 2016; Ghassami et al., 2017; Rothenhäusler et al., 2019).\nWe assume that such a heterogeneous data scenario applies: input and output pairs $(X, Y)$ are observed across heterogeneous data sources or environments $e$, defined as a probability distribution $P_e$ over an observation space $\mathcal{X} \times \mathcal{Y}$ that arises, just like new unseen data, from manipulations in the distribution of exogenous variables in an underlying model $\mathcal{M}$. For the remainder of this section, consider restricting ourselves to data sources emerging from manipulations in exogenous $E_X$, appearing in the structural equations of $X$ only in an underlying additive noise model (see Appendix C.1 for the precise statement of assumptions and more context). It may be shown, by considering the distribution of the error terms $Y - f \circ \phi(X)$ and their correlation with any function of $X$, that the inner product $\nabla_\beta f(z; \beta_0)\,\epsilon$, even if non-zero due to unobserved confounding, converges to a fixed unknown value equal across training environments (see Appendix C.1 for the derivation). With a similar decomposition to the one given in equation (3), in the population case, it holds that up to disturbance terms,\n$$\left( \mathbb{E}_{(x,y)\sim P_i} \nabla_\beta f(z;\beta^\star)\nabla_\beta f(z;\beta^\star)^T - \mathbb{E}_{(x,y)\sim P_j} \nabla_\beta f(z;\beta^\star)\nabla_\beta f(z;\beta^\star)^T \right) (\beta_0 - \beta^\star) = \mathbb{E}_{(x,y)\sim P_i} \nabla_\beta f(z;\beta^\star)\,\epsilon - \mathbb{E}_{(x,y)\sim P_j} \nabla_\beta f(z;\beta^\star)\,\epsilon = 0 \quad (4)$$\nwhere $\beta^\star$ is a solution to\n$$\mathbb{E}_{(x,y)\sim P_i} \nabla_\beta f(z;\beta)\,(y - f(z;\beta)) - \mathbb{E}_{(x,y)\sim P_j} \nabla_\beta f(z;\beta)\,(y - f(z;\beta)) = 0, \quad (5)$$\nand is consistent for the causal parameters $\beta_0$ if unique. $i, j \in \mathcal{E}$ are the indices of any two observed environments in an index set $\mathcal{E}$. This invariance across environments must hold for causal parameters (under certain conditions) even in the presence of unobserved confounders.\nA few remarks are necessary concerning this relationship and its extrapolation properties.\n• The first is based on the observation that, up to a constant, each inner product in (5) is the gradient of the squared error with respect to $\beta$. This reveals that the optimal predictor, in the presence of unobserved confounding, is not one that produces minimum loss but one that produces a non-zero loss gradient equal across environments. Seeking minimum error solutions, even in the population case, produces estimators with necessarily unstable correlations because the variability due to unobserved confounders is not explainable from observed data. Forcing gradients to be zero then forces models to utilize artifacts of the specific data collection process that are not related to the input-output relationship; and, for this reason, they will not in general perform outside training data.\n• From (5) we may pose a sequence of moment conditions for each pair of available environments. We may then seek solutions $\beta$ that make all of them small simultaneously. Solutions are unique if the set of moments is sufficient to identify $\beta^\star$ exactly (and given our model assumptions may be interpreted as causal and robust to certain interventions). 
We revisit our introductory example to show in Appendix A that, in contrast to ERM and Invariant Risk Minimization (IRM) (a related approach proposed in (Arjovsky et al., 2019) that we discuss in more detail in later sections), this procedure does recover the underlying causal model correctly in the presence of unobserved confounding.\n• In practice, however, only a set of solutions may be identified, with no performance guarantees for any individual solution, and no guarantees if assumptions fail to hold. Moreover, even if accessible, causal solutions, robust to certain distribution shifts, may not always be desirable under more general shifts (recall for instance the experiments in the rightmost panel of Figure 1)." }, { "heading": "3 GENERALIZATION FROM A ROBUST OPTIMIZATION PERSPECTIVE", "text": "While certain invariances may hold in the presence of unobserved confounding, we have in general no guarantees on performance under more general manipulations. In this section we motivate a relaxation of the ideas presented above by considering bounds on the worst-case performance on data arising from distribution shifts informed by the structure of available environments.\nOptimizing for the worst case loss in a set of domains has been shown to ensure accurate prediction on any convex mixture of training environments (Ben-Tal et al., 2009). The space of convex mixtures, however, is restrictive. Systems of variables are in general high-dimensional and new manipulations likely occur at a new vertex not represented as a linear combination of training environments. By extrapolation we desire performance guarantees outside this convex hull. The extension we consider optimizes instead over an affine combination of training losses, similarly to (Krueger et al., 2020).\nLet $\Delta_\eta := \{\{\alpha_e\}_{e\in\mathcal{E}} : \alpha_e \geq -\eta,\ \sum_{e\in\mathcal{E}} \alpha_e = 1\}$ be a collection of scalars and consider the set of distributions defined by $\mathcal{P} := \{\sum_{e\in\mathcal{E}} \alpha_e P_e : \{\alpha_e\} \in \Delta_\eta\}$, all affine combinations of distributions defined by the available environments. $\eta \in \mathbb{R}$ defines the strength of the extrapolation; $\eta = 0$ corresponds to a convex hull of distributions, but above that value the space of distributions is richer, going beyond what has been observed: affine combinations amplify the strength of manipulations that generated the observed training environments. The following theorem presents an upper bound to the robust problem (1) with affine combinations of errors.\nTheorem 1. Let $\{P_e\}_{e\in\mathcal{E}}$ be a set of available environments. Further let the parameter space of $\beta$ be open and bounded, such that the expected loss function $L$ as a function of $\beta$ belongs to a Sobolev space. Then, the following inequality holds,\n$$\sup_{\{\alpha_e\}\in\Delta_\eta} \sum_{e\in\mathcal{E}} \alpha_e\, \mathbb{E}_{(x,y)\sim P_e} L(f\circ\phi(x), y) \leq \mathbb{E}_{(x,y)\sim P_e,\, e\sim\mathcal{E}}\, L(f\circ\phi(x), y) + (1 + n\eta)\cdot C\cdot \left\| \sup_{e\in\mathcal{E}} \mathbb{E}_{(x,y)\sim P_e} \nabla_\beta L(f\circ\phi(x), y) - \mathbb{E}_{(x,y)\sim P_e,\, e\sim\mathcal{E}} \nabla_\beta L(f\circ\phi(x), y) \right\|_{L^2}$$\nwhere $\|\cdot\|_{L^2}$ denotes the $L^2$-norm, $C$ depends on the domain of $\beta$, $n := |\mathcal{E}|$ is the number of available environments and $e \sim \mathcal{E}$ loosely denotes sampling indices with equal probability from $\mathcal{E}$. The proof is given in Appendix C.2. This bound illustrates the trade-off between invariances (of the type explored under unobserved confounding, that appear in the second term of the RHS of the inequality above) and prediction in-sample (the first term). 
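(As a quick check of the ERM endpoint discussed next — our own verification of the constraint set, not an additional result from the paper:)
```latex
% For eta = -1/n the constraints alpha_e >= 1/n and sum_e alpha_e = 1
% over n environments force alpha_e = 1/n for every e, so the supremum
% collapses to the plain average of environment risks, i.e. ERM:
\sup_{\{\alpha_e\}\in\Delta_{-1/n}} \sum_{e\in\mathcal{E}} \alpha_e\,
  \mathbb{E}_{(x,y)\sim P_e} L\big(f\circ\phi(x),\, y\big)
  \;=\; \frac{1}{n} \sum_{e\in\mathcal{E}}
  \mathbb{E}_{(x,y)\sim P_e} L\big(f\circ\phi(x),\, y\big).
```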
A combination of them upper-bounds a robust optimization problem over affine combinations of training environments, and depending on how much we weight each objective (prediction versus invariance) we can expect solutions to be more or less robust. Specifically, for $\eta = -1/n$ the objective reduces to empirical risk minimization, but otherwise the upper bound increasingly weights differences in loss derivatives (violations of the invariances of section 2.2), and in the limit ($\eta \to \infty$) can be interpreted to be robust at least to any affine combination of training losses.\nNote that the requirement that $F$ be fixed, or that interventions occur on observed variables, is not necessary for generalization guarantees. As long as new data distributions can be represented as affine combinations of training distributions, we can expect performance to be at least as good as that observed for the robust problem in Theorem 1." }, { "heading": "3.1 PROPOSED OBJECTIVE", "text": "Our proposal is to guide the optimization of $\phi$ and $\beta$ towards solutions that minimize the upper bound in Theorem 1, satisfying approximately the moment conditions (5) and simultaneously optimizing for minimum average error. Using Lagrange multipliers we define the general objective,\n$$\operatorname*{minimize}_{\beta,\phi}\ \mathbb{E}_{(x,y)\sim P_e,\, e\sim\mathcal{E}}\, L(f\circ\phi(x), y) + \lambda \cdot \operatorname{Var}_{e\sim\mathcal{E}}\Big( \big\| \mathbb{E}_{(x,y)\sim P_e} \nabla_\beta L(f\circ\phi(x), y) \big\|_{L^2} \Big) \quad (6)$$\nwhere $\lambda \geq 0$. We call this problem Derivative Invariant Risk Minimization (DIRM). This objective shares similarities with the objective proposed in (Krueger et al., 2020). The authors considered enforcing equality in environment-specific losses, rather than derivatives, as regularization, which can also be related to a robust optimization problem over an affine combination of errors. We have seen in section 2.2, however, that equality in losses is not expected to hold in the presence of unobserved confounders (e.g. due to changes in the irreducible error across environments, which may occur also after interventions on target variables).\nThe $L^2$ norm in the regularizer is an integral over the domain of values of $\beta$ and is in general intractable. We approximate this objective in practice with norms on functional evaluations at each step of the optimization. Theorem 1 is used as a guide for optimization: we encourage the values of the regularizer to be zero and thus only indirectly the regularizer function itself. We give more details in Appendix D.1.1." }, { "heading": "3.2 ROBUSTNESS IN TERMS OF INTERVENTIONS", "text": "As is apparent in Theorem 1, performance guarantees on data from a new environment depend on the relationship of new distributions with those observed during training.\nLet $f \circ \phi_{\lambda\to\infty}$ minimize $L$ among all functions that satisfy all pairs of moment conditions defined in (5); that is, a solution to our proposed objective in (6) with $\lambda \to \infty$. At optimality, it holds that gradients evaluated at this solution are equal across environments. As a consequence of Theorem 1, the loss evaluated at this solution with respect to any affine combination of environments is bounded by the average loss computed in-sample (denoted $L$, say),\n$$\sum_{e\in\mathcal{E}} \alpha_e\, \mathbb{E}_{(x,y)\sim P_e} L(f\circ\phi(x), y) \leq L, \quad \text{for any set of } \alpha_e \in \Delta_\eta \quad (7)$$\nFrom the perspective of interventions in the underlying causal mechanism, this can be seen as a form of data-driven predictive stability across a range of distributions whose perturbations occur in the same direction as those observed during training. As a minimal example for intuition, consider a system of three variables $(X, Y, H)$, with differing interventions on $X$ only. 
Assume we have access to data sampled under two environments defined by joint distributions $p_1(X,Y,H) := p(Y,H|X)\,p_1(X)$ and $p_2(X,Y,H) := p(Y,H|X)\,p_2(X)$. Errors are bounded with respect to any (valid) distribution of the form $p(Y,H|X)\,(\alpha_1 p_1(X) + \alpha_2 p_2(X))$. For instance, a particular shift in the mean or variance of $X$ observed during training can be extrapolated in the extreme to any shift in the mean or variance of new data. With this reasoning, if environment-specific distributions arise from differing interventions on all observed covariates $X$, then solutions are robust to arbitrary shifts in these variables. If, in addition, solutions are unique, we may interpret them as causal, irrespective of the presence or not of unobserved confounders.\nThe generalization properties, however, go further. Interventions on unobserved variables and also target variables are accommodated for in $f \circ \phi_{\lambda\to\infty}$, if observed through shifted distributions in different available environments. Using our simple example in Figure 1 to verify this intuition empirically, we consider 3 scenarios corresponding to interventions on the exogenous variables of $X$, $H$ and $Y$. In each, training data from two environments is generated with the means in the distribution of the concerned variables set to a value of 0 and 1 respectively, everything else being equal ($\sigma^2 := 1$, $H := E_H \sim \mathcal{N}(0,1)$). Performance is evaluated on out-of-sample data generated by increasing the shift in the variable being studied up to a mean of 5. In all cases, we see in Figure 2 that performance is stable to increasing perturbations in the system as long as the heterogeneity in the data allows us to capture the direction of the unseen shift. (Figure 2: Stability to general shifts.)" }, { "heading": "3.3 STABILITY OF CERTAIN OPTIMAL SOLUTIONS", "text": "A special case may also be considered when the underlying system of variables and the available environments allow for optimal solutions $f \circ \phi_{\lambda\to\infty}$ and $f \circ \phi_{\lambda=0}$ to coincide. In this case, the learned representation $\phi(x)$ results in a predictor $f$ optimal on average and simultaneously with equal gradient in each environment, thus,\n$$\big\| \mathbb{E}_{(x,y)\sim P_e} \nabla_\beta L(f\circ\phi(x), y) \big\|_{L^2} = 0, \quad \text{for all } e \in \mathcal{E} \quad (8)$$\nFor this representation $\phi$, it follows that the optimal solution $f$ learned on any new dataset sampled from an affine combination of training distributions coincides with this special solution. This gives us a sense of reproducibility of learning. In other words, if a specific feature is significant for predictions on the whole range of $\lambda$ with the available data, then it will likely be significant on new (related) data.\nThe above special case where all solutions in our hyperparameter range agree has important parallels with IRM (Arjovsky et al., 2019). The authors proposed a learning objective enforcing representations of data with minimum error on average and across environments, such that at optimum $\mathbb{E}_{P_i}[Y|\phi^\star(X)] = \mathbb{E}_{P_j}[Y|\phi^\star(X)]$ for any pair $(i,j) \in \mathcal{E}$. Without unobserved confounding, both learning paradigms agree but, with unobserved confounding, minimum error solutions of IRM by design
There has been a growing interest in interpreting shifts in distribution to fundamentally arise from interventions in the causal mechanisms of data. Peters et al. (Peters et al., 2016) exploited this link for causal inference: causal relationships by definition being invariant to the observational regime. Invariant solutions, as a result of this connection, may be interpreted also as robust to certain interventions (Meinshausen, 2018), and recent work has explored learning invariances in various problem settings (Arjovsky et al., 2019; Rothenhäusler et al., 2019; Krueger et al., 2020; Gimenez & Zou, 2020). Among those, we note the invariance proposed in (Rothenhäusler et al., 2019), the authors seek to recover causal solutions with unobserved confounding. Generalization properties of these solutions were rarely studied, with one exception being Anchor regression (Rothenhäusler et al., 2018). The authors proposed to interpolate between empirical risk minimization and causal solutions with explicit robustness to certain interventions in a linear model. The present work may be interpreted as a non-linear formulation of this principle with a more general study of generalization.\nDomain generalization represent one direction of out-of-sample generalization by explicitly learning representations projecting out superficial environment-specific information. Recent work on domain generalization has included the use data augmentation (Volpi et al., 2018; Shankar et al., 2018), meta-learning to simulate domain shift (Li et al., 2018) and adversarially learning representations that are environment invariant (Ganin et al., 2016), even though explicitly aligning representations has important caveats when label distributions differ, articulated for instance in (Arjovsky et al., 2019).\nDistributionally robust optimization explicitly solves a worst-case optimization problem (1). A popular approach is to define P as a ball around the empirical distribution P̂ , for example using f -divergences or Wasserstein balls of a defined radius (Kuhn et al., 2019; Duchi et al., 2016; 2019; Sinha et al., 2017; Wozabal, 2012; Abadeh et al., 2015; Duchi & Namkoong, 2018). These are general and multiple environments are not required, but this also means that sets are defined agnostic to the geometry of plausible shifted distributions, and may therefore lead to solutions, when tractable, that are overly conservative or do not satisfy generalization requirements (Duchi et al., 2019)." }, { "heading": "5 EXPERIMENTS", "text": "Data linkages, electronic health records, and bio-repositories, are increasingly being collected to inform medical practice. As a result, also prediction models derived from healthcare data are being put forward as potentially revolutionizing decision-making in hospitals. Recent studies (Cabitza et al., 2017; Venugopalan et al., 2019), however, suggest that their performance may reflect not only their ability to identify disease-specific features, but also their ability to exploit spurious correlations due to unobserved confounding (such as varying data collection practices): a major challenge for the reliability of decision support systems. In this section, we explore this pattern conducting a wide analysis of domain generalization on image, speech and tabular data from the medical domain.\nWe consider the following baseline algorithms for performance comparisons.\n• Empirical Risk Minimization (ERM) that optimizes for minimum loss agnostic of data source. 
• Domain Robust Optimization (DRO) (Sagawa et al., 2019) that optimizes for minimum loss across the worst convex mixture of training environments.\n• Domain Adversarial Neural Networks (DANN) (Ganin et al., 2016) that use domain adversarial training to facilitate transfer by augmenting the neural network architecture with an additional domain classifier to enforce the distribution of $\phi(X)$ to be the same across training environments.\n• Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) that regularizes ERM, ensuring representations $\phi(X)$ be optimal in every observed environment.\n• Risk Extrapolation (REx) (Krueger et al., 2020) that regularizes for equality in environment losses instead of considering their derivatives.\nAll trained models use the same convolutional or fully-connected architecture, where appropriate. Performance results are given in Table 1. Further experimental details and pseudo-code for DIRM can be found in Appendix D." }, { "heading": "5.1 DIAGNOSIS OF PNEUMONIA WITH CHEST X-RAY DATA", "text": "In this section, we attempt to replicate the study in (Zech et al., 2018). The authors observed a tendency of image models towards exploiting spurious correlations for the diagnosis of pneumonia from patient chest X-rays, correlations that do not reproduce outside of training data. We use publicly available data from the National Institutes of Health (NIH) (Wang et al., 2017) and the Guangzhou Women and Children’s Medical Center (GMC) (Kermany et al., 2018). Differences in distribution are manifest, and can be seen for example in the top edge of mean pneumonia-diagnosed X-rays shown in Figure 3.\nIn this experiment, we exploit this (spurious) pathology correlation to demonstrate the need for solutions robust to changes in site-specific features. We construct two training sets: in each case, 90% and 80% of pneumonia-diagnosed patients were drawn from the NIH dataset and the remaining 10% and 20% of the pneumonia-diagnosed patients were drawn from the GMC dataset; the reverse logic (10%/90% split) was followed for the test set. (Figure 3: Average pneumonia X-ray.)\nOur results show that DIRM outperforms the baselines, suggesting that the proposed invariances guide solutions better towards robustness even to changes due to unobserved factors." }, { "heading": "5.2 DIAGNOSIS OF PARKINSON’S DISEASE WITH VOICE RECORDINGS", "text": "Parkinson’s disease is a progressive nervous system disorder that affects movement. Symptoms start gradually, sometimes starting with a barely noticeable tremor in a patient’s voice. This section investigates the performance of predictive models for the detection of Parkinson’s disease, trained on voice recordings of vowels, numbers and individual words and tested on vowel recordings of unseen patients.\nWe used the UCI Parkinson Speech Dataset with given training and testing splits (Sakar et al., 2013). Even though the distributions of features will differ in different types of recordings and patients, we would expect the underlying patterns in speech to reproduce across different samples. However, this is not the case for correlations learned with baseline training paradigms (Table 1). This suggests that spurious correlations due to the specific type of recording (e.g. different vowels or numbers), or even chance associations emphasized due to low sample sizes (120 examples), may be responsible for poor generalization performance. Our results show that correcting for spurious differences between recording types (DIRM, IRM, REx) can improve performance substantially."
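(For completeness, a minimal PyTorch-style sketch of the DIRM objective (6) as it could be implemented for these experiments: average risk plus λ times the variance of per-environment gradient norms taken with respect to the final-layer parameters β. Names and structure are our assumptions; the authors' actual training code (Appendix D) may differ.)
```python
import torch

def dirm_objective(phi, head, env_batches, loss_fn, lam):
    """Sketch of objective (6): mean risk + lam * Var_e ||grad_beta risk_e||.

    `phi` is the representation network, `head` the predictor f with
    parameters beta, and `env_batches` holds one (x, y) batch per
    training environment.
    """
    risks, grad_norms = [], []
    for x, y in env_batches:
        risk = loss_fn(head(phi(x)), y)
        risks.append(risk)
        # Per-environment gradient w.r.t. beta; create_graph=True keeps
        # the graph so the variance regularizer is itself differentiable.
        grads = torch.autograd.grad(risk, list(head.parameters()),
                                    create_graph=True)
        grad_norms.append(torch.cat([g.reshape(-1) for g in grads]).norm())
    return torch.stack(risks).mean() + lam * torch.stack(grad_norms).var()
```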
}, { "heading": "5.3 SURVIVAL PREDICTION WITH ELECTRONIC HEALTH RECORDS", "text": "This section investigates whether predictive models transfer across data from different medical studies (the MAGGIC, 2012, studies), all containing patients that experienced heart failure. The problem is to predict survival within 3 years of experiencing heart failure from a total of 33 demographic variables. We introduce a twist however, explicitly introducing unobserved confounding by omitting certain\npredictive variables. The objective is to test performance on new studies with shifted distributions, while knowing that these occur predominantly due to variability in unobserved variables.\nConfounded data is constructed by omitting a patient’s age from the data, found in a preliminary correlation analysis to be associated with the outcome as well as other significant predictors such as blood pressure and body mass index. This example is constructed to be able to control for how unobserved variables shift but note that we can expect similar phenomena in many other scenarios, where for instance a prediction model is taken to patients in a different hospital or country with fundamental shifts in the distribution of very relevant variables (e.g. socio-economic status, ethnicity, diet, etc.) even though this information is not reported in the data. Performance is tested on all studies of over 500 patients with balanced death rates, each having slightly different age distributions. (We give more details in Appendix D). We found DIRM, robust to changes in unobserved variables, to outperform all other methods.\nInfluential variables that reproduce across datasets. In the following, we tackle the problem of reproducibility of learned influential features across different experiments. Reproducing conclusions of influential features in different studies with potential shifts in the distribution is an important challenge, especially in healthcare where heterogeneity between patient populations is high. We showed in section 3.3 that in the event that the optimal predictor is invariant as a function λ ∈ [0,∞), optimal predictors estimated in every new dataset in the span of observed distributions, should be stable. We consider here a form of diluted stability for feature selection. For a single layer network, we consider significant those covariates with estimated parameters bounded away from zero in all solutions in the range λ ∈ [0, 1]. Comparisons are made with ERM (conventional logistic regression), both methods trained separately on 100 random pairs of studies. Figure 4 shows how many features (in the top 10 of predictive features) from each model intersect across pairs of studies. In constrast to ERM, our objective recovers significant features much more consistently\nserves to demonstrate the improved reproducibility we can expect from DIRM.\nFigure 4: Reproducible features." }, { "heading": "6 CONCLUSIONS", "text": "We have studied the problem of out-of-sample generalization from a new perspective, grounded in the underlying causal mechanism generating new data that may arise from shifts in observed, unobserved or target variables. Our proposal is a new objective that is provably robust to certain shifts in distribution, and is informed by certain statistical invariances in the presence of unobserved confounders. 
Our experiments show that we may expect better generalization performance and also better reproducibility of influential features in problems of variable selection.\nA limitation of our approach is that robustness guarantees crucially depend on the (unobserved) properties of available data. The proposed approach, Derivative Invariant Risk Minimization, generally does not guarantee protection against unsuspected events. More specifically, we cannot expect robust prediction when the heterogeneity in test data sets is different from the restricted set of shift interventions that have been observed on the training data sets. For example, in Theorem 1, the supremum contains distributions that lie in the affine combination of training environments, as opposed to arbitrary distributions." }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "So far, we have considered predictive performance under different data distributions with selected hyper-parameter configurations of all algorithms, to illustrate the heterogeneous behaviour of algorithms trained with different learning principles in the presence of unobserved confounders. In this section we revisit our introductory example to investigate the learned prediction rules in more detail, along with any sensitivities of interest, especially to hyper-parameter configurations.\nWe will use the same data generating mechanism presented in the introductory example in Figure 1 of the main body of this paper. Recall that we assume access to observations of variables (X_1, X_2, Y) in two training datasets, each dataset sampled with differing interventions on (X_1, X_2) (in this case differing variances σ² = 1 and σ² = 2) from the following structural model,\nX_2 := −H + E_{X_2},  Y := X_2 + 3H + E_Y,  X_1 := Y + X_2 + E_{X_1},  H := E_H\nwhere E_{X_1}, E_{X_2} ∼ N(0, σ²), E_Y ∼ N(0, 1), E_H ∼ N(0, 1) are exogenous variables. H is an unobserved confounder, not observed during training, that influences the observed association between X_2 and Y.\nA.1 RECOVERY OF CAUSAL COEFFICIENTS\nIn this section, given the above two training datasets, we inspect the weights learned in a simple one-layer feed-forward neural network to determine whether, and to what extent, unobserved confounding induces a given learning paradigm to exploit spurious correlations.\nBy way of preface, we have mentioned that causal, in contrast with spurious, solutions to a prediction problem may be defined as the argument solving,\nminimize_f sup_{P∈𝒫} E_{(x,y)∼P} [L(f(x), y)]  (9)\nfor 𝒫 defined as any distribution arising from arbitrary interventions on observed covariates x leading to shifts in their distribution P_x (see sections 3.2 and 3.3 in (Meinshausen, 2018) for a detailed discussion of this result). This objective is a special case of the proposed optimization problem (6); specifically, it is an affine combination (with λ → ∞) of distributions with different shifts in P_x in all observed variables x.\nWe demonstrate this fact empirically in Table 2. In principle, causal solutions are recoverable with the proposed approach because we do observe, during training, environments with shifts in p(X_1, X_2), irrespective of the presence or not of unobserved confounders. We see that this holds approximately for the proposed objective, with estimated coefficients (0.01, 0.95) for (X_1, X_2) close to the true causal coefficients (0, 1).
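To make this recovery experiment concrete, the following NumPy sketch samples the two training environments from the structural model above and contrasts the pooled least-squares (ERM) fit with the gradient behaviour at the causal coefficients. It is a minimal illustration under our own naming, not the authors' released code.

```python
import numpy as np

def sample_env(n, sigma2, rng):
    # Structural model of the introductory example; environments differ
    # only in the variance of the exogenous noise on (X1, X2).
    h = rng.normal(0.0, 1.0, n)                        # H  := E_H
    x2 = -h + rng.normal(0.0, np.sqrt(sigma2), n)      # X2 := -H + E_X2
    y = x2 + 3.0 * h + rng.normal(0.0, 1.0, n)         # Y  := X2 + 3H + E_Y
    x1 = y + x2 + rng.normal(0.0, np.sqrt(sigma2), n)  # X1 := Y + X2 + E_X1
    return np.stack([x1, x2], axis=1), y

rng = np.random.default_rng(0)
envs = [sample_env(100_000, s2, rng) for s2 in (1.0, 2.0)]

# Pooled least squares (ERM) is pulled away from the causal (0, 1).
X = np.concatenate([x for x, _ in envs])
Y = np.concatenate([y for _, y in envs])
print("ERM coefficients:", np.linalg.lstsq(X, Y, rcond=None)[0])

# At the causal solution beta = (0, 1), the moment E[X (Y - X beta)],
# which determines the loss gradient, is nonzero (confounding) but
# identical across the two environments.
beta = np.array([0.0, 1.0])
for x, y in envs:
    print("per-env gradient moment:", (x * (y - x @ beta)[:, None]).mean(axis=0))
```

Analytically, the per-environment moment above equals (4, −3) regardless of σ², which is exactly the cross-environment invariance that the proposed regularizer targets.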
In contrast, ERM returns biased coefficients, and so does IRM.\nThis empirical observation is important because it highlights the fact that enforcing minimum gradients on average (ERM) or simultaneously across environments (the regularization proposed by IRM) is not appropriate for recovering causal coefficients in the presence of unobserved confounders.\nIf, however, no unobserved confounders exist in the system being modelled (H := 0 in the data generating mechanism), our objective and IRM are equivalent in the limit, and estimated parameters approximately coincide with the causal solution. This experiment is given in Table 3.\nA.2 SENSITIVITY TO HYPER-PARAMETERS\nThe robustness guarantees of any particular solution depend on the extent of the extrapolation desired (as a function of λ). For larger values of this parameter we can expect solutions to be robust on a larger set of distributions, spanning empirical risk minimization for λ = 0, to convex combinations of training environments, to arbitrary affine combinations of training environments for increasing λ.\nIn this section, we analysed performance on test data with the exact same data generating mechanism as considered in the introduction of the main body of this paper, as a function of λ. Figure 5 gives our performance results, which empirically verify that the proposed approach interpolates between empirical risk minimization and causality in this case. We can see that for λ approaching zero the solutions converge to ERM, for λ = 2 the solution was equivalent to DRO, and for increasing λ the solutions approximate the causal one in the limit." }, { "heading": "B OTHER EXAMPLES OF UNOBSERVED CONFOUNDING", "text": "Measurement error. The data generating processes described in the main body of this paper, as well as most of machine learning, assume that all nuisance variability enters the causal mechanisms of the data; that is, observed data reflects only causal drivers. If this is not the case, for example because of independent measurement noise that is observed in the data but does not propagate across causal children, regression is known to be inconsistent in general (Carroll et al., 2006) and its bias is analogous to a form of unobserved confounding.\nConsider a simple model for illustration. Suppose (X, Y) are observed subject to measurement noise, X* = X + E_x and Y* = Y + E_y, where the noise terms are not causally related to one another but rather Y = βX + E. Let E_x = β_x H and E_y = β_y H be the structure of the measurement error, independent of X and Y. Then, substituting our observed data (X*, Y*) into the underlying (X, Y) relationship, the observed model is,\nY* = βX* + (β_y − β_x β)H + E,  X* = β_x H + X  (10)\na special case of regression with unobserved confounders H." }, { "heading": "C TECHNICAL RESULTS", "text": "This section provides a more complete discussion of the assumptions and justification statements relating to causality in section 2.2, and the proof of Theorem 1.\nC.1 INVARIANCES IN THE PRESENCE OF UNOBSERVED CONFOUNDING\nIn section 2.2 we justified exploiting a certain invariance of causal coefficients in the inner product of functions of the data X and residuals E, which holds even in the presence of unobserved confounders as long as the interventions that define different environments do not involve the unobserved confounders H.\nHere we show this invariance to hold in the special case of an additive model. The general data generation mechanism is as follows.
Data sources, or different environments, emerge from manipulations in the exogenous E_X, related to X only, in an underlying additive model F with additive functions f_1, f_2, f_3, f_4,\nY := f_1(X) + f_2(H) + E_Y,  X := f_3(X) + f_4(H) + E_X,  H := E_H  (11)\nExogenous variables (E_X, E_Y, E_H) may have arbitrary distributions, but only E_X or E_Y vary across environments. Then it holds that,\nX = (I − f_3)^{−1} f_4(H) + (I − f_3)^{−1} E_X = (I − f_3)^{−1} f_4(E_H) + (I − f_3)^{−1} E_X\nand that,\n∇_β f_1(X) (Y − f_1(X)) = ( ∇_β f_1 (I − f_3)^{−1} f_4(E_H) + ∇_β f_1 (I − f_3)^{−1} E_X ) · ( f_2(E_H) + E_Y )\nwhich is a product of functions involving E_H in one term, E_H and E_Y in another term, E_X and E_H in another term, and E_X and E_Y in the last term. Since (E_X, E_Y, E_H) are mutually independent, taking expectations of the products of functions involving E_H and E_Y, E_X and E_H, and E_X and E_Y equals 0, assuming E f_i(E_j) = 0 for i = 1, ..., 4 and j ∈ {X, Y, H}. Concluding, the expectation of the inner product ∇_β f_1(X)(Y − f_1(X)) depends on neither E_X nor E_Y, and is thus stable across environments that have changing distributions for E_X or E_Y. Now note that functions other than f_1 may have this property as well, i.e., predictors that satisfy this invariance are not necessarily unique and will depend on the differences between available environments. If, however, only one predictor exists that satisfies this invariance, we may say that this predictor is causal. We summarize this claim in the following statement.\nProposition 1 Let Y and X be related by a non-linear additive model with unobserved confounding as in (11). Then,\nE_{P_i} ∇_β f_1(X)(Y − f_1(X)) = E_{P_j} ∇_β f_1(X)(Y − f_1(X))  (12)\nunder the assumption that the distributions P_i and P_j on (X, Y) are given by a data generating mechanism (11) subject to interventions on E_X or E_Y only. Moreover, a function f satisfying the above equality, if unique, is equal to f_1.\nC.2 PROOF OF THEOREM 1\nWe restate the Theorem for convenience.\nTheorem 1 Let {P_e}_{e∈E} be a set of available environments. Further, let the parameter space of β be open and bounded, such that the expected loss function L as a function of β belongs to a Sobolev space. Then, the following inequality holds,\nsup_{α_e ∈ Δ_η} Σ_{e∈E} α_e E_{(x,y)∼P_e} L(f ∘ φ(x), y) ≤ E_{(x,y)∼P_e, e∼E} L(f ∘ φ(x), y) + (1 + nη) · C · || sup_{e∈E} E_{(x,y)∼P_e} ∇_β L(f ∘ φ(x), y) − E_{(x,y)∼P_e, e∼E} ∇_β L(f ∘ φ(x), y) ||_{L2}\nwhere C depends on the domain of β, n := |E| is the number of available environments, and e ∼ E loosely denotes sampling indices with equal probability from E.\nProof. Let Ω denote the parameter space of β. The following derivation shows the claim (writing L for L(f ∘ φ(x), y) throughout):\nsup_{α_e ∈ Δ_η} Σ_{e∈E} α_e E_{(x,y)∼P_e} L\n= (1 + nη) · sup_{e∈E} E_{(x,y)∼P_e} L − η Σ_{e∈E} E_{P_e} L\n= E_{(x,y)∼P_e, e∼E} L + (1 + nη) · sup_{e∈E} E_{(x,y)∼P_e} L − (η + 1/n) Σ_{e∈E} E_{(x,y)∼P_e} L\n= E_{(x,y)∼P_e, e∼E} L + (1 + nη) · ( sup_{e∈E} E_{(x,y)∼P_e} L − E_{(x,y)∼P_e, e∼E} L )\n≤ E_{(x,y)∼P_e, e∼E} L + (1 + nη) · M · || sup_{e∈E} E_{(x,y)∼P_e} L − E_{(x,y)∼P_e, e∼E} L ||_{L2}\nwhere the inequality is given by the property that the evaluation functional is a bounded linear operator in certain Sobolev spaces W, for example with Ω = R^d and the L2 norm. In particular, this means that |f(β)| ≤ M ||f||_{L2} for all f ∈ W.
It then also follows that the above is,\n≤ E_{(x,y)∼P_e, e∼E} L + (1 + nη) · P · M · || sup_{e∈E} E_{(x,y)∼P_e} ∇_β L − E_{(x,y)∼P_e, e∼E} ∇_β L ||_{L2}\nby Poincaré's inequality (with P the Poincaré constant) for Sobolev functions defined on an open, bounded parameter space; see e.g. (Leoni, 2017). The assumption we make here for this last inequality to hold is that the region where the difference in loss functions is near zero is large enough such that the integral of the gradient is also large enough to control the integral of the function. This holds, however, for functions defined on many \"reasonable\" parameter spaces (Lipschitz suffices)." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "This section gives implementation details of DIRM, additional experiments to test sensitivities relating to optimization choices, and a complete description of the data and experiments performed in the main body of this paper.\nD.1 IMPLEMENTATION DETAILS\nThe regularizer in DIRM's objective in equation (6) controls the regularity, or variation, of the prediction function and encourages learning a representation φ that results in a prediction function with similar derivatives in all training domains. The L2 norm integrates out the influence of β in the regularizer, and thus most of the optimization involves φ, though β still plays a role in the first term of the objective.\nIn all our experiments, f : R^h → R as well as φ : R^d → R^h are implemented as fully connected neural networks (φ with optional hidden layers). β can thus be interpreted as the weights and biases of f. The L2 norm must be approximated in practice, which we do by evaluating the vector norm of the derivative of f with respect to β on a batch of training examples from each environment. The variance of the computed norms between environments is a proxy for the maximum deviation between environments with a smoother gradient vector field. Each step of the optimization then alternates between an update on φ and an update on f, as detailed in the algorithm below.\nDIRM is sensitive to initialization and to the choice of hyperparameters, specifically its optimization schedule. In our experiments, we found the best performance by increasing the relative weight of the penalty term λ after a fixed number of iterations (and similar implementations are used for IRM and REx, which suffer from similar challenges). This, we believe, could be a significant limitation for its use in practice, since this choice must be made a priori. We investigated the sensitivity of DIRM to this optimization schedule in Table 4, which shows test accuracy as a function of the iteration at which the penalty term weight λ is increased.\nChoosing this number accurately is important for generalization performance. If λ is increased too early, different initialization values (and the complex loss landscape) lead to different solutions with unreliable performance and a large variance. This happens for all methods. An initial number of iterations minimizing the loss in-sample improves estimates for all methods, which then converge to solutions that exhibit lower variance.\nAlgorithm 1 DIRM\nInput: datasets D_1, ..., D_E in E different environments, parameter λ, batch size K\nInitialize: neural network model parameters φ, β\nwhile convergence criteria not satisfied do\n  for e = 1, ..., E do\n    Estimate the loss L_e(φ, β) empirically using a batch of K examples from D_e.\n    Estimate the derivatives ∇_β L_e(φ, β) empirically using a batch of K examples from D_e.
  end for\n  Update β by stochastic gradient descent with,\n  ∇_β ( (1/E) Σ_{e=1}^{E} L_e(φ, β) )\n  Update φ by stochastic gradient descent with,\n  ∇_φ ( (1/E) Σ_{e=1}^{E} L_e(φ, β) + λ · Var(||∇_β L_1(φ, β)||₂², ..., ||∇_β L_E(φ, β)||₂²) )\nend while\nD.1.1 APPROXIMATION OF THE L2 NORM IN PRACTICE\nThe bound given in Theorem 1 quantifies the discrepancy between function derivatives using the L2 norm, defined as an integral over possible parameter values β. For neural networks, computation of the L2 norm is largely intractable; specifically, for networks of depth greater than or equal to 4, it is an NP-hard problem (see Proposition 1 in (Triki et al., 2017)). Some approximation is thus unavoidable. One option is to recognise the L2 norm as an expectation over functional evaluations, ||f||_{L2} = E_{x∼U(Θ)} [ ||f(x)||₂² ]^{1/2}, for a continuous function f taking values x sampled uniformly from its domain Θ. Empirical means are tractable, yet they induce a much higher computational burden, as they must be computed in every step of the optimization. Our approach is to take this approximation to its limit, making a single function evaluation at each step of the optimization using the current estimate β, as written in Algorithm 1.\nThis approximation loosens the connection between the bound given in Theorem 1 and the proposed algorithm. It remains justified, however, from a conceptual perspective, as the objective of controlling an L2 type of norm is to encourage the regularizer function towards 0, and thus the values of the regularizer (which we do explicitly). For empirical comparisons with the empirical mean approach, we implemented empirical means using all combinations of parameter values chosen from a grid of 5 parameter values around the current estimate β, {0.25β, 0.5β, β, 2β, 4β}. Table 5 shows similar performance across the real data experiments considered in the main body of this paper. A single evaluation is in practice enough to monitor the invariance of representations to environment-specific loss derivatives.\nD.2 DATA DETAILS\nD.2.1 X-RAY DATA\nWe create training environments with different proportions of X-rays from our two hospital sources to induce a correlation between the hospital (and its specific data collection procedure) and the pneumonia label. The objective is to encourage learning principles to exploit a spurious correlation: data collection mechanisms should not be related to the probability of being diagnosed with pneumonia. The reason for creating two training data sets with slightly different spurious correlation patterns is to nevertheless leave a statistical footprint in the distributions with which to disentangle stable (likely causal) and unstable (likely spurious) correlations. In each of the training and testing datasets we ensured positive and negative labels remained balanced. The training datasets contained 2002 samples each and the testing dataset contained 1144 samples.\nAll learning paradigms trained a convolutional neural network, 2 layers deep, with each layer consisting of a convolution (kernel size of 3 and a stride of 1). All predictions were made from the deepest layer of the network. The number of input channels was 64, doubled for each subsequent layer, and dropout was applied after each layer. We optimize the binary cross-entropy loss using Adam (learning rate 0.001) without further regularization on parameters, and use Xavier initialization. While learning with IRM and the proposed approach, the respective penalty λ = 1 is added to the loss after 5 epochs of learning with λ = 0.
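To connect Algorithm 1 and the single-evaluation approximation of D.1.1 with the training protocols described here, a minimal PyTorch sketch of one DIRM iteration follows. Module and variable names are illustrative assumptions on our part, and the released implementation may differ in details such as the loss or optimizer configuration.

```python
import torch

def dirm_step(phi, head, env_batches, opt_phi, opt_beta, lam):
    """One DIRM iteration (Algorithm 1). phi: representation network;
    head: predictor f whose parameters play the role of beta;
    env_batches: one (x, y) batch per training environment."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    beta = list(head.parameters())
    losses, sq_norms = [], []
    for x, y in env_batches:
        loss = loss_fn(head(phi(x)).squeeze(-1), y.float())
        # Single evaluation of ||d loss_e / d beta||_2^2 at the current beta
        # (the approximation of the L2 norm discussed in D.1.1).
        grads = torch.autograd.grad(loss, beta, create_graph=True)
        sq_norms.append(sum((g ** 2).sum() for g in grads))
        losses.append(loss)
    mean_loss = torch.stack(losses).mean()
    penalty = torch.stack(sq_norms).var()  # variance across environments

    # beta is updated on the mean loss only; phi also receives the penalty.
    opt_beta.zero_grad()
    opt_phi.zero_grad()
    for p, g in zip(beta, torch.autograd.grad(mean_loss, beta,
                                              retain_graph=True)):
        p.grad = g.detach()
    for p, g in zip(phi.parameters(),
                    torch.autograd.grad(mean_loss + lam * penalty,
                                        list(phi.parameters()))):
        p.grad = g.detach()
    opt_beta.step()
    opt_phi.step()
    return float(mean_loss), float(penalty)
```

The lam argument can be held at 0 for the warm-up epochs and then set to 1, matching the schedules described for the experiments above.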
Experiments are run for a maximum of 20 epochs with early stopping based on validation performance. All results are averaged over 10 trials with different random splits of the data, and the reported uncertainty intervals are standard deviations of these 10 performance results.\nD.2.2 PARKINSON'S DISEASE SPEECH DATA\nThe data includes a total of 26 features recorded on each sample of speech, and set training and testing splits which we use in our experiments. For each patient, 26 different voice samples including sustained vowels, numbers, words, and short sentences were recorded, which we considered to be different but related data sources. We created three training environments by concatenating features from three number recordings, concatenating features from three word recordings, and concatenating features from three sentences, for a total of 120 samples in each of the three training environments. The available testing split contained 168 recordings of vowels, which we expect to differ from the training environments because these are different patients and the recordings do not contain numbers or words. Positive and negative samples were balanced in both training and testing environments.\nOn this data, for all learning paradigms we train a multi-layer perceptron with two hidden layers of size 64 with tanh activations and dropout (p = 0.5) after each layer. As in the image experiments, we optimize the binary cross-entropy loss using Adam (learning rate 0.001), L2 regularization on parameters, and Xavier initialization. While learning with IRM and the proposed approach, the respective penalty λ = 1 is added to the loss after 200 epochs of learning with λ = 0 to ensure stable optimization. Experiments are run for a maximum of 1000 epochs with early stopping based on the validation performance. All results are averaged over 10 trials with different random seeds of our algorithm. This is to give a sense of algorithm stability rather than performance stability.\nD.2.3 MAGGIC ELECTRONIC HEALTH RECORDS\nMAGGIC stands for Meta-Analysis Global Group in Chronic Heart Failure. The MAGGIC meta-analysis includes individual data on 39,372 patients with heart failure (both reduced and preserved left-ventricular ejection fraction), from 30 cohort studies, six of which were clinical trials. 40.2% of patients died during a median follow-up of 2.5 years. For our purposes, we removed patients that were censored or lost to follow-up, to ensure well-defined outcomes 3 years after being discharged from their respective hospitals. A total of 33 variables describe each patient, including demographic variables: age, gender, race, etc.; biomarkers: blood pressure, haemoglobin levels, smoking status, ejection fraction, etc.; and details of their medical history: diabetes, stroke, angina, etc.\nTo curate our training and testing datasets, we proceeded as follows. On all patients followed up over 3 years, we estimated feature influence on survival status after three years. A number of variables were significantly associated with survival, out of which we chose age, also found to be correlated with BMI and a number of medical history features, as a confounder for the effect of these variables on survival. We used the following criteria to select studies: having more than 500 patients enrolled, and balanced death rates (circa 50%). 5 studies fitted these constraints: 'DIAMO', 'ECHOS', 'HOLA', 'Richa', 'Tribo'. Each was chosen in turn as a target environment, with models trained on the other 4 training environments.\nFeature reproducibility experiments.
A natural objective for consistency in health care, and for reproducing experiments and their results in different scenarios, is to find relevant features that are not specific to an individual medical study but can also be found (replicated) in other studies with different patients. Heterogeneous patients and studies, along with different national guidelines and standards of care, make this challenging. In our experiments we compared the reproducibility of parameter estimates for models trained using Empirical Risk Minimization (ERM) and DIRM. We chose networks with a single layer with logistic activation and focused on the estimation of parameters to understand the variability in training among different data sources. Naturally, feature importance measured by parameter magnitudes makes sense only after normalization of the covariates to the same (empirical) variance (equal to 1) in each study separately. After this preprocessing step, for both ERM and the proposed approach we trained separate networks on 100 random pairs of studies (each pair concatenated for ERM) and returned the top 10 significant features (by the magnitude of parameters). Over all sets of significant parameters we then identified how many intersected across a fixed number of the 100 runs.\nThe same architecture and hyperparameters as in the Parkinson's disease speech data experiments were used for the MAGGIC data, except that we increased the maximum training epochs to 5000." } ]
2020
null
SP:dbb0ed3b53fc0905982b51853e83f5cdbaf3b535
[ "The paper describes an MLP architectures for problems in which the features do not have a known structure (eg, tabular data). A \"differentiable routing matrix\" partitions the data into K blocks. Then, standard MLPs are applied to each block and the results are recursively aggregated by moving forward in the model.", "This paper studies supervised classification problems where features are unstructured. For these problems, the authors propose a new neural network architecture that first reorganize the features into groups, then builds feed-forward networks on top each group, and finally aggregate the hidden nodes of each group to produce the final output. Empirical and ablation studies are conducted to show the performance of this approach. " ]
Despite the success of deep learning in domains such as image, voice, and graphs, there has been little progress in deep representation learning for domains without a known structure between features. For instance, a tabular dataset of different demographic and clinical factors where the feature interactions are not given as a prior. In this paper, we propose Group-Connected Multilayer Perceptron (GMLP) networks to enable deep representation learning in these domains. GMLP is based on the idea of learning expressive feature combinations (groups) and exploiting them to reduce the network complexity by defining local group-wise operations. During the training phase, GMLP learns a sparse feature grouping matrix using a temperature-annealed softmax with an added entropy loss term to encourage sparsity. Furthermore, an architecture is suggested which resembles binary trees, where group-wise operations are followed by pooling operations to combine information, reducing the number of groups as the network grows in depth. To evaluate the proposed method, we conducted experiments on five different real-world datasets covering various application areas. Additionally, we provide visualizations on MNIST and synthesized data. According to the results, GMLP is able to successfully learn and exploit expressive feature combinations and achieve state-of-the-art classification performance on different datasets.
[]
[ { "authors": [ "Davide Anguita", "Alessandro Ghio", "Luca Oneto", "Xavier Parra", "Jorge Luis Reyes-Ortiz" ], "title": "A public domain dataset for human activity recognition using smartphones", "venue": "Esann,", "year": 2013 }, { "authors": [ "Sergul Aydore", "Bertrand Thirion", "Gael Varoquaux" ], "title": "Feature grouping as a stochastic regularizer for high-dimensional structured data", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alan Baddeley" ], "title": "The magical number seven: Still magic after all these years", "venue": null, "year": 1994 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Peter Clifford" ], "title": "Markov random fields in statistics. Disorder in physical systems: A volume", "venue": "in honour of John M. Hammersley,", "year": 1990 }, { "authors": [ "Nelson Cowan" ], "title": "The magical number 4 in short-term memory: A reconsideration of mental storage capacity", "venue": "Behavioral and brain sciences,", "year": 2001 }, { "authors": [ "Adnan Darwiche" ], "title": "Modeling and reasoning with Bayesian networks", "venue": "Cambridge university press,", "year": 2009 }, { "authors": [ "Sourya Dey", "Kuan-Wen Huang", "Peter A Beerel", "Keith M Chugg" ], "title": "Characterizing sparse connectivity patterns in neural networks", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Jerome H Friedman" ], "title": "Greedy function approximation: a gradient boosting machine", "venue": "Annals of statistics,", "year": 2001 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Andrés Hoyos-Idrobo", "Gaël Varoquaux", "Jonas Kahn", "Bertrand Thirion" ], "title": "Recursive nearest agglomeration (rena): fast clustering for approximation of structured signals", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Mohammad Kachuee", "Sajad Darabi", "Babak Moatamed", "Majid Sarrafzadeh" ], "title": "Dynamic feature acquisition using denoising autoencoders", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2018 }, { "authors": [ "Mohammad Kachuee", "Kimmo Karkkainen", "Orpaz Goldstein", "Davina Zamanzadeh", "Majid Sarrafzadeh" ], "title": "Nutrition and health data for cost-sensitive learning", "venue": null, "year": 1902 }, { "authors": [ "Guolin Ke", "Jia Zhang", "Zhenhui Xu", "Jiang Bian", "Tie-Yan Liu" ], "title": "Tabnn: A universal neural network solution for tabular data", "venue": null, "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max 
Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "AT&T Labs [Online]. Available: http://yann. lecun. com/exdb/mnist,", "year": 2010 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "George B Moody", "Roger G Mark" ], "title": "The impact of the mit-bih arrhythmia database", "venue": "IEEE Engineering in Medicine and Biology Magazine,", "year": 2001 }, { "authors": [ "Richard E Neapolitan" ], "title": "Learning bayesian networks, volume", "venue": null, "year": 2004 }, { "authors": [ "Oliver Richter", "Roger Wattenhofer" ], "title": "Treeconnect: A sparse alternative to fully connected layers", "venue": "IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI),", "year": 2018 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Enzo Tartaglione", "Skjalg Lepsøy", "Attilio Fiandrotti", "Gianluca Francini" ], "title": "Learning sparse neural networks via sensitivity-driven regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jihun Yun", "Peng Zheng", "Eunho Yang", "Aurelie Lozano", "Aleksandr Aravkin" ], "title": "Trimming the l1 regularizer: Statistical analysis, optimization, and applications to deep learning", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have been quite successful across various machine learning tasks. However, this advancement has been mostly limited to certain domains. For example in image and voice data, one can leverage domain properties such as location invariance, scale invariance, coherence, etc. via using convolutional layers (Goodfellow et al., 2016). Alternatively, for graph data, graph convolutional networks were suggested to leverage adjacency patterns present in datasets structured as a graph (Kipf & Welling, 2016; Xu et al., 2019).\nHowever, there has been little progress in learning deep representations for datasets that do not follow a particular known structure in the feature domain. Take for instance the case of a simple tabular dataset for disease diagnosis. Such a dataset may consist of features from different categories such as demographics (e.g., age, gender, income, etc.), examinations (e.g., blood pressure, lab results, etc.), and other clinical conditions. In this scenario, the lack of any known structure between features to be used as a prior would lead to the use of a fully-connected multilayer perceptron network (MLP). Nonetheless, it has been known in the literature that MLP architectures, due to their huge complexity, do not usually admit efficient training and generalization for networks of more than a few layers.\nIn this paper, we propose Group-Connected Multiplayer Perceptron (GMLP) networks. The main idea behind GMLP is to learn and leverage expressive feature subsets, henceforth referred to as feature groups. A feature group is defined as a subset of features that provides a meaningful representation or high-level concept that would help the downstream task. For instance, in the disease diagnosis example, the combination of a certain blood factor and age might be the indicator of a higher level clinical condition which would help the final classification task. Furthermore, GMLP leverages feature groups limiting network connections to local group-wise connections and builds a feature hierarchy via merging groups as the network grows in depth. GMLP can be seen as an architecture that learns expressive feature combinations and leverages them via group-wise operations.\nThe main contributions of this paper are as follows: (i) proposing a method for end-to-end learning of expressive feature combinations, (ii) suggesting a network architecture to utilize feature groups and\nlocal connections to build deep representations, (iii) conducting extensive experiments demonstrating the effectiveness of GMLP as well as visualizations and ablation studies for better understanding of the suggested architecture.\nWe evaluated the proposed method on five different real-world datasets in various application domains and demonstrated the effectiveness of GMLP compared to state-of-the-art methods in the literature. Furthermore, we conducted ablation studies and comparisons to study different architectural and training factors as well as visualizations on MNIST and synthesized data. To help to reproduce the results and encouraging future studies on group-connected architectures, we made the source code related to this paper available online 1." }, { "heading": "2 RELATED WORK", "text": "Fully-connected MLPs are the most widely-used neural models for datasets in which no prior assumption is made on the relationship between features. 
However, due to the huge complexity of fully-connected layers, MLPs are prone to overfitting, resulting in shallow architectures limited to a few layers in depth (Goodfellow et al., 2016). Various techniques have been suggested to improve the training of these models, including regularization techniques such as L-1/L-2 regularization, dropout, etc., and normalization techniques such as layer normalization, weight normalization, batch normalization, etc. (Srivastava et al., 2014; Ba et al., 2016; Salimans & Kingma, 2016; Ioffe & Szegedy, 2015). For instance, self-normalizing neural networks (SNNs) have recently been suggested as a state-of-the-art normalization method that prevents vanishing or exploding gradients, which helps in training feed-forward networks of greater depth (Klambauer et al., 2017).\nFrom the architectural perspective, there has been great attention toward networks consisting of sparse connections between layers rather than dense fully-connected layers (Dey et al., 2018). Sparsely connected neural networks are usually trained based either on a sparse prior structure over the network architecture (Richter & Wattenhofer, 2018) or on pruning a fully-connected network to a sparse network (Yun et al., 2019; Tartaglione et al., 2018; Mocanu et al., 2018). However, it should be noted that the main objective of most of the sparse neural network literature has been improving memory and compute requirements while maintaining accuracies competitive with MLPs.\nAs a parallel line of research, the idea of using expressive feature combinations or groups has been suggested as a prior over the feature domain. Perhaps the most successful and widespread use of this idea is in creating random forest models, in which different trees are trained based on different feature subsets in order to deal with high-dimensional and high-variance data (Breiman, 2001). More recently, feature grouping was suggested by Aydore et al. (2019) as a statistical regularization technique to learn from datasets with a large feature size and a small number of training samples. They do the forward network computation by projecting input features using samples taken from a bank of feature grouping matrices, reducing the input layer complexity and regularizing the model. In another recent study, Ke et al. (2018) used expressive feature combinations to learn from tabular datasets using a recursive encoder with a shared embedding network. They suggest a recursive architecture in which more important feature groups have a more direct impact on the final prediction.\nWhile promising results have been reported using these methods, feature grouping has mostly been considered as a preprocessing step. For instance, Aydore et al. (2019) use recursive nearest agglomeration (ReNA) (Hoyos-Idrobo et al., 2018) clustering to determine feature groups prior to the analysis. Alternatively, Ke et al. (2018) defined feature groups based on a pre-trained gradient boosting decision tree (GBDT) (Friedman, 2001). Feature grouping as a preprocessing step not only increases the complexity and raises practical considerations, but also limits the optimality of the selected features in subsequent analysis. In this study, we propose an end-to-end solution to learn expressive feature groups.
Moreover, we introduce a network architecture to exploit interrelations within the feature groups to reduce the network complexity and to train deeper representations.\n1We plan to include a link to the source code and GitHub page related to this paper in the camera-ready version." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 ARCHITECTURE OVERVIEW", "text": "In this paper, we propose GMLP, which intuitively can be broken down into three stages: (i) selecting expressive feature groups, (ii) learning dynamics within each group individually, and (iii) merging information between groups as the network grows in depth (see Figure 1). In this architecture, expressive groups are jointly selected during the training phase. Furthermore, GMLP leverages feature groups and uses local group-wise weight layers to significantly reduce the number of parameters. While the suggested idea can be materialized as different architectures, in the current study we suggest organizing the network as architectures resembling a binary tree spanning from leaves (i.e., features) to a certain abstraction depth closer to the root2. As the network grows deeper, after each local group-wise weight layer, half of the groups are merged using pooling operations, effectively reducing the width of the network while increasing the receptive field. At the last layer, all features within all groups are concatenated into a dense feature vector fed to the output layer." }, { "heading": "3.2 NOTATION", "text": "We consider the generic problem of supervised classification based on a dataset of feature and target pairs, D: (x_{1:N}, y_{1:N}), where x_i ∈ R^d, y_i ∈ {1, ..., C}, and N is the number of dataset samples. Furthermore, we define the group size, m, as the number of neurons or elements within each group, and the group count, k, as the number of selected groups, which are essentially subsets of input features. Also, L is used to refer to the total depth of a network. We use z^l_i ∈ R^m to refer to the activation values of group i in layer l. In this paper, we define all vectors as column vectors." }, { "heading": "3.3 NETWORK LAYERS", "text": "In this section, we present the formal definition of the different GMLP network layers. The very first layer of the network, Group-Select, is responsible for organizing features into k groups of size m each. A routing matrix, Ψ, is used for connecting each neuron within each group to exactly one feature in the feature set:\nz^0_{1:k} = Ψx,  (1)\n2Please note that, in this paper, tree structures are considered to grow from leaves to the root. In other words, in this context, limiting the depth is synonymous with considering the tree portion spanning from a certain depth to the leaf nodes.\nwhere Ψ ∈ {0, 1}^{km×d} is a sparse matrix determining the features that are present in each group. As we are interested in jointly learning Ψ during the training phase, we use the following continuous relaxation:\nΨ_{i,j} ≈ exp(ψ_{i,j}/τ) / Σ_{j'=1}^{d} exp(ψ_{i,j'}/τ).  (2)\nIn this equation, ψ is a real-valued matrix reparameterizing the routing matrix through a softmax operation with temperature τ. The lower the temperature, the more (2) converges to the desired discrete and sparse binary routing matrix. Note that, in the continuous relaxation, the matrix ψ can be optimized via the backpropagation of gradients from classification loss terms.
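As a concrete reference for Eqs. (1)-(2), a minimal PyTorch version of the Group-Select layer might look as follows; this is our own sketch rather than the authors' released implementation (the temperature schedule is discussed next).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupSelect(nn.Module):
    """Soft routing of d input features into k groups of m neurons each,
    implementing the continuous relaxation of Eqs. (1)-(2)."""
    def __init__(self, d, k, m):
        super().__init__()
        # One row of psi per group neuron; a row-wise softmax turns each
        # row into a distribution over the d input features.
        self.psi = nn.Parameter(torch.empty(k * m, d))
        nn.init.xavier_uniform_(self.psi)  # fan-in d, fan-out k*m

    def forward(self, x, tau):
        # As tau -> 0, each row of the routing matrix approaches a one-hot
        # selection of a single input feature.
        routing = F.softmax(self.psi / tau, dim=1)  # (k*m, d)
        return x @ routing.t()                      # (batch, k*m)
```

A subsequent Group-FC layer can then be applied per group by reshaping the output to (batch, k, m) and using a batched weight tensor of shape (k, m, m), matching the per-group weight matrices of Eq. (3).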
In the next section, we provide further detail on temperature annealing schedules as well as other techniques to enhance the Ψ approximation.\nBased on the selected groups, we suggest local fully-connected weight layers for each group: Group-FC. The goal of Group-FC is to extract higher-level representations using the selected expressive feature subsets. This operation is usually followed by non-linearity functions (e.g., ReLU), normalization operations (e.g., Batch Norm), and dropout. Formally, Group-FC can be defined as:\nz^{l+1}_i = f(W^l_i z^l_i + b^l_i),  (3)\nwhere W^l_i ∈ R^{m×m} and b^l_i ∈ R^m are the weight matrix and bias vector applied on group i at layer l. Here, f represents the other subsequent operations such as non-linearity, normalization, and dropout.\nLastly, Group-Pool is defined as an operation which merges the representations of two groups into a single group, reducing the network width by half while increasing the effective receptive field:\nz^{l+1}_i = pool(z^l_i, z^l_{i+k/2^{l+1}}),  (4)\nwhere z^l_i and z^l_{i+k/2^{l+1}} are the ith groups from the first and second halves, respectively, and pool is a pooling function from R^{2m} to R^m. In this study, we explore different variants of pooling functions such as max pooling, average pooling, or using linear weight layers as transformations from R^{2m} to R^m. Please note that while we use a similar terminology as pooling in convolutional networks, the pooling operation explained here is not applied location-wise; instead, it is applied feature-wise, between different group pairs.\nThe values of m and k are closely related to the number and order of feature interactions for a certain task. Using proper m and k values enables us to reduce the parameter space while maintaining the model complexity required to solve the task. However, finding the ideal m and k directly from a given dataset is a very challenging problem. In this work, we treat m and k as hyperparameters to be found by a hyperparameter search." }, { "heading": "3.4 TRAINING", "text": "We define the objective function to be used for the end-to-end training of the weights as well as the routing matrix as:\nL = −(1/N) Σ_i Σ_c y_{i,c} log([F_θ(x_i)]_c) + λ H(ψ) + α Σ_{ω∈θ} ||ω||₂².  (5)\nIn this objective function, the first term is the standard cross-entropy classification loss, where F_θ denotes the GMLP network as a function with parameters θ, and N is the number of training samples used. The second term is an entropy loss over the distribution of the routing matrix, weighted by the hyperparameter λ:\nH(ψ) = −(1/d) Σ_{j=1}^{d} Σ_{i=1}^{km} ( exp(ψ_{i,j}) / Σ_{j'=1}^{d} exp(ψ_{i,j'}) ) log( exp(ψ_{i,j}) / Σ_{j'=1}^{d} exp(ψ_{i,j'}) ).  (6)\nH(ψ) minimizes the entropy corresponding to the distribution of ψ, regardless of the temperature used for the Ψ approximation. Accordingly, λ can be viewed as a hyperparameter and as an additional means of encouraging sparse Ψ matrices. The last term in (5) is an L-2 regularization term with hyperparameter α to control the magnitude of the parameters in the layer weights and in ψ. Note that without the L-2 regularization term, ψ elements may keep increasing during the optimization loop, since ψ only appears in normalized form in the objective function of (5).\nWe use the Adam (Kingma & Ba, 2014) optimization algorithm, starting from the default 0.001 learning rate and reducing the learning rate by a factor of 5 as the validation accuracy stops improving. Regarding the temperature annealing, during training the temperature is exponentially decayed from 1.0 to 0.01.
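A sketch of the loss in Eq. (5), with the entropy term of Eq. (6) and the exponential temperature decay just described, is given below; the exact variable names and the per-epoch decay granularity are our assumptions.

```python
import torch
import torch.nn.functional as F

def routing_entropy(psi):
    """H(psi) of Eq. (6): summed entropy of the row-wise (temperature-1)
    softmax distributions of psi, normalized by the feature count d."""
    p = F.softmax(psi, dim=1)                      # (k*m, d)
    return -(p * torch.log(p + 1e-12)).sum() / psi.size(1)

def gmlp_loss(logits, targets, psi, weights, lam, alpha):
    """Eq. (5): cross-entropy + lambda * H(psi) + alpha * L2 on parameters."""
    ce = F.cross_entropy(logits, targets)
    l2 = sum((w ** 2).sum() for w in weights)
    return ce + lam * routing_entropy(psi) + alpha * l2

def temperature(epoch, total_epochs, tau0=1.0, tau1=0.01):
    """Exponential decay of the softmax temperature from tau0 to tau1."""
    return tau0 * (tau1 / tau0) ** (epoch / max(total_epochs - 1, 1))
```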
In order to initialize the Group-FC weights, we used Xavier initialization (Glorot & Bengio, 2010) with m for both the fan-in and fan-out values. Similarly, the ψ matrix is initialized by setting the fan-in equal to d and the fan-out to km.\nFurther detail on the architectures and hyperparameters used for each specific experiment, as well as details on the software implementation, are provided as appendices to this paper." }, { "heading": "3.5 ANALYSIS", "text": "The computational complexity of GMLP at prediction time can be written as (for simplicity, ignoring bias and pooling terms):\nkm + km² + km²/2 + km²/4 + ... + km²/2^{L−1} + C·km/2^{L−1}.  (7)\nIn this series, the first term, km, is the work required to organize features into groups. The subsequent terms, except the last term, represent the computational cost of the local fully-connected operations at each layer. The last term is the complexity of the output layer transformation from the concatenated features to the number of classes. Therefore, the computational complexity of GMLP at prediction time is O(km² + Ckm/2^{L−1}). In comparison, the computational complexity of an MLP with a similar network width would be:\nkmd + k²m² + k²m²/2 + k²m²/4 + ... + k²m²/2^{L−1} + C·km/2^{L−1},  (8)\nwhere the first term is the work required for the first network layer from d to km neurons, the second term corresponds to a hidden layer of size km, and so forth. The last term is the complexity of the output layer, similar to the case of GMLP. The overall work required from this equation is of O(kmd + k²m² + Ckm/2^{L−1}) complexity. This is substantially higher than GMLP for typical k, d, and C values.\nAdditionally, the density of the Group-FC layer connections can be calculated as km²/(k²m²) = 1/k, which is very small for the reasonably large k values used in our experiments. Also, assuming pooling operations in every other layer, the receptive field size, or the maximum number of features impacting a neuron at layer l, can be written as 2^{l−1}m. For instance, a neuron in the first layer of the network is only connected to m features, a neuron in the second layer is connected to two groups or 2m features, and so forth." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "The proposed method is evaluated on five different real-world datasets, covering various domains and applications: permutation invariant CIFAR-10 (Krizhevsky et al., 2009), human activity recognition (HAPT) (Anguita et al., 2013), diabetes classification (Kachuee et al., 2019), UCI Landsat (Dua & Graff, 2017), and MIT-BIH arrhythmia classification (Moody & Mark, 2001) datasets. Additionally, we use MNIST (LeCun et al., 2010) and a synthesized dataset to provide further insight into the operation of GMLP (see Section 4.4). Table 1 presents a summary of the datasets used in this study. Regarding the CIFAR-10 dataset, we permute the image pixels to discard pixel coordinates in our experiments. Note that the permutation does not change across samples; it is merely a fixed random ordering used to remove pixel coordinates for each experiment. For all datasets, basic statistical normalization with µ = 0 and σ = 1 is used to normalize features as a preprocessing step. The only exception is CIFAR-10, for which we used the standard channel-wise normalization and standard data augmentation (i.e., random crops and random horizontal flips). The standard test and train data splits were used as dictated by the dataset publishers. In cases where separate test and train sets are not provided,
In cases that the separated sets are not provided,\ntest and train subsets are created by randomly splitting samples to 20% for test and the rest for training/validation.\nWe compare the performance of the proposed method with recent related work including SelfNormalizing Neural Networks (SNN) (Klambauer et al., 2017), Sparse Evolutionary Training (SET) (Mocanu et al., 2018)3, Feature Grouping as a Stochastic Regularizer (in this paper, denoted as FGR) (Aydore et al., 2019)4 as well as the basic dropout regularized and batch normalized MLPs. In order to ensure a fair comparison, we adapted source codes provided by other work to be compatible with our data loader and preprocessing modules.\nFurthermore, for each method, we conducted an extensive hyperparameter search using Microsoft Neural Network Intelligence (NNI) toolkit5 and the Tree-structured Parzen Estimator (TPE) tuner (Bergstra et al., 2011) covering different architectural and learning hyperparameters for each case. More detail on hyperparameter search spaces and specific architectures used in this paper is provided in Appendix A and Appendix B. We run each case using the best hyperparameter configuration eight times and report mean and standard deviation values." }, { "heading": "4.2 RESULTS", "text": "Table 2 presents a comparison between the proposed method (GMLP) and 4 other baselines: MLP, SNN (Klambauer et al., 2017), SET (Mocanu et al., 2018), and FGR (Aydore et al., 2019). As it can be seen from this comparison, GMLP outperforms other work, achieving state-of-the-art classification accuracies. Concerning the CIFAR-10 results, to the best of our knowledge, GMLP achieves a new state-of-the-art performance on permutation invariant CIFAR-10 augmented using the standard data augmentation. We believe that leveraging expressive feature groups enables GMLP to consistently perform better across different datasets.\nTo compare model complexity and performance we conduct an experiment by changing the number of model parameters and reporting the resulting test accuracies. Here, we reduce the number of parameters by reducing the width of each network; i.e. reducing the number of groups and hidden neurons for GMLP and MLP, respectively. Figure 2 shows accuracy versus the number of parameters for the GMLP and MLP baseline on CIFAR-10 and MIT-BIH datasets. Based on this figure, GMLP is able to achieve higher accuracies using significantly less number of parameters. It is consistent with the complexity analysis provided in Section 3.5. Note that in this comparison, we consider the number of parameters involved at the prediction time." }, { "heading": "4.3 ABLATION STUDY", "text": "objective. From this figure, it can be seen that excluding both techniques leads to a significantly lower performance. However, using any of the two techniques leads to relatively similar high accuracies. It is consistent with the intuition that the functionality of these techniques is to encourage learning sparse routing matrices, either using softmax temperatures or entropy regularization to achieve this. In this paper, in order to ensure sparse and low complexity routing matrices, we use both techniques simultaneously as in case (i).\nFigure 4 shows a comparison between GMLP models trained on CIFAR-10 using different pooling types: (i) linear transformation, (ii) max pooling, and (iii) average pooling. 
As can be seen from this comparison, while there are slight differences in the convergence speed of the different pooling types, all of them achieve relatively similar accuracies. In our experiments, we decided to use max pooling and average pooling, as they provide reasonable results without the need to introduce the additional parameters required for the linear pooling method.\nFigure 5 shows learning curves for training CIFAR-10 GMLP models using different group sizes. As can be seen from this figure, using very small group sizes would cause a reduction in the final accuracy. At the other extreme, the improvement achieved using larger values is negligible for m values of more than 16. Finally, Figure 6 shows a comparison between learning curves for different numbers of groups. Using very small k values results in a significant reduction in performance. However, the rate of performance gains for using more groups is very small for k of more than 1536. Note that the number of model parameters and compute scale linearly with k and quadratically with m (see Section 3.5).\nFigure 4: Ablation study demonstrating the impact of different pooling functions.\nFigure 6: Ablation study on the impact of using different numbers of groups (k); accuracy (%) vs. training epochs for k ∈ {96, 192, 384, 768, 1536, 3072}. For this experiment, we used m=16." }, { "heading": "4.4 EXPERIMENTS ON MNIST AND SYNTHESIZED DATA", "text": "The MNIST dataset (LeCun et al., 2010) is used to visually inspect the performance of the Group-Select layer. Figure 7 shows a heat-map of how frequently each pixel is selected across all feature groups for: (a) original MNIST samples, (b) MNIST samples where the lower half is replaced by Gaussian noise. From Figure 7a, it can be seen that most groups are selecting pixels within the center of the frame, effectively discarding margin pixels. This is consistent with other work showing the importance of different locations for MNIST images (Kachuee et al., 2018). Apart from this, in Figure 7b, a version of the MNIST dataset is used in which half of the frame does not provide any useful information for the downstream classification task. From this figure, GMLP is not selecting any features to be used from the lower region.\nIn order to show the effectiveness of GMLP, we synthesized a dataset which has intrinsic and known expressive feature groups. Specifically, we used a simple Bayesian network as depicted in Figure 8. This network consists of six binary features, A to F, interacting with each other as specified by the graph edges, which determine the distribution of the target node, J. The graph and conditionals are designed such that each of the nodes in the second level takes the XOR value of its parents with 99% probability. The target node, J, is essentially one with a high probability if at least two of the second-level nodes are one. We synthesized the dataset by sampling 6,400 samples from the network (1,280 samples for test and the rest for training/evaluation). On this dataset, we trained a very simple GMLP consisting of four groups of size two, one group-wise fully-connected layer, and an output layer. Figure 9 shows the features selected for each group after the training phase (i.e., the Ψ matrix).\nFrom this figure, the Group-Select layer successfully learns to detect the feature pairs that are interacting, enabling the Group-FC layers to decode the non-linear XOR relations."
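The synthesized dataset can be reproduced, up to details the paper leaves open, with a short sampler. In the sketch below, the parent pairing (A,B), (C,D), (E,F) and the exact conditional probabilities for J are our assumptions, while the 99% XOR fidelity, the two-active-node threshold, and the sample counts follow the text.

```python
import numpy as np

def sample_xor_bayes_net(n, rng):
    """Sample from a Bayesian network of the kind described above."""
    feats = rng.integers(0, 2, size=(n, 6))   # A..F, uniform binary (assumed)
    pairs = [(0, 1), (2, 3), (4, 5)]          # assumed parent pairing
    # Second-level nodes: XOR of parents with probability 0.99, flipped otherwise.
    noisy_xor = np.stack([
        np.where(rng.random(n) < 0.99,
                 feats[:, a] ^ feats[:, b],
                 1 - (feats[:, a] ^ feats[:, b]))
        for a, b in pairs], axis=1)
    # Target J: one with high probability iff at least two second-level
    # nodes are one (0.95/0.05 are assumed values for "high probability").
    j = np.where(noisy_xor.sum(axis=1) >= 2,
                 (rng.random(n) < 0.95).astype(int),
                 (rng.random(n) < 0.05).astype(int))
    return feats.astype(np.float32), j

rng = np.random.default_rng(0)
x, y = sample_xor_bayes_net(6_400, rng)  # 1,280 of these held out for test
```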
}, { "heading": "5 DISCUSSION", "text": "Intuitively, training a GMLP model with certain groups can be viewed as a prior assumption over the number and order of interactions between the features. It is a reasonable prior assumption, as in many natural datasets a conceptual hierarchy exists in which only a limited number of features interact with each other. Furthermore, it is consistent with discoveries made in understanding the human decision-making process: we are only able to consider at most nine factors at the same time during a decision-making process (Cowan, 2001; Baddeley, 1994).\nAdditionally, GMLP can be considered a more general neural counterpart of random forests. Both models use different subsets of features (i.e., groups) and learn interactions within each group. One major difference between the two methods is the fact that GMLP combines information between different groups using pooling operations, while a random forest uses the selected features to train an\nensemble of independent trees on each group. From another perspective, the idea of studying feature groups is closely related to causal models such as Bayesian networks and factor graphs (Darwiche, 2009; Neapolitan et al., 2004; Clifford, 1990). These methods are often impractical for large-scale problems because, without a prior over the causal graph, they require a structure search that is NP-complete or harder." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed GMLP as a solution for deep learning in domains where the feature interactions are not known a priori and do not admit the use of convolutional or other techniques leveraging domain priors. GMLP jointly learns expressive feature combinations and employs group-wise operations to reduce the network complexity. We conducted extensive experiments demonstrating the effectiveness of the proposed idea and compared the achieved performance with state-of-the-art methods in the literature." }, { "heading": "A HYPERPARAMETER SEARCH SPACE", "text": "Tables 3-5 present the hyperparameter search spaces considered for the GMLP, MLP, SNN, and FGR experiments. For the GMLP search space, the number of groups is adjusted based on the number of features and samples in each specific task. Also, the number of layers is adjusted to be compatible with the number of groups being used. Regarding the FGR experiments, due to scalability issues in the published source code provided by the original authors, we were only able to train networks with at most two hidden layers. For SET, as its architecture is evolutionary (i.e., it prunes certain weights and adds new ones), we only explored using different numbers of hidden neurons in the range of 500 to 4000.\nRegarding the number of epochs, we used 2000 epochs for CIFAR-10, 1000 epochs for HAPT, 100 epochs for Diabetes, 300 epochs for Landsat, and 300 epochs for MIT-BIH experiments. The only exception is the SNN experiments, where we had to reduce the learning rate to increase the stability of training, resulting in more epochs being required to converge." }, { "heading": "B ARCHITECTURES", "text": "Tables 6-10 show the selected architectures for GMLP, MLP, SNN, SET, and FGR, respectively. We used the following notation to indicate different layer types and parameters: GSel-k-m represents a Group-Select layer selecting k groups of m features each. GFC indicates Group-FC layers, and FC-x represents a fully-connected layer with x hidden neurons.
GPool-x is a Group-Pool layer of type x (max, mean, linear, etc.). Concat is the concatenation of groups used prior to the output layer in GMLP architectures. SC-x refers to a SET sparse evolutionary layer of size x." }, { "heading": "C SOFTWARE IMPLEMENTATION", "text": "Table 11 presents the list of software dependencies and versions used in our implementation. To produce the results in this paper, we used a workstation with 4 NVIDIA GeForce RTX-2080Ti GPUs, a 12-core Intel Core i9-7920X processor, and 128 GB of memory. Each experiment took between about 30 minutes and 72 hours, depending on the task and method being tested.\nD VISUAL ANALYSIS\nIn Figure 10, we present a visualization of the selected features for 25 randomly selected groups in our final CIFAR-10 architecture. Red, green, and blue colors indicate which channel is selected for each location. Compared to visualizations that are frequently used for convolutional networks, since GMLP has the flexibility to select pixels at different locations and different color channels, it is not easy to find explicit patterns in this visualization. However, one noticeable pattern is that features selected from a certain color channel usually appear in clusters resembling irregularly shaped patches.\nFigure 11 shows the frequency with which each CIFAR-10 location is selected by the GMLP network. From this visualization, GMLP mostly ignores the border areas, which can be a result of the data augmentation process used to train the network, i.e., randomly cropping the center area and padding the margins.\nFigure 11: Visualization of pixels selected by the group-select layer for the CIFAR-10 GMLP model. Warmer colors represent features that are selected more frequently." } ]
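A heat-map such as Figure 11 can be derived directly from the learned selection matrix. The sketch below assumes Ψ is stored as an integer array of shape (k, m) holding the selected feature indices per group, and that CIFAR-10 features are flattened channel-last; both are assumptions for illustration, not the paper's actual data structures.

import numpy as np

def selection_heatmap(psi, n_features=3072, shape=(32, 32, 3)):
    # Count how often each input feature is picked by the Group-Select layer,
    # then aggregate over the RGB channels to get a (32, 32) frequency map.
    counts = np.bincount(psi.ravel(), minlength=n_features)
    per_pixel = counts.reshape(shape).sum(axis=-1)
    return per_pixel  # visualize with, e.g., plt.imshow(per_pixel)

heatmap = selection_heatmap(np.random.randint(0, 3072, size=(1536, 16)))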
2019
null
SP:adfae2d05cdf908663fa093cd58f0e8d50ab2d9a
[ "This paper proposes to search architectures of the BERT model under various memory and latency constraints. The search is conducted by pretraining a big supernet that contains all the sub-network structures, from which the optimal models for different requirements are selected. Once an architecture is found, it is re-trained through pretraining-finetuning or two-stage distillation for each specific task. Several approaches (block-wise training and search, progressive shrinking, performance approximation) are proposed to improve the search efficiency. Experiments on the GLUE benchmark show that the models found by the proposed method can achieve better accuracy than some of the previous compressed BERT models. The paper (together with the appendix) is clearly presented, and the idea is new and interesting to me. The experiments are detailed and comprehensive." ]
While pre-trained language models such as BERT and RoBERTa have achieved impressive results on various natural language processing tasks, they have huge numbers of parameters and suffer from huge computational and memory costs, which make them difficult for real-world deployment. Hence, model compression should be performed in order to reduce the computation and memory cost of pre-trained models. In this work, we aim to compress BERT and address the following two challenging practical issues: (1) The compression algorithm should be able to output multiple compressed models with different sizes and latencies, so as to support devices with different kinds of memory and latency limitations; (2) the algorithm should be downstream task agnostic, so that the compressed models are generally applicable for different downstream tasks. We leverage techniques in neural architecture search (NAS) and propose NAS-BERT, an efficient method for BERT compression. NAS-BERT trains a big supernet on a carefully designed search space containing various architectures and outputs multiple compressed models with adaptive sizes and latency. Furthermore, the training of NAS-BERT is conducted on standard self-supervised pre-training tasks (e.g., masked language model) and does not depend on specific downstream tasks. Thus, the models it produces can be used across various downstream tasks. The technical challenge of NAS-BERT is that training a big supernet on the pre-training task is extremely costly. We employ several techniques including block-wise search, search space pruning, and performance approximation to improve search efficiency and accuracy. Extensive experiments on GLUE benchmark datasets demonstrate that NAS-BERT can find lightweight models with better accuracy than previous approaches, and can be directly applied to different downstream tasks with adaptive model sizes for different requirements of memory or latency.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Han Cai", "Chuang Gan", "Tianzhe Wang", "Zhekai Zhang", "Song Han" ], "title": "Once-for-all: Train one network and specialize it for efficient deployment", "venue": null, "year": 1908 }, { "authors": [ "Daniel Cer", "Mona Diab", "Eneko Agirre", "Iñigo Lopez-Gazpio", "Lucia Specia" ], "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "venue": "In Proceedings of the 11th International Workshop on Semantic Evaluation", "year": 2017 }, { "authors": [ "Daoyuan Chen", "Yaliang Li", "Minghui Qiu", "Zhen Wang", "Bofang Li", "Bolin Ding", "Hongbo Deng", "Jun Huang", "Wei Lin", "Jingren Zhou" ], "title": "Adabert: Task-adaptive bert compression with differentiable neural architecture search", "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Ruijun Xu", "Jixiang Li" ], "title": "Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search", "venue": null, "year": 1907 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ido Dagan", "Oren Glickman", "Bernardo Magnini" ], "title": "The pascal recognising textual entailment challenge", "venue": "In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment,", "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "William B. 
Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the Third International Workshop on Paraphrasing (IWP2005),", "year": 2005 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Angela Fan", "Edouard Grave", "Armand Joulin" ], "title": "Reducing transformer depth on demand with structured dropout", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mitchell A Gordon", "Kevin Duh", "Nicholas Andrews" ], "title": "Compressing bert: Studying the effects of weight pruning on transfer learning", "venue": "arXiv preprint arXiv:2002.08307,", "year": 2020 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": null, "year": 1904 }, { "authors": [ "Lu Hou", "Lifeng Shang", "Xin Jiang", "Qun Liu" ], "title": "Dynabert: Dynamic bert with adaptive width and depth", "venue": "arXiv preprint arXiv:2004.04037,", "year": 2020 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "Tinybert: Distilling bert for natural language understanding", "venue": null, "year": 1909 }, { "authors": [ "Lukasz Kaiser", "Aidan N Gomez", "Francois Chollet" ], "title": "Depthwise separable convolutions for neural machine translation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Antonios Karatzoglou", "Nikolai Schnell", "Michael Beigl" ], "title": "Applying depthwise separable and multi-channel convolutional neural networks of varied kernel size on semantic trajectories", "venue": "Neural Computing and Applications,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Guillaume Lample", "Alexandre Sablayrolles", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Large memory layers with product keys", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Changlin Li", "Jiefeng Peng", "Liuchun Yuan", "Guangrun Wang", "Xiaodan Liang", "Liang Lin", "Xiaojun Chang" ], "title": "Block-wisely supervised neural architecture search with knowledge distillation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xiang Li", "Chen Lin", "Chuming Li", "Ming Sun", "Wei Wu", "Junjie Yan", "Wanli Ouyang" ], "title": "Improving one-shot nas by suppressing the posterior fading", "venue": "In Proceedings of 
the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Renqian Luo", "Tao Qin", "Enhong Chen" ], "title": "Balanced one-shot neural architecture optimization, 2019", "venue": null, "year": 2019 }, { "authors": [ "Renqian Luo", "Xu Tan", "Rui Wang", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture search with gbdt", "venue": "arXiv preprint arXiv:2007.04785,", "year": 2020 }, { "authors": [ "J Scott McCarley" ], "title": "Pruning a bert-based question answering model", "venue": "arXiv preprint arXiv:1910.06360,", "year": 2019 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In EMNLP,", "year": 2016 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Sheng Shen", "Zhen Dong", "Jiayu Ye", "Linjian Ma", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Q-bert: Hessian based ultra low precision quantization of bert", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. 
Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In EMNLP,", "year": 2013 }, { "authors": [ "Kaitao Song", "Hao Sun", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Hongzhi Liu", "Tie-Yan Liu" ], "title": "Lightpaff: A two-stage distillation framework for pre-training and fine-tuning", "venue": null, "year": 2004 }, { "authors": [ "Siqi Sun", "Yu Cheng", "Zhe Gan", "Jingjing Liu" ], "title": "Patient knowledge distillation for bert model compression", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Henry Tsai", "Jayden Ooi", "Chun-Sung Ferng", "Hyung Won Chung", "Jason Riesa" ], "title": "Finding fast transformers: One-shot neural architecture search by component composition", "venue": "arXiv preprint arXiv:2008.06808,", "year": 2020 }, { "authors": [ "Iulia Turc", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Well-read students learn better: On the importance of pre-training compact models", "venue": null, "year": 1908 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hanrui Wang", "Zhanghao Wu", "Zhijian Liu", "Han Cai", "Ligeng Zhu", "Chuang Gan", "Song Han" ], "title": "Hat: Hardware-aware transformers for efficient natural language processing", "venue": "arXiv preprint arXiv:2005.14187,", "year": 2020 }, { "authors": [ "Linnan Wang", "Saining Xie", "Teng Li", "Rodrigo Fonseca", "Yuandong Tian" ], "title": "Sample-efficient neural architecture search by learning action space", "venue": "arXiv preprint arXiv:1906.06832,", "year": 2019 }, { "authors": [ "Wenhui Wang", "Furu Wei", "Li Dong", "Hangbo Bao", "Nan Yang", "Ming Zhou" ], "title": "Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers", "venue": "arXiv preprint arXiv:2002.10957,", "year": 2020 }, { "authors": [ "Ziheng Wang", "Jeremy Wohlwend", "Tao Lei" ], "title": "Structured pruning of large language models", "venue": "arXiv preprint arXiv:1910.04732,", "year": 2019 }, { "authors": [ "Alex Warstadt", "Amanpreet Singh", "Samuel R. Bowman" ], "title": "Neural network acceptability judgments", "venue":
"CoRR,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Canwen Xu", "Wangchunshu Zhou", "Tao Ge", "Furu Wei", "Ming Zhou" ], "title": "Bert-of-theseus: Compressing bert by progressive module replacing", "venue": "arXiv preprint arXiv:2002.02925,", "year": 2020 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Jiahui Yu", "Pengchong Jin", "Hanxiao Liu", "Gabriel Bender", "Pieter-Jan Kindermans", "Mingxing Tan", "Thomas Huang", "Xiaodan Song", "Ruoming Pang", "Quoc Le" ], "title": "Bignas: Scaling up neural architecture search with big single-stage models", "venue": "arXiv preprint arXiv:2003.11142,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pre-trained Transformer (Vaswani et al., 2017)-based language models like BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved impressive performance on a variety of downstream natural language processing tasks. These models are pre-trained on massive language corpora via self-supervised tasks to learn language representations and fine-tuned on specific downstream tasks. Despite their effectiveness, these models are quite expensive in terms of computation and memory cost, which makes them difficult to deploy for different downstream tasks and in various resource-restricted scenarios such as online servers, mobile phones, and embedded devices. Therefore, it is crucial to compress pre-trained models for practical deployment.\nRecently, a variety of compression techniques have been adopted to compress pre-trained models, such as pruning (McCarley, 2019; Gordon et al., 2020), weight factorization (Lan et al., 2019), quantization (Shen et al., 2020; Zafrir et al., 2019), and knowledge distillation (Sun et al., 2019; Sanh et al., 2019; Chen et al., 2020; Jiao et al., 2019; Hou et al., 2020; Song et al., 2020). Several existing works (Tsai et al., 2020; McCarley, 2019; Gordon et al., 2020; Sanh et al., 2019; Zafrir et al., 2019; Chen et al., 2020; Lan et al., 2019; Sun et al., 2019) compress a large pre-trained model into a small or fast model of fixed size at the pre-training or fine-tuning stage and have achieved good compression efficiency and accuracy. However, from the perspective of practical deployment, a single fixed-size model cannot serve devices with different memory and latency constraints. For example, smaller models are preferred on embedded devices than on online servers, and faster\ninference speed is more critical for online services than for offline services. On the other hand, some previous methods (Chen et al., 2020; Hou et al., 2020) compress the models at the fine-tuning stage for each specific downstream task. This can achieve good accuracy due to the dedicated design for each task. However, compressing the model for each task can be laborious, and a compressed model for one task may not generalize well to another downstream task.\nIn this paper, we study BERT compression in a different setting: the compressed models need to cover different sizes and latencies, in order to support devices with different kinds of memory and latency constraints; the compression is conducted at the pre-training stage so as to be downstream task agnostic. To this end, we propose NAS-BERT, which leverages neural architecture search (NAS) to automatically compress BERT models. We carefully design a search space that contains multi-head attention (Vaswani et al., 2017), separable convolution (Kaiser et al., 2018), feed-forward network and identity operations with different hidden sizes to find efficient models. In order to search models with adaptive sizes that satisfy diverse memory and latency constraints on different devices, we train a big supernet that contains all the candidate operations and architectures with weight sharing (Bender et al., 2018; Cai et al., 2018; 2019; Yu et al., 2020). 
To avoid laborious compression for each downstream task, we directly train the big supernet and obtain the compressed models on the pre-training task, making them applicable across different downstream tasks.\nHowever, it is extremely expensive to directly perform architecture search in a big supernet on the heavy pre-training task. To improve the search efficiency and accuracy, we employ several techniques including block-wise search, search space pruning and performance approximation during the search process: (1) We adopt block-wise search (Li et al., 2020a) to divide the supernet into blocks so that the size of the search space can be reduced exponentially. To train each block, we leverage a pre-trained teacher model, divide the teacher model into blocks similarly, and use the input and output hidden states of the corresponding teacher block as paired data for training. (2) To further reduce the search cost of each block (even though block-wise search has greatly reduced the search space) under the heavy burden of the pre-training task, we propose progressive shrinking to dynamically prune the search space according to the validation loss during training. To ensure that architectures with different sizes and latencies can be retained during the pruning process, we further divide all the architectures in each block into several bins according to their model sizes and perform progressive shrinking in each bin. (3) We obtain the compressed models under specific constraints of memory and latency by assembling the architectures in every block using performance approximation, which reduces the cost of model selection.\nWe evaluate the models compressed by NAS-BERT on the GLUE benchmark (Wang et al., 2018). The results show that NAS-BERT can find lightweight models with various sizes from 5M to 60M with better accuracy than that achieved by previous work. The contributions of NAS-BERT can be summarized as follows:\n• We carefully design a search space that contains various architectures and different sizes, and apply NAS on the pre-training task to search for efficient lightweight models, which is able to deliver adaptive model sizes given different requirements of memory or latency and apply to different downstream tasks.\n• We further apply block-wise search, progressive shrinking and performance approximation to reduce the huge search cost and improve the search accuracy.\n• Experiments on the GLUE benchmark datasets demonstrate the effectiveness of NAS-BERT for model compression." }, { "heading": "2 RELATED WORK", "text": "BERT Model Compression Recently, compressing pre-trained language models has been studied extensively and several techniques have been proposed, such as knowledge distillation, pruning, weight factorization, and quantization. Existing works (Tsai et al., 2020; Sanh et al., 2019; Sun et al., 2019; Song et al., 2020; Jiao et al., 2019; Lan et al., 2019; Zafrir et al., 2019; Shen et al., 2020; Wang et al., 2019b; Chen et al., 2020) aim to compress the pre-trained model into a model of fixed size and have achieved a good trade-off between small parameter size (usually no more than 66M) and performance. However, these compressed models cannot be deployed on devices with different memory and latency constraints.\nRecent works (Hou et al., 2020) can deliver adaptive models for each specific downstream task and demonstrate the effectiveness of task-oriented compression. 
For practical deployment, however, it can be laborious to compress models for each task. Other works (Fan et al., 2019) can produce compressed models at the pre-training stage that directly generalize to downstream tasks, and allow for efficient pruning at inference time. However, they do not explore the potential of different architectures as in our work. Different from existing works, NAS-BERT aims for task-agnostic compression at the pre-training stage, which eliminates laborious compression for each specific downstream task, and carefully designs a search space that is capable of exploring the potential of different architectures and delivering various models given diverse memory and latency requirements.\nNeural Architecture Search for Efficient Models Many works have leveraged NAS to search for efficient models (Liu et al., 2018; Cai et al., 2018; Howard et al., 2019; Tan & Le, 2019; Cai et al., 2019; Yu et al., 2020; Wang et al., 2020a; Tsai et al., 2020). Most of them focus on computer vision tasks and rely on specific designs of the convolutional layers (e.g., inverted bottleneck convolution (Howard et al., 2019) or elastic kernel sizes (Cai et al., 2019; Yu et al., 2020)). Among them, once-for-all (Cai et al., 2019) and BigNAS (Yu et al., 2020) train a big supernet that contains all the candidate architectures and can obtain a specialized sub-network by selecting from the supernet to support various requirements (e.g., model size and latency). HAT (Wang et al., 2020a) also trains a supernet with adaptive widths and depths for machine translation tasks. Our proposed NAS-BERT also trains a big supernet. However, different from these methods, we target model compression for BERT at the pre-training stage, which is a more challenging task due to the large model size and huge pre-training cost. Therefore, we introduce several techniques including block-wise search, progressive shrinking, and performance approximation to reduce the training cost and improve search efficiency. Tsai et al. (2020) apply one-shot NAS to search for a faster Transformer but cannot deliver multiple architectures to meet various deployment constraints. Different from Tsai et al. (2020), NAS-BERT 1) progressively shrinks the search space to allocate more resources to promising architectures and thus can deliver various architectures without adding much computation; 2) designs bins in the shrinking algorithm to guarantee that we can search architectures meeting diverse memory and latency constraints; and 3) explores novel architectures with convolution layers, multi-head attention, and feed-forward layers, and achieves better performance than previous works for BERT compression." }, { "heading": "3 METHOD", "text": "In this section, we describe NAS-BERT, which conducts neural architecture search to find small, novel and accurate BERT models. To meet the requirements of deployment under different memory and latency constraints and across different downstream tasks, we 1) train a supernet with a novel search space that contains models of different sizes for various resource-restricted devices, and 2) directly search the models on the pre-training task to make them generalizable to different downstream tasks. The method can be divided into three steps: 1) search space design (Section 3.1); 2) supernet training (Section 3.2); 3) model selection (Section 3.3). 
Due to the huge cost of training the big supernet on the heavy pre-training task and of selecting compressed models under specific constraints, we introduce several techniques including block-wise search, search space pruning and performance approximation in Sections 3.2 and 3.3 to reduce the search space and improve the search efficiency." }, { "heading": "3.1 SEARCH SPACE DESIGN", "text": "A novel search space allows the potential of combinations of different operations, instead of simply stacking basic Transformer blocks (multi-head attention and feed-forward network) as in the original BERT model. We adopt the chain-structured search space (Elsken et al., 2018), and construct an over-parameterized supernet A with L layers, where each layer contains all candidate operations in O = {o_1, · · · , o_C} and C is the number of predefined candidate operations. A residual connection is applied to each layer by adding the input to the output. There are C^L possible paths (architectures) in the supernet, and a specific architecture a = (a^1, · · · , a^L) is a sub-net (path) in the supernet, where a^l ∈ O is the operation in the l-th layer, as shown in Fig. 2 (a). We adopt the weight-sharing mechanism that is widely used in NAS (Bender et al., 2018; Cai et al., 2019) for efficient training, where each architecture (path) shares the same set of operations in each layer.\nWe further describe each operation in O as follows: 1) Multi-head attention (MHA) and feed-forward network (FFN), the two basic operations in Transformer that are popular in pre-training models (in this way we can cover the BERT model as a sub-net of our supernet). 2) Separable convolution (SepConv), whose effectiveness and efficiency in natural language processing tasks have been demonstrated by previous work (Kaiser et al., 2018; Karatzoglou et al., 2020). 3) Identity operation, which can support architectures with fewer than L layers; identity is regarded as a placeholder in a candidate architecture and can be removed to obtain a shallower network. More detailed considerations on choosing the operation set are in Appendix A.1. Apart from different types of operations, to allow adaptive model sizes, each operation can have different hidden sizes: {128, 192, 256, 384, 512}. In this way, architectures in the search space can be of different depths and widths. The complete candidate operation set O contains (1 + 1 + 3) ∗ 5 + 1 = 26 operations, where the first factor counts the operation types (the 3 covering SepConv with kernel sizes {3, 5, 7}), the second factor counts the 5 hidden sizes, and the final 1 is the identity operation. We list the 26 operations in Table 1; a sketch enumerating this set follows below. The detailed structure of separable convolution is shown in Fig. 1."
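The following minimal sketch enumerates this operation set; the string encoding of operations is an illustrative convention of ours, not the paper's implementation.

from itertools import product

# Candidate operation set O from Table 1: 5 operation types x 5 hidden sizes,
# plus the shared identity operation.
OP_TYPES = ["MHA", "FFN", "SepConv3", "SepConv5", "SepConv7"]
HIDDEN_SIZES = [128, 192, 256, 384, 512]

ops = [f"{t}-{h}" for t, h in product(OP_TYPES, HIDDEN_SIZES)] + ["Identity"]
assert len(ops) == 26  # (1 + 1 + 3) * 5 + 1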
 }, { "heading": "3.2 SUPERNET TRAINING", "text": "" }, { "heading": "3.2.1 BLOCK-WISE TRAINING WITH KNOWLEDGE DISTILLATION", "text": "Directly training the whole supernet incurs huge cost due to its large model size and huge search space. With limited computational resources (total training time, steps, etc.), the amortized training time of each architecture from the huge search space is insufficient for accurate evaluation (Chu et al., 2019; Luo et al., 2019; Li et al., 2020b). Inspired by Li et al. (2020a), we adopt block-wise search to uniformly divide the supernet A into N blocks (A_1, A_2, · · · , A_N) to reduce the search space and improve efficiency. To train each block independently and effectively, knowledge distillation is applied with a pre-trained BERT model. The pre-trained BERT model (teacher) is divided into N corresponding blocks as in Fig. 2. The input and output hidden states of the corresponding block in the teacher model are used as paired data to train the block in the supernet (student). Specifically, the n-th student block receives the output of the (n − 1)-th teacher block as input and is optimized to predict the output of the n-th teacher block with the mean square loss\nL_n = ||f(Y_{n−1}; A_n) − Y_n||_2^2, (1)\nwhere f(·; A_n) is the mapping function of the n-th block A_n, and Y_n is the output of the n-th block of the teacher model (Y_0 is the output of the embedding layer of the teacher model). At each training step,\nwe randomly sample an architecture from the search space following Guo et al. (2019); Bender et al. (2018); Cai et al. (2019), which is memory-efficient due to the single-path optimization. Different from Li et al. (2020a), we allow different hidden sizes and incorporate identity layers in each block to support elastic width and depth, so as to derive models that meet various requirements. Besides, the search space within each block in our work is larger (100x larger) compared to Li et al. (2020a), which is much more sample-inefficient and requires further techniques (described in Section 3.2.2) to improve training efficiency. Since the hidden sizes of the student block may differ from those in the teacher block, we cannot directly use the input and output hidden states of the teacher block as the training data of the student block. To solve this problem, we use a learnable linear transformation layer at the input and output of the student block to transform each hidden size to match that of the teacher block, as shown in Fig. 2." }, { "heading": "3.2.2 PROGRESSIVE SHRINKING", "text": "Although block-wise training can largely reduce the search space, the supernet still requires a long time to converge due to the heavy pre-training task. To further improve training effectiveness and efficiency, we propose to progressively shrink the search space in each block during training, allocating more training resources to more promising candidates (Wang et al., 2019a; Li et al., 2020b; Luo et al., 2020). However, simply pruning architectures cannot ensure obtaining models of different sizes, since larger models in A_n are likely to be pruned at the early stage of training due to their difficulty of optimization (Chu et al., 2019; Luo et al., 2019; Li et al., 2020b) and smaller models are likely to be pruned at the late stage due to limited capacity. Therefore, we assign the architectures in A_n to different bins, where each bin represents a short range of model sizes. Besides, we also apply latency constraints in each bin to avoid models with acceptable parameter size but large latency. Denote p_b = (b/B) · p(a_t) and l_b = (b/B) · l(a_t) as the maximum parameter size and latency for the b-th bin, where p(·) and l(·) calculate the parameter size and latency, a_t is the largest model in the search space, and B is the number of bins. An architecture a in the b-th bin should satisfy (1) p_b > p(a) > p_{b−1} and (2) l_b > l(a) > l_{b−1}. Architectures that cannot satisfy the latency constraint are removed.\nThen we conduct the progressive shrinking algorithm in each bin at the end of each training epoch as follows (a minimal sketch of one round is given after the steps): 1) Sample E architectures in each bin and get their validation losses on the dev set; 2) Rank the E architectures according to their validation losses in descending order; 3) Remove the R architectures with the largest losses. 
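A minimal sketch of one shrinking round for a single bin, assuming architectures are hashable objects (e.g., tuples of operation names) and that val_loss_fn evaluates an architecture on a few dev batches; both names are illustrative.

import random

def shrink_bin_once(bin_archs, val_loss_fn, E=2000):
    # One round of progressive shrinking for one bin: sample E candidates,
    # rank them by dev loss (worst first), and remove the worst R = E / 2.
    sampled = random.sample(bin_archs, min(E, len(bin_archs)))
    ranked = sorted(sampled, key=val_loss_fn, reverse=True)
    removed = set(ranked[:len(ranked) // 2])
    return [a for a in bin_archs if a not in removed]

# Repeated per bin at the end of each epoch until only m = 10 architectures remain.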
The shrinking algorithm terminates when only m architectures are left in the search space, to avoid all the architectures being deleted. The design of bins ensures the diversity of models when shrinking the search space, which makes it possible to select a model given diverse constraints at the model selection stage." }, { "heading": "3.3 MODEL SELECTION", "text": "After the supernet training with progressive shrinking, each block contains m ∗ B possible architectures and the whole supernet (N blocks) contains (m ∗ B)^N complete architectures. The model selection procedure is as follows: 1) We build a large lookup table LTarch with (m ∗ B)^N items,\nwhere each item contains the meta-information of a complete architecture: (architecture, parameter, latency, loss). Since it is extremely time-consuming to measure the exact latency and loss for (m ∗ B)^N (e.g., 10^8 in our experiments) architectures, we use performance approximation to obtain the two values, as described in the next paragraph. 2) For a given constraint on model size and inference latency, we select the top T architectures with the lowest loss from LTarch that meet the parameter and latency constraints. 3) We evaluate the validation loss of the top T complete architectures on the dev set and select the best one as the final compressed model. The compressed model is associated with an embedding layer of adaptive size rather than a fixed size, and the embedding size is determined by rules according to the target model size (see Appendix A.5).\nNext we introduce the performance approximation of the latency and loss when building the lookup table LTarch. We measure the latency of each candidate operation (just 26 in our design) on the target device and store it in a lookup table LTlat in advance, and then approximate the latency of an architecture a by l(a) = Σ_{l=1}^{L} l(a^l) following Cai et al. (2018), where l(a^l) is taken from LTlat. To approximate the loss of an architecture in LTarch, we add up the block-wise distillation losses of its sub-architecture in each block on the dev set. Obtaining the dev loss of all sub-architectures in all blocks only involves m ∗ B ∗ N evaluations (see the sketch below)."
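To make the performance approximation concrete, the sketch below mirrors the two lookup tables; the field and variable names (lt_lat, lt_arch, "params") are illustrative assumptions.

def approx_latency(arch, lt_lat):
    # Latency approximation l(a) = sum of per-operation latencies; lt_lat maps
    # each of the 26 candidate operations to a latency measured on the target device.
    return sum(lt_lat[op] for op in arch)

def approx_loss(block_archs, block_loss):
    # Loss approximation: add up the block-wise distillation dev losses of the
    # chosen sub-architecture in each of the N blocks.
    return sum(block_loss[n][a] for n, a in enumerate(block_archs))

def select_top(lt_arch, max_params, max_latency, T=100):
    # Keep the T lowest-loss entries of LTarch that meet both constraints.
    feasible = [x for x in lt_arch
                if x["params"] <= max_params and x["latency"] <= max_latency]
    return sorted(feasible, key=lambda x: x["loss"])[:T]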
 }, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Supernet Training Setup The supernet consists of L = 24 layers, which is consistent with BERTbase (Devlin et al., 2019) (BERTbase has 12 Transformer layers with 24 sub-layers in total, since each Transformer layer has an MHA and an FFN). We use a pre-trained BERTbase (Devlin et al., 2019) as the teacher model. The detailed configurations of the search space and teacher model training can be found in Appendix A.1 and Appendix A.2. The supernet is divided into N = 4 blocks and the search space in each block is divided into B = 10 bins. We first train the supernet for 3 epochs without progressive shrinking as a warm start, and then begin to shrink at the end of each later epoch. We randomly sample E = 2000 architectures for validation (evaluating all architectures when the number of architectures in the search space is less than 2000) and perform progressive shrinking to remove R = E/2 architectures in each bin as in Section 3.2.2. The shrinking process terminates when only m = 10 architectures are left in each bin of each block, and the training also ends. The considerations behind these hyper-parameters are described in Appendix A.5. The supernet is trained on English Wikipedia plus BookCorpus (16GB in size), with a batch size of 1024 sentences, each sentence consisting of 512 tokens. The training costs 3 days on 32 NVIDIA P40 GPUs, while training the BERTbase teacher model costs 5 days. Other training configurations remain the same as for the teacher model. The latency used in progressive shrinking and model selection is measured on an Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60 GHz with 12 cores, but our method can be easily applied to other devices (e.g., mobile platforms, embedded devices) by using a lookup table LTlat (described in Section 3.3) measured and built for the corresponding device. We select T = 100 models from the table LTarch at the model selection stage. In progressive shrinking, to reduce the time of evaluating all the candidates, we only evaluate on 5 batches rather than the whole dev set, which is accurate enough for pruning according to our preliminary study.\nEvaluation Setup on Downstream Tasks We evaluate the effectiveness of NAS-BERT by pre-training the compressed models on the original pre-training task and fine-tuning on the GLUE benchmark (Wang et al., 2018), which includes CoLA (Warstadt et al., 2018), SST-2 (Socher et al., 2013), MRPC (Dolan & Brockett, 2005), STS-B (Cer et al., 2017), QQP (Chen et al., 2018), MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016) and RTE (Dagan et al., 2006). Similar to previous methods (Sanh et al., 2019; Wang et al., 2020b; Turc et al., 2019; Hou et al., 2020), we also apply knowledge distillation, conducted in two stages (i.e., pre-training and fine-tuning), as the default setting for evaluation. The details of the two-stage distillation can be found in Appendix A.4. Considering that the focus of our work is compressing pre-trained models with novel architectures rather than knowledge distillation, we only use prediction layer distillation and leave the various distillation techniques like layer-by-layer distillation, embedding layer distillation and attention matrix distillation (Sun et al., 2019; Jiao et al., 2019; Sanh et al., 2019; Hou et al., 2020; Wang et al., 2020b) that\ncan further improve the performance as future work. During fine-tuning on the GLUE benchmark, RTE, MRPC and STS-B are started from the model fine-tuned on MNLI following Liu et al. (2019)." }, { "heading": "4.2 RESULTS", "text": "Accuracy of NAS-BERT While our NAS-BERT can compress models with adaptive sizes, we only show the results of compressed models with 60M, 30M, 10M and 5M parameters (denoted as NAS-BERT60, NAS-BERT30, NAS-BERT10 and NAS-BERT5, respectively) on the GLUE benchmark due to the large evaluation cost, and list the model structures of different sizes in Appendix A.6. We compare our NAS-BERT models with hand-designed BERT models of the same parameter size and latency (denoted as BERT60, BERT30, BERT10 and BERT5, respectively). We follow several principles when manually designing the BERT models: we just use the original BERT structure (MHA plus FFN) and keep the parameter size, latency, depth and width as close as possible to those of the corresponding NAS-BERT models. The detailed structures of the hand-designed BERT models are introduced in the next paragraph. To demonstrate the advantages of the architectures searched by NAS-BERT, the comparisons are evaluated in two settings: 1) only with pre-training and fine-tuning (PF), and 2) pre-training and fine-tuning with two-stage knowledge distillation (KD). 
We measure the inference speedup of NAS-BERT compared with BERTbase, and the inference FLOPs following Clark et al. (2019). The configurations are in Appendix A.3. The results are shown in Table 2, from which we can see that NAS-BERT outperforms the hand-designed BERT baselines across almost all the tasks under various model sizes. In particular, the smaller the model size, the larger the gap that can be observed (e.g., NAS-BERT5 vs. BERT5). The results show that NAS-BERT can search for efficient lightweight models that are better than Transformer-based models of the same size.\nBERT Baselines We follow several principles when manually designing the BERT models in Table 2: 1) The size of the embedding layer is the same as that of the corresponding NAS-BERT model; 2) We use the original BERT structure (MHA plus FFN) and keep the parameter size, latency, depth and width as close as possible to those of the corresponding NAS-BERT model. The baseline BERT models in Table 2 are: BERT60 (L=10, H=512, A=8), BERT30 (L=6, H=512, A=8), BERT10 (L=6, H=256, A=4) and BERT5 (L=6, H=128, A=2), where L is the number of layers, H is the hidden size, and A is the number of attention heads.\nComparison with Previous Work Next, we compare the effectiveness of our NAS-BERT with previous methods for BERT compression. Since they usually compress BERT into a model size of about\n66M or 60M, we use our NAS-BERT60 for comparison. We mainly compare our NAS-BERT with 1) DistilBERT (Sanh et al., 2019), which uses knowledge distillation at the pre-training stage; 2) BERT-PKD (Sun et al., 2019), which distills knowledge from the intermediate layers and the final output logits at the pre-training stage; 3) BERT-of-Theseus (Xu et al., 2020), which uses module replacement for compression; 4) MiniLM (Wang et al., 2020b), which transfers knowledge from the self-attention module; 5) PD-BERT (Turc et al., 2019), which distills knowledge from the target domain in BERT training; 6) DynaBERT (Hou et al., 2020), which uses network rewiring to adjust the width and depth of BERT for each downstream task; and 7) TinyBERT (Jiao et al., 2019), which leverages embedding layer, hidden layer, and attention matrix distillation to mimic the teacher model at both the pre-training and fine-tuning stages. To compare with DynaBERT and TinyBERT, we also use their data augmentation (Hou et al., 2020; Jiao et al., 2019) on downstream tasks. Table 3 reports the results on the dev and test sets of the GLUE benchmark. Without data augmentation, NAS-BERT achieves better results on nearly all the tasks compared to previous work. Further, with data augmentation, NAS-BERT outperforms DynaBERT and TinyBERT. Different from these methods, which leverage advanced knowledge distillation techniques in pre-training and/or fine-tuning, NAS-BERT mainly takes advantage of its searched architectures and achieves better accuracy, which demonstrates the advantages of NAS-BERT in model compression." }, { "heading": "4.3 ABLATION STUDY", "text": "Ablation Study on Progressive Shrinking To verify the effectiveness of progressive shrinking (PS), we train the supernet with the same number of training epochs but without progressive\nshrinking. We follow the same procedure as in NAS-BERT for model selection and final evaluation on downstream tasks. Due to the huge evaluation cost of model selection over the whole search space without progressive shrinking, this step costs 50 hours (evaluation over the shrunk search space takes only 5 minutes, since only 10 architectures remain in each bin of each block). 
The results are shown in Table 4. It can be seen that NAS-BERT with progressive shrinking searches better architectures, with less total search time.\nWe further show the training loss curves in Fig. 3. It can be seen that the supernet without progressive shrinking suffers from slow convergence: the huge number of architectures in the supernet needs a long time for sufficient training. Given a fixed budget of training time, progressive shrinking ensures that the promising architectures are trained with more resources, resulting in more accurate evaluation, and thus better architectures can be selected. In contrast, without progressive shrinking, the amortized training time of each architecture is insufficient, resulting in inaccurate evaluation and model selection.\nDifferent Progressive Shrinking Approaches Instead of pruning architectures (paths) from the search space, we can also prune operations (nodes) from the supernet (Radosavovic et al., 2020; Luo et al., 2020) in progressive shrinking. From the perspective of the supernet, the former removes paths and the latter removes nodes. To evaluate the performance of operations (nodes) in the supernet, at the end of each training epoch we evaluate the architectures on the dev set and prune the search space according to the performance (validation loss) of operations. The validation loss of operation o_i in the l-th layer is estimated by the mean validation loss of all the architectures whose operation in the l-th layer satisfies a^l = o_i. The shrinking algorithm proceeds as follows:\n• Sample E architectures and get their validation losses on the dev set.\n• Rank the operations according to their mean validation losses in descending order.\n• Repeatedly prune the operations with the largest losses from the supernet until R (a hyper-parameter controlling the speed of pruning) of the architectures in the search space have been removed.\nThe shrinking algorithm is performed at the end of each training epoch, and terminates when only m = 10 architectures are left in each bin of each block, at which point the training also ends. For a fair comparison, we set m = 10 and E = 1000, the same as the settings in Section 4.1. At the end of each epoch, we perform this algorithm to remove R = 30% of the architectures in each bin. In this way, the algorithm terminates at the same epoch as the one in Section 3.2.2. As shown in Table 5, pruning architectures in progressive shrinking achieves better results." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose NAS-BERT, which leverages neural architecture search (NAS) to compress BERT models. We carefully design a search space with different operations associated with different hidden sizes, to explore the potential of diverse architectures and derive models with adaptive sizes according to the memory and latency requirements of different devices. The compression is conducted at the pre-training stage and is downstream task agnostic, so the compressed models are applicable to different downstream tasks. Experiments on the GLUE benchmark datasets demonstrate the effectiveness of our proposed NAS-BERT compared with both hand-designed BERT baselines and previous works on BERT compression. For future work, we will explore more advanced search spaces and NAS methods to achieve better performance." 
}, { "heading": "A APPENDIX", "text": "A.1 OPERATION SET AND SEARCH SPACE\n\nThe Design Choice of the Operation Set In addition to MHA and FFN, LSTM, convolution and variants of MHA and FFN have achieved good performance in many NLP tasks (Chen et al., 2020; Kaiser et al., 2018; Karatzoglou et al., 2020; Bahdanau et al., 2014). We describe the considerations in choosing the operations in Table 1 as follows: 1) LSTM is not considered due to its slow training and inference speed. 2) In our preliminary experiments, we tried some variants of MHA and FFN (Lample et al., 2019; Wu et al., 2018), but failed to observe advantages in model size and/or performance. 3) Considering that the parameter size of a standard convolution is K ∗ H^2 while that of a separable convolution (SepConv) is H^2 + K ∗ H, where K and H are the kernel size and hidden size, we use SepConv instead of standard convolution, which allows a larger kernel (and thus a larger receptive field) without significantly increasing the model size or latency. Based on these considerations, we add SepConv to the candidate operation set.\nTo determine the possible hidden sizes for the operations, we mainly consider the range of compressed model sizes. Previous works (Sanh et al., 2019; Sun et al., 2019; Song et al., 2020; Jiao et al., 2019; Lan et al., 2019; Zafrir et al., 2019; Shen et al., 2020; Chen et al., 2020) usually compress the pre-trained BERT model into a small model (usually no more than 66M) for efficiency and effectiveness. Similarly, in this work, we aim to obtain compressed models smaller than 66M. Therefore, we choose hidden sizes between 128 and 512 for the candidate operations, which enables a good trade-off between efficiency and effectiveness [1].\n\nThe Complexity of the Search Space The supernet consists of L = 24 layers. If we did not use block-wise search, there would be 26^24 ≈ 10^34 paths (possible candidate architectures) in the supernet. We divide the supernet into N = 4 blocks, and each block contains 6 layers. Within each block, the hidden size of the 6 layers is required to be consistent, while the hidden sizes across different blocks can differ. So there are 5 ∗ 6^6 = 233280 paths (possible candidate sub-architectures) in each block, where the first 5 is the number of candidate hidden sizes and 6^6 represents the 6 candidate operations in each of the 6 layers. Due to the identity operation, there is an unnecessary increase in the possible sequences of operations (architectures), as pointed out in Li et al. (2020a). For example, the architecture {FFN, identity, FFN, identity} is equivalent to {FFN, FFN, identity, identity}. Thus we only keep the architectures in which all of the identity operations are at the end (e.g., {FFN, FFN, identity, identity}) and delete the other, redundant architectures. After cleaning the redundancy, the search space in each block is reduced from the original 233280 to 97650, which largely improves the sample efficiency (the sketch below verifies these counts). We can select sub-architectures from each block and assemble them to get a complete model. Considering N = 4 blocks, there are 97650^4 (about 10^20) possible combinations. Therefore the number of possible models is reduced from 10^34 to 10^20.
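These counts can be checked with a few lines; that the all-identity block is excluded from the 97650 is our inference from the arithmetic (5 · Σ_{d=1..6} 5^d = 97650).

NON_ID_OPS, HIDDEN, LAYERS = 5, 5, 6  # per block: 5 operation types, 5 widths, 6 layers

total = HIDDEN * (NON_ID_OPS + 1) ** LAYERS                      # 5 * 6^6 = 233280
# Keep only architectures whose identity operations all sit at the end:
# a depth-d prefix of non-identity operations gives 5^d choices (d = 1..6).
deduped = HIDDEN * sum(NON_ID_OPS ** d for d in range(1, LAYERS + 1))
print(total, deduped)                                            # 233280 97650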
We use Adam (Kingma & Ba, 2014) with a learning rate of 1e-4, β1 = 0.9 and β2 = 0.999. The learning rate is warmed up to a peak value of 5e-4 for the first 10,000 steps, and then linearly decays. The weight decay is 0.01 and the dropout rate is 0.1. We apply the best practices proposed in Liu et al. (2019) to train the BERTbase on 16 NVIDIA V100 GPUs with large batches leveraging gradient accumulation (2048 samples per batch) and 125000 training steps. We present the performance of the teacher model and compare it with teacher models used in other works in Table 6. Our teacher model is better than others, which is mainly caused by the volatility of RTE and CoLA (small dataset). After removing these two datasets, the performance of the teacher model (average score: 89.74) is close to the teacher model of other methods (DistilBERT: 89.73, BERTof-Theseus: 88.76 and DynaBERT: 89.68). In this way, NAS-BERT can still show its effectiveness compared with other approaches, without considering RTE and CoLA in Table 3.\n1We do not use hidden size smaller than 128 since it cannot yield model with enough accuracy.\nA.3 INFERENCE AND FLOPS SETUP\nFollowing Sun et al. (2019); Wang et al. (2020b), the inference time is evaluated on QNLI training set with the batch size of 128 and the maximum sequence length of 128. The numbers reported in Table 2 are the average of 100 batches on Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz with 12 cores. Following Clark et al. (2019), the inference FLOPs are calcuated with single length-128 input.\nA.4 TWO-STAGE DISTILLATION\nTwo-stage distillation means applying knowledge distillation in both the pre-training and the finetuning stage. Previous methods (Song et al., 2020; Sanh et al., 2019; Jiao et al., 2019; Gordon et al., 2020) have proved that using two-stage distillation is superior to the single-stage distillation. The pipeline of two-stage distillation is shown in Fig. 4. The procedure of two-stage distillation can be summarized as follows:\n1. Pre-train the teacher model on the pre-training corpus. 2. Pre-train the light-weight student model with knowledge distillation from the pre-trained\nteacher model in step 1. 3. Fine-tune the pre-trained teacher model in step 1 on the downstream task. 4. Fine-tune the pre-trained student model in step 2 with knowledge distillation from the fine-\ntuned teacher model in step 3 on the target downstream task.\nTo simplify our expression, we denote the parameter of the student and the teacher model as θS and θT respectively. We adopt a general formulation for knowledge distillation in both stages:\nL(x, y; θS , θT ) = {X ,Y}∑ (x,y) (1− λ) · LMLE(x, y; θS) + λ · LKL(f(x; θT ), f(x; θS)), (2)\nwhere LMLE is the maximal likelihood loss of the student model θS over the training corpus (X ,Y) (i.e., masked language model loss on the pre-training stage or classification/regression loss on the fine-tuning stage), and LKL is the KL divergence between the predicted probability distribution f(x; θT ) of the teacher model θT and the distribution f(x; θS) of the student model θS . λ is a hyper-parameter to trade off LMLE and LKL, and is set as 0.5 in our experiments. There are other advanced distillation techniques (e.g., embedding distillation or attention distillation) but here we only consider prediction layer distillation.\nA.5 OTHER DESIGN CHOICES IN NAS-BERT\nWhy dividing the model into N(= 4) blocks? If N is small, the search space for each block will be huge (e.g., CL possible architectures when N = 1) and the training is costly. 
A.5 OTHER DESIGN CHOICES IN NAS-BERT\nWhy divide the model into N (= 4) blocks? If N is small, the search space for each block will be huge (e.g., $C^L$ possible architectures when N = 1, where $C$ is the number of candidate operations per layer) and the training is costly. If N is large, the candidate architectures in each block will be very limited (e.g., $C$ architectures when N = L) and cannot explore the potential of combinations of different operations. We set N = 4 as a trade-off, following Li et al. (2020a).\nWhy divide the search space in each block into B (= 10) bins? If B is too large, there are at least $m \cdot B$ architectures during the whole process of supernet training, and the amortized training time of each architecture can be insufficient, resulting in inaccurate evaluation and model selection, as shown in Section 4.3. If B is very small, we cannot get architectures of various sizes at the end of the training. Consequently, we set B = 10 as a trade-off.\nWhy conduct progressive shrinking until m (= 10) architectures are reached? As introduced in Section 3.3, we have $(m \cdot B)^N$ possible combinations to build the lookup table LTarch. When m = 10, the table has $10^8$ architectures for selection, which is enough to select models under diverse requirements. If m is too large, storing the big table LTarch with so many items ($(m \cdot B)^N$) is costly and the evaluation cost in Section 3.3 increases exponentially.\nIs the hidden size transformation module in Fig. 2 retained in the final model? The answer is no. For the final derived architecture, we re-train the architecture by two-stage distillation without adding a hidden size transformation module, because the architecture does not need to match the hidden size of the teacher model. However, there may be hidden size mismatches within the architecture itself due to the design of blocks, and we simply add an additional linear layer to handle them.\nHow do we decide the embedding layer at the model selection stage? In order to save computing resources, we do not search the width of the embedding layer in NAS-BERT. We manually design the embedding layer for the final derived architecture according to the required compressed model size: 1) <10M, WE=64; 2) <20M, WE=128; 3) <35M, WE=256; 4) <50M, WE=384; and 5) >50M, WE=512, where WE is the width of the embedding layer. For example, we choose the embedding layer with hidden size 512 when the target model size is required to be larger than 50M.\nA.6 SEARCHED ARCHITECTURES BY NAS-BERT\nOur NAS-BERT can generate different compressed models given specific constraints, as described in Section 3.3. Besides the several NAS-BERT models we evaluate in Table 2, we further select architectures with various sizes, from 5M to 60M at 5M intervals, yielding in total 12 different architectures with different sizes. We present these architectures in the figures below.\n[Figures (a)-(l): the searched architectures with model sizes 60M, 55M, 50M, 45M, 40M, 35M, 30M, 25M, 20M, 15M, 10M, and 5M.]" } ]
2020
null
SP:55f630e6b41243dfe92ea4269bb1a1e6e8109974
[ "This paper builds on recent work characterising deep neural networks in terms of Neural Tangent Kernels and Neural Path Features. Over the past few years, a number of papers have developed the theory of Neural Tangent Kernels, which can be used to interpret infinite width deep neural networks in the context of a particular type of kernel. A recent paper (Lakshminarayanan and Singh, NeurIPS 2020) provided a new perspective on Neural Tangent Kernels for Gated Neural Networks, by decomposing the network into independent paths. For a fixed set of network weights, we can consider each path to give rise to a feature, corresponding to whether this path is active (i.e., is not switched off by one of the gates on the path). Then, the output of the neural network can be viewed as a weighted sum of active paths, equivalently the dot product of the neural path feature vector and a neural path value vector. Lakshminarayanan and Singh showed that under certain assumptions, a kernel defined in terms of the neural path feature is approximately equal to the neural tangent kernel (up to a constant). Specifically, they show that the value of the neural tangent kernel matrix tends to a constant multiple of the neural path kernel matrix as the width of the network goes to infinity. This suggests that the key component in a deep neural network with RELU activations is the gating structure, which defines active subnetworks, as opposed to the values. " ]
Recent works have connected deep learning and kernel methods. In this paper, we show that architectural choices such as convolutional layers with pooling and skip connections make deep learning a composite kernel learning method, where the kernel is an (architecture-dependent) composition of base kernels: even before training, standard deep networks have in-built structural properties that ensure their success. In particular, we build on the recently developed ‘neural path’ framework1 that characterises the role of gates/masks in fully connected deep networks with ReLU activations.
[]
[ { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Francis R Bach", "Gert RG Lanckriet", "Michael I Jordan" ], "title": "Multiple kernel learning, conic duality, and the smo algorithm", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Yuan Cao", "Quanquan Gu" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Learning non-linear combinations of kernels", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Jonathan Fiat", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Decoupling gating from linearity", "venue": "CoRR, abs/1906.05032,", "year": 2019 }, { "authors": [ "Mehmet Gönen", "Ethem Alpaydın" ], "title": "Multiple kernel learning algorithms", "venue": "The Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Freeze and chaos for dnns: an ntk view of batch normalization, checkerboard and boundary effects", "venue": null, "year": 1907 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Chandrashekar Lakshminarayanan", "Amit Vikram Singh" ], "title": "Neural path features and neural path kernel : Understanding the role of gates in deep learning", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Sören Sonnenburg", "Gunnar Rätsch", "Christin Schäfer", "Bernhard Schölkopf" ], "title": "Large scale multiple kernel learning", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Bo Xie", "Yingyu Liang", "Le Song" ], "title": "Diverse neural network learns true target functions", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of deep learning is attributed to feature learning. The conventional view is that feature learning happens in the hidden layers of a deep network: in the initial layers simple low level features are learnt, and sophisticated high level features are learnt as one proceeds in depth. In this viewpoint, the penultimate layer output is the final hidden feature and the final layer learns a linear model with these hidden features. While this interpretation of feature learning is intuitive, beyond the first couple of layers it is hard make any meaningful interpretation of what happens in the intermediate layers.\nRecent works Jacot et al. (2018); Arora et al. (2019); Cao and Gu (2019) have provided a kernel learning interpretation for deep learning by showing that in the limit of infinite width deep learning becomes kernel learning. These works are based on neural tangents, wherein, the gradient of the network output with respect to the network parameters known as the neural tangent features (NTFs) are considered as the features. Arora et al. (2019) show that at randomised initialisation of weights, the kernel matrix associated with the NTFs, known as the neural tangent kernel (NTK) converges to a deterministic matrix and that optimisation and generalisation of infinite width deep neural networks is characterised by this deterministic kernel matrix. Cao and Gu (2019) provided generalisation bounds in terms of the NTK matrix. Arora et al. (2019) also proposed a pure-kernel method based on CNTK (NTK of convolutional neural networks, i.e., CNNs) which significantly outperformed the previous state-of-the-art kernel methods. The NTK either as an interpretation or as a method in itself has been very successful. Nevertheless it has some open issues namely i) non-interpretable: the kernel is the inner product of gradients and has no physical interpretation, ii) no feature learning: the NTFs are random and fixed during training and iii) performance gap: finite width CNN outperforms the infinite width CNTK, i.e., NTK does not fully explain the success of deep learning.\nRecently, Lakshminarayanan and Singh (2020) developed a neural path (NP) framework to provide a kernel interpretation for deep learning that addresses the open issues in the current NTK framework. Here, DNNs with ReLU activations are considered, and the gates (on/off state of ReLU) are encoded in the so called neural path feature (NPF) and the weights in the network in the so called neural path value (NPV). The key findings can be broken into the following steps.\nStep 1: The NPFs and NPV are decoupled. Gates are treated as masks, which are held in a separate feature network and applied to the main network called the value network. This enables one to study the various kinds of gates (i.e., NPFs), such as random gates (of a randomly initialised network), semi-learnt gates (sampled at an intermediate epoch during training), and learnt gates (sampled from a fully trained network). This addresses the feature learning issue.\nStep 2: When the gates/masks are decoupled and applied externally it follows that NTK = const ⇥ NPK, at random initialisation of weights. For a pair of input examples, NPK is a similarity measure\n1Introduced for the first time in the work of Lakshminarayanan and Singh (2020).\nthat depends on the size of the sub-network formed by the gates that are active simultaneously for examples. 
Step 3: The CNTK performs better than random gates/masks, and gates/masks from fully trained networks perform better than the CNTK. This explains the performance gap between CNN and CNTK. It was also observed (on standard datasets) that when learnt gates/masks are used, the weights of the value network can be reset and re-trained from scratch without significant loss of performance." }, { "heading": "1.1 CONTRIBUTIONS IN THIS WORK", "text": "We attribute the success of deep learning to the following two key ingredients: (i) a composite kernel with gates as fundamental building blocks and (ii) allowing the gates to learn/adapt during training. Formally, we extend the NP framework of Lakshminarayanan and Singh (2020) as explained below.\n• Composite Kernel: The NPK matrix has a composite (architecture-dependent) structure.\n1. Fully-connected networks: $H^{fc}$ is the Hadamard product of the input data Gram matrix and the kernel matrices corresponding to the binary gating features of the individual layers.\n2. Residual networks (ResNets) with skip connections: $H^{res}$ assumes a sum-of-products form. In particular, consider a ResNet with $(b+2)$ blocks and $b$ skip connections. Within this ResNet there are $2^b$ possible dense networks, and then $H^{res} = \sum_{i=1}^{2^b} C_i H^{fc}_i$, where the $C_i > 0$ are positive constants based on the normalisation layers.\n3. Convolutional neural networks (CNNs) with pooling: $H^{cnn}$ is rotation invariant.\n• Gate Learning: We show that learnt gates perform better than random gates. Starting with the setup of Lakshminarayanan and Singh (2020), we build combinatorially many models by\n1. permuting the order of the layers when we apply them as external masks, and 2. having two types of modes based on the input provided to the value network, namely i) ‘standard’: the input is the actual image, and ii) ‘all-ones’: the input is a tensor with all entries equal to ‘1’.\nWe observe in our experiments that the performance is robust to such combinatorial variations.\nMessage: This work, along with that of Lakshminarayanan and Singh (2020), provides a paradigm shift in understanding deep learning. Here, gates play a central role. Each gate is related to a hyperplane, and the gates together form layer-level binary features whose kernels are the base kernels. Laying out these binary features depth-wise gives rise to a product of the base kernels. The skip connections give a ‘sum of products’ structure, and convolution with pooling gives rotation invariance.\nOrganisation: Section 2 describes the network architectures, namely fully connected, convolutional and residual, which we take up for theoretical analysis. Section 3 extends the neural path framework to CNNs and ResNets. Section 4 explains the composite kernel. Section 5 connects the NTK and NPK for CNNs and ResNets. Section 6 consists of numerical experiments." }, { "heading": "2 ARCHITECTURES: FULLY CONNECTED, CONVOLUTIONAL AND RESIDUAL", "text": "In this section, we present the three architectures that we take up for theoretical analysis. These are i) fully connected (FC or FC-DNN), ii) convolutional (CNN) and iii) residual (ResNet). In what follows, $[n]$ is the set $\{1, \ldots, n\}$, and the dataset is given by $(x_s, y_s)_{s=1}^{n} \in \mathbb{R}^{d_{in}} \times \mathbb{R}$.\nFully Connected: We consider fully connected networks with width ‘$w$’ and depth ‘$d$’.
CNN: We consider a 1-dimensional convolutional neural network with circular convolutions (see Table 2), with $d_{cv}$ convolutional layers ($l = 1, \ldots, d_{cv}$), followed by a global-average/max-pooling layer ($l = d_{cv}+1$) and $d_{fc}$ FC layers ($l = d_{cv}+2, \ldots, d_{cv}+d_{fc}+1$). The convolutional window size is $w_{cv} < d_{in}$, the number of filters per convolutional layer is $w$, and the width of the FC layers is also $w$.\nDefinition 2.1 (Circular Convolution). For $x \in \mathbb{R}^{d_{in}}$, $i \in [d_{in}]$ and $r \in \{0, \ldots, d_{in}-1\}$, define:\n(i) $i \oplus r = i + r$ for $i + r \le d_{in}$, and $i \oplus r = i + r - d_{in}$ for $i + r > d_{in}$.\n(ii) $rot(x, r)(i) = x(i \oplus r)$, $i \in [d_{in}]$.\n(iii) $q_{x,\Theta}(i_{fout}, i_{out}, l) = \sum_{i_{cv}, i_{in}} \Theta(i_{cv}, i_{in}, i_{out}, l) \cdot z_{x,\Theta}(i_{fout} \oplus (i_{cv}-1), i_{in}, l-1)$, where $i_{in}/i_{out}$ are the indices (taking values in $[w]$) of the input/output filters, $i_{cv}$ denotes the indices of the convolutional window (taking values in $[w_{cv}]$) between input and output filters $i_{in}$ and $i_{out}$, and $i_{fout}$ denotes the indices (taking values in $[d_{in}]$, the dimension of the input features) of the individual nodes in a given output filter.\nDefinition 2.2 (Pooling). Let $G^{pool}_{x,\Theta}(i_{fout}, i_{out}, d_{cv}+1)$ denote the pooling mask; then we have\n$z_{x,\Theta}(i_{out}, d_{cv}+1) = \sum_{i_{fout}} z_{x,\Theta}(i_{fout}, i_{out}, d_{cv}) \cdot G^{pool}_{x,\Theta}(i_{fout}, i_{out}, d_{cv}+1)$,\nwhere in the case of global-average-pooling $G^{pool}_{x,\Theta}(i_{fout}, i_{out}, d_{cv}+1) = \frac{1}{d_{in}}, \forall i_{out} \in [w], i_{fout} \in [d_{in}]$, and in the case of max-pooling, for a given $i_{out} \in [w]$, $G^{pool}_{x,\Theta}(i_{max}, i_{out}, d_{cv}+1) = 1$ where $i_{max} = \arg\max_{i_{fout}} z_{x,\Theta}(i_{fout}, i_{out}, d_{cv})$, and $G^{pool}_{x,\Theta}(i_{fout}, i_{out}, d_{cv}+1) = 0, \forall i_{fout} \neq i_{max}$.\nResNet: We consider ResNets with ‘$(b+2)$’ blocks and ‘$b$’ skip connections between the blocks (Figure 1). Each block is an FC-DNN of depth ‘$d_{blk}$’ and width ‘$w$’. Here, $\gamma^{pre}_i, \gamma^{post}_i, i \in [b]$ are normalisation variables.\nDefinition 2.3 (Sub FC-DNNs). Let $2^{[b]}$ denote the power set of $[b]$ and let $\mathcal{J} \in 2^{[b]}$ denote any subset of $[b]$. Define the ‘$\mathcal{J}$-th’ sub-FC-DNN of the ResNet to be the fully connected network obtained by ignoring/removing the skip connections $skip_j, \forall j \in \mathcal{J}$ (see Figure 1)." }, { "heading": "3 NEURAL PATH FRAMEWORK", "text": "In this section, we extend the neural path framework developed by Lakshminarayanan and Singh (2020) (henceforth LS2020) to the CNN and ResNet architectures described in the previous section. The neural path framework exploits the gating property of the ReLU activation, which can be thought of as a gate/mask that blocks/allows its pre-activation input depending on its 0/1 state (0 if the pre-activation is negative and 1 if it is positive). The key idea here is to break a DNN (with ReLUs) into paths, and express its output as a summation of the contributions of the paths. The contribution of a path is the product of the signal in its input node, the weights in the path and the gates in the path. For a DNN with $P$ paths, for an input $x \in \mathbb{R}^{d_{in}}$, the gating information is encoded in a novel neural path feature (NPF), $\phi_{x,\Theta} \in \mathbb{R}^{P}$, and a novel neural path value (NPV), $v_{\Theta} \in \mathbb{R}^{P}$, encodes the weights. The output of the DNN is then the inner product of the NPF and NPV, i.e., $\hat{y}_{\Theta}(x_s) = \langle \phi_{x_s,\Theta}, v_{\Theta} \rangle$ (Proposition 3.4).\nDefinition 3.1. A path starts from an input node, passes through weights, hidden nodes, and normalisation constants and ends at the output node.\nProposition 3.1. The total numbers of paths in the FC-DNN, CNN and ResNet are respectively given by $P_{fc} = d_{in} w^{(d-1)}$, $P_{cnn} = d_{in} (w_{cv} w)^{d_{cv}} w^{(d_{fc}-1)}$ and $P_{res} = d_{in} \cdot \sum_{i=0}^{b} \binom{b}{i} w^{(i+2)d_{blk}-1}$ (the snippet below checks these formulas on small examples).\nNotation 3.1 (Index Maps). The ranges of the index maps $I^f_l, I^{cv}_l, I_l$ are $[d_{in}]$, $[w_{cv}]$ and $[w]$ respectively. The index maps are used to identify the nodes through which a path $p$ passes. Further, let $I_{\mathcal{J}}(p) : [P_{res}] \to 2^{[b]}$ specify the indices of the skip connections ignored in path $p$. Also, we follow the convention that the weights and gating values of layers corresponding to skipped blocks are 1.
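The path counts of Proposition 3.1 can be checked numerically; the following Python sketch uses small illustrative sizes of our own choosing:

```python
from math import comb

def paths_fc(d_in, w, d):
    # P_fc = d_in * w^(d-1)
    return d_in * w ** (d - 1)

def paths_cnn(d_in, w, w_cv, d_cv, d_fc):
    # P_cnn = d_in * (w_cv * w)^d_cv * w^(d_fc - 1)
    return d_in * (w_cv * w) ** d_cv * w ** (d_fc - 1)

def paths_res(d_in, w, b, d_blk):
    # P_res = d_in * sum_i C(b, i) * w^((i + 2) * d_blk - 1)
    return d_in * sum(comb(b, i) * w ** ((i + 2) * d_blk - 1)
                      for i in range(b + 1))

# Illustrative sizes (not the ones used in the experiments).
print(paths_fc(d_in=4, w=8, d=3))                       # 4 * 8^2 = 256
print(paths_cnn(d_in=4, w=8, w_cv=2, d_cv=2, d_fc=2))   # 4 * 16^2 * 8 = 8192
print(paths_res(d_in=4, w=8, b=2, d_blk=1))
```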
Definition 3.2 (Path Activity). The product of the gating values in a path $p$ is its ‘activity’, denoted by $A_{\Theta}(x, p)$. We define:\n(a) $A_{\Theta}(x, p) = \prod_{l=1}^{d-1} G_{x,\Theta}(I_l(p), l)$, for the FC-DNN and ResNet.\n(b) $A_{\Theta}(x, p) = \prod_{l=1}^{d_{cv}+1} G_{x,\Theta}(I^f_l(p), I_l(p), l) \cdot \prod_{l=d_{cv}+2}^{d_{cv}+d_{fc}+1} G_{x,\Theta}(I_l(p), l)$, for the CNN.\nIn the CNN, the pooling layer is accounted for by letting $G = G^{pool}$ for $l = d_{cv}+1$.\nDefinition 3.3 (Bundles of Paths Sharing Weights). Let $\hat{P}_{cnn} = \frac{P_{cnn}}{d_{in}}$, and let $\{B_1, \ldots, B_{\hat{P}_{cnn}}\}$ be a collection of sets such that $\forall i, j \in [\hat{P}_{cnn}], i \neq j$ we have $B_i \cap B_j = \emptyset$ and $\cup_{i=1}^{\hat{P}_{cnn}} B_i = [P_{cnn}]$. Further, if paths $p, p' \in B_i$, then $I^{cv}_l(p) = I^{cv}_l(p'), \forall l = 1, \ldots, d_{cv}$ and $I_l(p) = I_l(p'), \forall l = 0, \ldots, d_{cv}$.\nProposition 3.2. There are exactly $d_{in}$ paths in a bundle.\nDefinition 3.4 (Normalisation Factor). Define $\gamma(\mathcal{J}) = \prod_{j \in \mathcal{J}} \gamma^{pre}_j \cdot \prod_{j' \in [b]} \gamma^{post}_{j'}$.\nWeight sharing is shown in the cartoon in Figure 2, which shows a CNN with $d_{in} = 3, w = 1, w_{cv} = 2, d_{cv} = 3, d_{fc} = 0$. Here, the red coloured paths all share the same weights $\Theta(1, 1, 1, l), l = 1, 2, 3$ and the blue coloured paths all share the same weights $\Theta(2, 1, 1, l), l = 1, 2, 3$.\nDefinition 3.5 (Neural Path Value). The product of the weights and normalisation factors in a path $p$ is its ‘value’. The value of a bundle of paths is the value of any path in that bundle. The path/bundle values are denoted by $v_{\Theta}(p)/v_{\Theta}(B_{\hat{p}})$ and are defined as follows:\n(a) $v_{\Theta}(p) = \prod_{l=1}^{d} \Theta(I_{l-1}(p), I_l(p), l)$.\n(b) $v_{\Theta}(B_{\hat{p}}) = \prod_{l=1}^{d_{cv}} \Theta(I^{cv}_l(p), I_{l-1}(p), I_l(p), l) \cdot \prod_{l=d_{cv}+2}^{d_{cv}+d_{fc}+1} \Theta(I_{l-1}(p), I_l(p), l)$, for any $p \in B_{\hat{p}}$.\n(c) $v_{\Theta}(p) = \prod_{l=1}^{d} \Theta(I_{l-1}(p), I_l(p), l) \cdot \gamma(I_{\mathcal{J}}(p))$.\nThe neural path value is defined as $v_{\Theta} = (v_{\Theta}(p), p \in [P_{fc}]) \in \mathbb{R}^{P_{fc}}$, $v_{\Theta} = (v_{\Theta}(B_{\hat{p}}), \hat{p} \in [\hat{P}_{cnn}]) \in \mathbb{R}^{\hat{P}_{cnn}}$, and $v_{\Theta} = (v_{\Theta}(p), p \in [P_{res}]) \in \mathbb{R}^{P_{res}}$ for the FC-DNN, CNN and ResNet respectively.\nProposition 3.3 (Rotational Invariance). The internal variables in the convolutional layers are circularly symmetric, i.e., for $r \in \{0, \ldots, d_{in}-1\}$ it follows that (i) $z_{rot(x,r),\Theta}(i_{fout}, \cdot, \cdot) = z_{x,\Theta}(i_{fout} \oplus r, \cdot, \cdot)$, (ii) $q_{rot(x,r),\Theta}(i_{fout}, \cdot, \cdot) = q_{x,\Theta}(i_{fout} \oplus r, \cdot, \cdot)$ and (iii) $G_{rot(x,r),\Theta}(i_{fout}, \cdot, \cdot) = G_{x,\Theta}(i_{fout} \oplus r, \cdot, \cdot)$.\nDefinition 3.6. The neural path feature (NPF) corresponding to a path $p$ is given by\n(a) $\phi_{x,\Theta}(p) = x(I^f_0(p)) A_{\Theta}(x, p)$ for the FC-DNN and ResNet.\n(b) $\phi_{x,\Theta}(\hat{p}) = \sum_{p \in B_{\hat{p}}} x(I^f_0(p)) A_{\Theta}(x, p)$ for the CNN.\nThe NPF is defined as $\phi_{x,\Theta} = (\phi_{x,\Theta}(p), p \in [P_{fc}]) \in \mathbb{R}^{P_{fc}}$, $\phi_{x,\Theta} = (\phi_{x,\Theta}(B_{\hat{p}}), \hat{p} \in [\hat{P}_{cnn}]) \in \mathbb{R}^{\hat{P}_{cnn}}$, and $\phi_{x,\Theta} = (\phi_{x,\Theta}(p), p \in [P_{res}]) \in \mathbb{R}^{P_{res}}$ for the FC-DNN, CNN and ResNet respectively.\nProposition 3.4 (Output = ⟨NPF, NPV⟩). The output of the network can be written as an inner product of the NPF and NPV, i.e., $\hat{y}_{\Theta}(x) = \langle \phi_{x,\Theta}, v_{\Theta} \rangle$ (a numerical check on a toy network is given below)."
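Proposition 3.4 can be verified directly on a toy bias-free ReLU network; the following sketch (the sizes are ours, chosen for illustration) enumerates all paths and compares the path-sum with the standard forward pass:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d_in, w, d = 3, 4, 3                  # d = number of weight layers
Ws = [rng.standard_normal((d_in, w)),
      rng.standard_normal((w, w)),
      rng.standard_normal((w, 1))]    # single linear output node, no biases
x = rng.standard_normal(d_in)

# Standard forward pass, recording the 0/1 gates G(., l) of each hidden layer.
h, gates = x, []
for W in Ws[:-1]:
    pre = h @ W
    gates.append((pre > 0).astype(float))
    h = np.maximum(pre, 0)
y_hat = (h @ Ws[-1]).item()

# Path-sum: y = sum_p x(i0) * A(x, p) * v(p) over all P = d_in * w^(d-1) paths.
y_paths = 0.0
for i0, j1, j2 in itertools.product(range(d_in), range(w), range(w)):
    value = Ws[0][i0, j1] * Ws[1][j1, j2] * Ws[2][j2, 0]   # NPV v(p)
    activity = gates[0][j1] * gates[1][j2]                 # path activity A(x, p)
    y_paths += x[i0] * activity * value

assert np.isclose(y_hat, y_paths)
```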
}, { "heading": "4 NEURAL PATH KERNEL: COMPOSITE KERNEL BASED ON SUB-NETWORKS", "text": "In this section, we discuss the properties of the neural path kernel (NPK) associated with the NPFs defined in Section 3. Recall that a co-ordinate of the NPF can be non-zero only if the corresponding path is active. Consequently, the NPK for a pair of input examples is a similarity measure that depends on the number of paths that are active for both examples. Such common active paths are captured in a quantity denoted by $\Lambda$ (Definition 4.2). The number of active paths is in turn dependent on the number of active gates in each layer, a fact that endows the NPK with a hierarchical/composite structure. Gates are the basic building blocks, and the gates in a layer form a $w$-dimensional binary feature whose kernels are the base kernels. When the layers are laid out depth-wise, we obtain a product of the base kernels. When skip connections are added, we obtain a sum of products of base kernels. And the presence of convolution with pooling provides rotational invariance.\nDefinition 4.1. Define the NPK matrix to be $H_{\Theta} = \Phi_{\Theta}^{\top} \Phi_{\Theta}$, where $\Phi_{\Theta} = (\phi_{x_1,\Theta}, \ldots, \phi_{x_n,\Theta}) \in \mathbb{R}^{P \times n}$ is the NPF matrix.\nDefinition 4.2. Define $\Lambda_{\Theta}(i, x_s, x_{s'}) = |\{p \in [P] : I_0(p) = i, A_{\Theta}(x_s, p) = A_{\Theta}(x_{s'}, p) = 1\}|$ to be the total number of paths that are ‘active’ for both $x_s$ and $x_{s'}$ and pass through input node $i$.\nDefinition 4.3 (Layer-wise Kernel). Let $G_{x,\Theta}(\cdot, l) \in \mathbb{R}^{w}$ be the $w$-dimensional feature of the gating values in layer $l$ for input $x \in \mathbb{R}^{d_{in}}$. Define the layer-wise kernels:\n$H^{lyr}_{l,\Theta}(s, s') = \langle G_{x_s,\Theta}(\cdot, l), G_{x_{s'},\Theta}(\cdot, l) \rangle$\nLemma 4.1 (Product Kernel). Let $H^{fc}$ denote the NPK of an FC-DNN; for a diagonal matrix $D \in \mathbb{R}^{d_{in} \times d_{in}}$ with strictly positive entries and $u, u' \in \mathbb{R}^{d_{in}}$, let $\langle u, u' \rangle_D = \sum_{i=1}^{d_{in}} D(i) u(i) u'(i)$. Then (verified numerically below)\n$H^{fc}_{\Theta}(s, s') = \langle x_s, x_{s'} \rangle_{\Lambda(\cdot, x_s, x_{s'})} = \langle x_s, x_{s'} \rangle \prod_{l=1}^{d-1} H^{lyr}_{l,\Theta}(s, s')$\nLemma 4.2 (Sum of Products Kernel). Let $H^{res}_{\Theta}$ be the NPK of the ResNet, and $H^{\mathcal{J}}_{\Theta}$ be the NPK of the sub-FC-DNN within the ResNet obtained by ignoring those skip connections in the set $\mathcal{J}$. Then,\n$H^{res}_{\Theta} = \sum_{\mathcal{J} \in 2^{[b]}} H^{\mathcal{J}}_{\Theta}$\nLemma 4.3 (Rotationally Invariant Kernel). Let $H^{cnv}_{\Theta}$ denote the NPK of a CNN; then\n$H^{cnv}_{\Theta}(s, s') = \sum_{r=0}^{d_{in}-1} \langle x_s, rot(x_{s'}, r) \rangle_{\Lambda(\cdot, x_s, rot(x_{s'}, r))} = \sum_{r=0}^{d_{in}-1} \langle rot(x_s, r), x_{s'} \rangle_{\Lambda(\cdot, rot(x_s, r), x_{s'})}$"
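The product form in Lemma 4.1 can likewise be checked numerically; the sketch below (sizes ours) compares the explicit sum over paths with the inner product of the inputs times the product of the layer-wise gate overlaps:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d_in, w, d = 3, 5, 3
Ws = [rng.standard_normal((d_in, w))] + \
     [rng.standard_normal((w, w)) for _ in range(d - 2)]

def layer_gates(x):
    """Binary gating features G(., l) of a bias-free ReLU network."""
    h, gates = x, []
    for W in Ws:
        pre = h @ W
        g = (pre > 0).astype(float)
        gates.append(g)
        h = pre * g
    return gates

xs, xt = rng.standard_normal(d_in), rng.standard_normal(d_in)
Gs, Gt = layer_gates(xs), layer_gates(xt)

# Product form of Lemma 4.1: <x_s, x_s'> * prod_l <G_s(., l), G_s'(., l)>.
H_prod = float(xs @ xt) * np.prod([gs @ gt for gs, gt in zip(Gs, Gt)])

# Direct form: sum over all paths of phi_s(p) * phi_s'(p).
H_direct = 0.0
for p in itertools.product(range(d_in), *[range(w)] * (d - 1)):
    i0, hidden = p[0], p[1:]
    act_s = np.prod([Gs[l][j] for l, j in enumerate(hidden)])
    act_t = np.prod([Gt[l][j] for l, j in enumerate(hidden)])
    H_direct += xs[i0] * xt[i0] * act_s * act_t

assert np.isclose(H_prod, H_direct)
```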
}, { "heading": "5 MAIN THEORETICAL RESULT", "text": "In this section, we proceed with the final step in extending the neural path theory to CNNs and ResNets. As in LS2020, we first describe the deep gated network (DGN) setup that decouples the NPFs and NPV, and follow it up with the main result that connects the NPK and the NTK in the DGN setting.\nThe DGN setup was introduced by LS2020 to analytically characterise the role played by the gates in a ‘standalone’ manner. The DGN has two networks, namely the feature network parameterised by $\Theta^f \in \mathbb{R}^{d^f_{net}}$, which holds the NPFs (i.e., the gating information), and a value network parameterised by $\Theta^v \in \mathbb{R}^{d^v_{net}}$, which holds the NPV. The combined parameterisation is denoted by $\Theta^{DGN} = (\Theta^f, \Theta^v) \in \mathbb{R}^{d^f_{net} + d^v_{net}}$. Thus the learning problem in the DGN is $\hat{y}_{\Theta^{DGN}}(x) = \langle \phi_{x,\Theta^f}, v_{\Theta^v} \rangle$.\nDefinition 5.1. The DGN has 4 regimes, namely decoupled learning (DL), fixed learnt (FL), fixed random-dependent initialisation (FR-DI) and fixed random-independent initialisation (FR-II). In all the regimes $\hat{y}_{\Theta^{DGN}}$ is the output, and $\Theta^v_0$ is always initialised at random and is trainable. However, the regimes differ based on i) the trainability of $\Theta^f$ and ii) the initialisation $\Theta^f_0$, as described below.\nDL: $\Theta^f$ is trainable, and $\Theta^f_0$ and $\Theta^v_0$ are random and statistically independent.\nFL: $\Theta^f$ is non-trainable, and $\Theta^f_0$ is pre-trained; $\Theta^v_0$ is statistically independent of $\Theta^f_0$.\nFR-II: $\Theta^f$ is non-trainable, and $\Theta^f_0$ and $\Theta^v_0$ are random and statistically independent.\nFR-DI: $\Theta^f$ is non-trainable, and $\Theta^f_0 = \Theta^v_0$.\nDGN Regimes: The flexibility in a DGN is that a) $\Theta^f$ can be trainable/non-trainable and b) $\Theta^f_0$ can be random or pre-trained using $\hat{y}_{\Theta^f}$ as the output (Definition 5.1). By using the DGN setup we can study the role of gates by comparing (a) learnable (DL) vs fixed gates (FL, FR-DI, FR-II), (b) random (FR-DI, FR-II) vs learnt gates (FL) and (c) dependent (FR-DI) vs independent initialisations (FR-II). In the DL regime a ‘soft-ReLU’ gate (with parameter $\beta > 0$) is chosen to enable gradient flow through the feature network. A minimal sketch of the fixed-gate DGN forward pass is given at the end of this section.\nProposition 5.1. Let $K_{\Theta^{DGN}}$ be the NTK matrix of the DGN; then $K_{\Theta^{DGN}} = K^v_{\Theta^{DGN}} + K^f_{\Theta^{DGN}}$, with\nOverall NTK: $K_{\Theta^{DGN}}(s, s') = \langle \psi_{x_s,\Theta^{DGN}}, \psi_{x_{s'},\Theta^{DGN}} \rangle$, where $\psi_{x,\Theta^{DGN}} = \nabla_{\Theta^{DGN}} \hat{y}_{\Theta^{DGN}}(x) \in \mathbb{R}^{d_{net}}$;\nValue NTK: $K^v_{\Theta^{DGN}}(s, s') = \langle \psi^v_{x_s,\Theta^{DGN}}, \psi^v_{x_{s'},\Theta^{DGN}} \rangle$, where $\psi^v_{x,\Theta^{DGN}} = \nabla_{\Theta^v} \hat{y}_{\Theta^{DGN}}(x) \in \mathbb{R}^{d^v_{net}}$;\nFeature NTK: $K^f_{\Theta^{DGN}}(s, s') = \langle \psi^f_{x_s,\Theta^{DGN}}, \psi^f_{x_{s'},\Theta^{DGN}} \rangle$, where $\psi^f_{x,\Theta^{DGN}} = \nabla_{\Theta^f} \hat{y}_{\Theta^{DGN}}(x) \in \mathbb{R}^{d^f_{net}}$.\nRemark: There are two separate NTKs, one corresponding to the value network and one to the feature network. In the case of the fixed regimes, $K^f = 0$.\nTheorem 5.1. Assume that (i) $\Theta^v_0$ is statistically independent of $\Theta^f_0$ and (ii) the entries of $\Theta^v_0$ are i.i.d. symmetric Bernoulli over $\{-\sigma, +\sigma\}$. Let $\sigma_{fc} = \frac{c_{scale}}{\sqrt{w}}$ and $\sigma_{cv} = \frac{c_{scale}}{\sqrt{w \cdot w_{cv}}}$ for the FC and convolutional layers. As $w \to \infty$, we have:\n(i) $K^v_{\Theta^{DGN}_0} \to \beta_{fc} H_{\Theta^f_0}$, with $\beta_{fc} = d\,\sigma_{fc}^{2(d-1)}$, for the FC-DNN,\n(ii) $K^v_{\Theta^{DGN}_0} \to \beta_{cv} H_{\Theta^f_0}$, with $\beta_{cv} = \frac{1}{d_{in}^2} \left( d_{cv}\,\sigma_{cv}^{2(d_{cv}-1)} \sigma_{fc}^{2 d_{fc}} + d_{fc}\,\sigma_{cv}^{2 d_{cv}} \sigma_{fc}^{2(d_{fc}-1)} \right)$, for the CNN with GAP,\n(iii) $K^v_{\Theta^{DGN}_0} \to \sum_{\mathcal{J} \in 2^{[b]}} \beta^{\mathcal{J}}_{rs} H^{\mathcal{J}}_{\Theta^f_0}$, with $\beta^{\mathcal{J}}_{rs} = (|\mathcal{J}|+2) d_{blk}\; \sigma_{fc}^{2\left((|\mathcal{J}|+2) d_{blk} - 1\right)} \gamma(\mathcal{J})^2$, for the ResNet.\n• $\beta_{fc}, \beta_{cv}, \beta_{rs}$: The simplest of all is $\beta_{fc} = d\,\sigma_{fc}^{2(d-1)}$: the factor $d$ is due to the fact that there are $d$ weights in a path; in the exponent of $\sigma_{fc}$, the factor $(d-1)$ arises because the gradient with respect to a particular weight is the product of all the weights in the path excluding the said weight itself, and the factor of 2 is due to the fact that the NTK is an inner product of two gradients. $\beta_{cv}$ is similar to $\beta_{fc}$ with separate bookkeeping for the convolutional and FC layers, and the $\frac{1}{d_{in}^2}$ is due to the GAP layer. In $\beta_{rs}$, the $\beta_{fc}$ of each sub-FC-DNN within the ResNet is scaled by the corresponding normalisation factor and the terms are summed.\n• Decoupling: In a DNN with ReLU (and in the FR-DI regime of the DGN), the NPV and NPF are not statistically independent at initialisation, i.e., Theorem 5.1 does not hold. However, the current state-of-the-art analysis (Jacot et al., 2018; Arora et al., 2019; Cao and Gu, 2019) is in the infinite-width ($w \to \infty$) regime, wherein the change in the activations during training is only of order $\frac{1}{\sqrt{w}}$, which goes to 0 as $w \to \infty$. Hence, though the assumption in Theorem 5.1 may not hold exactly, it is not a strong assumption to fix the NPFs for the purpose of analysis. Once the NPFs are fixed, it is only natural to statistically decouple the NPV from the fixed NPFs (Theorem 5.1 holds in the FR-II, FL and DL regimes).\n• Gates are Key: In simple terms, Theorem 5.1 says that if the gates/masks are known, then the weights are expendable, a fact which we also verify in our extensive experiments."
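For concreteness, the following is a minimal sketch of the forward pass in the fixed-gate DGN regimes: the feature network produces 0/1 masks, and the value network is a purely linear network gated by these externally supplied masks (the soft-ReLU used in the DL regime is not shown; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, w, d = 3, 16, 4
theta_f = [rng.standard_normal((d_in, w))] + \
          [rng.standard_normal((w, w)) for _ in range(d - 2)]
theta_v = [rng.standard_normal((d_in, w))] + \
          [rng.standard_normal((w, w)) for _ in range(d - 2)] + \
          [rng.standard_normal((w, 1))]

def masks(x, weights):
    """Feature network: a ReLU forward pass that emits 0/1 gating masks."""
    h, ms = x, []
    for W in weights:
        pre = h @ W
        m = (pre > 0).astype(float)
        ms.append(m)
        h = pre * m
    return ms

def dgn_forward(x_v, ms, weights):
    """Value network: linear layers gated by externally supplied masks."""
    h = x_v
    for W, m in zip(weights[:-1], ms):
        h = (h @ W) * m
    return (h @ weights[-1]).item()

x = rng.standard_normal(d_in)
ms = masks(x, theta_f)
print(dgn_forward(x, ms, theta_v))              # 'standard' input
print(dgn_forward(np.ones(d_in), ms, theta_v))  # 'all-ones' input variant
```

Permuting the order of the masks in `ms`, or feeding an all-ones vector to `dgn_forward`, reproduces the combinatorial variations studied in Section 6.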
}, { "heading": "6 NUMERICAL EXPERIMENTS", "text": "We now show via experiments that gates indeed play a central role in deep learning. For this we use the DGN setup (Figure 4) to create models in the 4 regimes, namely DL, FL, FR-II and FR-DI. In each of the 4 regimes, we create combinatorially many models via a) permutation of the layers when they are copied from the feature to the value network, and b) setting the input to the value network to 1 (in training and testing), i.e., a tensor with all its entries equal to 1. We observe that in all the 4 regimes, the models are robust to these combinatorial variations.\nSetup: The datasets are MNIST and CIFAR-10. For CIFAR-10, we use Figure 4 with 3×3 windows and 128 filters in each layer. For MNIST, we use FC layers instead of the convolutional layers. All the FC-DNNs and CNNs are trained with Adam [10] (step-size = $3 \cdot 10^{-4}$, batch size = 32). A ResNet called DavidNet [12] was trained with SGD (step-size = 0.5, batch size = 256). We use $\beta = 10$.\nReporting of Statistics: The results are summarised in Figure 4. For the FC-DNN and CNN, in each of the 4 regimes, we train $48 = 2\,(x^v = x \text{ or } x^v = \mathbf{1}) \times 24\,(\text{layer permutations})$ models. Each of these models is trained to almost 100% accuracy, and the test performance is taken to be the best obtained in a given run. Each of the 48 models is run only once. For the ResNet, we train only two models for each of the 4 regimes (without permuting the layers, but with the image as well as the ‘all-ones’ input variation), and here each model is run 5 times.\n• Result Discussion: Recall that in the regimes FR-II and FR-DI the gates are fixed and random, and only $\Theta^v$ is trained. In the DL regime, both $\Theta^f$ and $\Theta^v$ are trained, and in the FL regime $\Theta^f$ is pre-trained and fixed, and only $\Theta^v$ is trained. In the following discussion, we compare the performance of the models in the various regimes, along with the performance of the CNTK of Arora et al. (2019) (77.43% on CIFAR-10) and the performance of a standard DNN with ReLU. The main observations are listed below (those of Lakshminarayanan and Singh (2020) are also revisited for the sake of completeness).\n1. Decoupling: There is no performance difference between FR-II and FR-DI. Further, decoupled learning of gates (DL) performs significantly better than fixed random gates (FR), and the gap between a standard DNN with ReLU and DL is less than 3%. This marginal performance loss seems to be a worthy trade-off for the fundamental insights of Theorem 5.1 under the decoupling assumption.\n2. Recovery: The fixed learnt regime (FL) shows that, using the gates of a pre-trained ReLU network, performance can be recovered by training the NPV. Also, by interpreting the input-dependent component of a model as the features and the input-independent component as the weights, it makes sense to look at the gates/NPFs as the hidden features and the NPV as the weights.\n3. Random Gates: FR-II performs well in all the experiments (note that for a 10-class problem, a random classifier would achieve only 10% test accuracy). Given the observation that the gates are the true features, and the fact that there is no learning in the gates in the fixed regime, the performance of fixed random gates can be purely attributed to the in-built structure.\n4. Gate Learning: We group the models into three sets, S1 = {ReLU, FL, DL}, S2 = {FR} and S3 = {CNTK}, and explain the difference in performance due to gate learning. S2 and S3 have no gate learning. However, S3, due to its infinite width, has better averaging, resulting in a well-formed kernel, and hence performs better than S2, which is of finite width. Thus, the difference between S2 and S3 can be attributed to finite versus infinite width. Both S1 and S2 are of finite width, and hence conventional feature learning happens in both S1 and S2; but S1, with gate learning, is better (77.5% or above on CIFAR-10) than S2 (67% on CIFAR-10), which has no gate learning. Thus neither finite width nor conventional feature learning explains the difference between S1 and S2; ‘gate learning’ discriminates the regimes S1, S2 and S3 better than the conventional feature learning view.\n5. Permutation and Input Invariance: The performance (in all the 4 regimes) is robust to ‘all-ones’ inputs. Note that in the ‘all-ones’ case, the input information affects the models only via the gates. Here, all the entries of the input Gram matrix are identical, and the NPK depends only on $\Lambda$, which measures the sub-network that is simultaneously active for the various input pairs. The performance (in all the 4 regimes) is also robust to permutation of the layers. This can be attributed to the product $\prod_{l=1}^{d-1} H^{lyr}_{l,\Theta}$ of the layer-level base kernels being order invariant.
6. Visualisation: Figure 5 compares the hidden layer outputs of a standard DNN with ReLU with 4 layers, and those of a DGN which copies the gates from the standard DNN but reverses the gating masks when applying them to the value network. Also, the value network of the DGN was provided with a fixed random input (as shown in Figure 5). Both models achieved about 80% test accuracy, an otherwise surprising outcome; yet, as per the theory developed in this paper, a random input to the value network should not have much effect on performance, and this experiment confirms the same." }, { "heading": "7 RELATED AND FUTURE WORK", "text": "Our paper extended the work of Lakshminarayanan and Singh (2020) to CNNs and ResNets. Further, we pointed out the composite nature of the underlying kernel. The experiments with permuted masks and constant inputs are also significant and novel pieces of evidence, which to our knowledge are the first of their kind in the literature. Gated linearity was studied recently by Fiat et al. (2019); however, they considered only single-layered gated networks. Jacot et al. (2018); Arora et al. (2019); Cao and Gu (2019); Jacot et al. (2019); Du et al. (2018) have all used the NTK framework to understand questions related to optimisation and/or generalisation in DNNs. We now discuss future work below.\n1. Base Kernel: At randomised initialisation, for each $l$, $\frac{H^{lyr}_{l,\Theta_0}(s,s')}{w}$ is the fraction of gates that are simultaneously active for input examples $s, s'$, which in the limit of infinite width is equal to $\frac{1}{2} - \frac{\mathrm{angle}(z_{x_s}(\cdot,l),\, z_{x_{s'}}(\cdot,l))}{2\pi}$ (Xie et al., 2017); this is illustrated by the Monte Carlo sketch below. Further, due to the property of ReLU to pass only positive components, we conjecture that the pairwise angle between input examples measured at the hidden layer outputs is a decreasing function of depth and that, as $l \to \infty$, $\frac{H^{lyr}_{l,\Theta_0}(s,s')}{w} \to \frac{1}{2}, \forall s, s' \in [n]$. We reserve a formal statement on the behaviour of $H^{lyr}_{l,\Theta_0}$ for the future.
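The infinite-width expression for the base kernel can be illustrated by a quick Monte Carlo estimate: for i.i.d. Gaussian gate weights, the fraction of gates simultaneously active for two inputs concentrates around $\frac{1}{2} - \frac{\theta}{2\pi}$, where $\theta$ is the angle between the inputs (a standard property of random hyperplanes, assumed here):

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_gates = 8, 200000

u = rng.standard_normal(dim)
v = rng.standard_normal(dim)
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Each row of W is one random gate (hyperplane); a gate is co-active for
# (u, v) when both pre-activations are positive.
W = rng.standard_normal((n_gates, dim))
co_active = np.mean((W @ u > 0) & (W @ v > 0))

print(co_active, 0.5 - theta / (2 * np.pi))  # the two should nearly match
```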
2. Multiple Kernel Learning (Gönen and Alpaydın, 2011; Bach et al., 2004; Sonnenburg et al., 2006; Cortes et al., 2009) is the name given to a class of methods that learn a linear or non-linear combination of one or many base kernels. For instance, Cortes et al. (2009) consider polynomial combinations of base kernels, which also have a ‘sum of products’ form. Our experiments do indicate that the learning in the gates (and hence in the underlying base kernels) has a significant impact. Understanding $K^f$ (Proposition 5.1) might be a way to establish the extent and nature of kernel learning in deep learning. It is also interesting to check whether in ResNets the kernels of the sub-FC-DNNs are combined optimally." }, { "heading": "8 CONCLUSION", "text": "We attributed the success of deep learning to the following two key ingredients: (i) a composite kernel with gates as fundamental building blocks and (ii) allowing the gates to learn/adapt during training. We justified our claims theoretically as well as experimentally. This work, along with that of Lakshminarayanan and Singh (2020), provides a paradigm shift in understanding deep learning. Here, gates play a central role. Each gate is related to a hyper-plane, and the gates together form layer-level binary features whose kernels are the base kernels. Laying out these binary features depth-wise gives rise to a product of the base kernels. The skip connections give a ‘sum of products’ structure, and convolution with pooling gives rotation invariance. The learning in the gates further enhances the generalisation capabilities of the models." } ]
2020
null
SP:e17f92caae3e2bd4830eadeb4b268c1c82d43e4d
[ "The authors propose the inclusion of an auxiliary task for training an RL model, where the auxiliary task objective is to learn an abstraction of the state-action space that clusters (s,a) pairs according to their expected return. The authors first describe a basic abstraction learning framework (Z-learning) followed by the extension to Deep RL as an auxiliary task (RCRL). The authors present results in Atari (discrete action) building on Rainbow, showing an improvement compared to baselines on median HNS in the low-data regime, and results on DMControl (continuous action) building on SAC, showing similar or improved performance compared to baselines." ]
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.
[ { "affiliations": [], "name": "REINFORCEMENT LEARNING" }, { "affiliations": [], "name": "Guoqing Liu" }, { "affiliations": [], "name": "Chuheng Zhang" }, { "affiliations": [], "name": "Li Zhao" }, { "affiliations": [], "name": "Tao Qin" }, { "affiliations": [], "name": "Jinhua Zhu" }, { "affiliations": [], "name": "Jian Li" }, { "affiliations": [], "name": "Nenghai Yu" }, { "affiliations": [], "name": "Tie-Yan Liu" } ]
[ { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Aravind Srinivas", "Michael Laskin", "Pieter Abbeel" ], "title": "Curl: Contrastive unsupervised representations for reinforcement learning", "venue": "arXiv preprint arXiv:2004.04136,", "year": 2020 }, { "authors": [ "Steven C Suddarth", "YL Kergosien" ], "title": "Rule-injection hints as a means of improving network performance and learning time", "venue": "In European Association for Signal Processing Workshop,", "year": 1990 }, { "authors": [ "Richard S Sutton", "Joseph Modayil", "Michael Delp", "Thomas Degris", "Patrick M Pilarski", "Adam White", "Doina Precup" ], "title": "Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "Carles Gelada", "Saurabh Kumar", "Jacob Buckman", "Ofir Nachum", "Marc G Bellemare" ], "title": "Deepmdp: Learning continuous latent space models for representation learning", "venue": null, "year": 1906 }, { "authors": [ "Marc Bellemare", "Will Dabney", "Robert Dadashi", "Adrien Ali Taiga", "Pablo Samuel Castro", "Nicolas Le Roux", "Dale Schuurmans", "Tor Lattimore", "Clare Lyle" ], "title": "A geometric perspective on optimal representations for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vincent François-Lavet", "Yoshua Bengio", "Doina Precup", "Joelle Pineau" ], "title": "Combined reinforcement learning via abstract representations", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Wei Shen", "Xiaonan He", "Chuheng Zhang", "Qiang Ni", "Wanchu Dou", "Yan Wang" ], "title": "Auxiliary-task based deep reinforcement learning for participant selection problem in mobile crowdsourcing", "venue": null, "year": 2008 }, { "authors": [ "Amy Zhang", "Rowan McAllister", "Roberto Calandra", "Yarin Gal", "Sergey Levine" ], "title": "Learning invariant representations for reinforcement learning without reconstruction", "venue": "arXiv preprint arXiv:2006.10742,", "year": 2020 }, { "authors": [ "Will Dabney", "André Barreto", "Mark Rowland", "Robert Dadashi", "John Quan", "Marc G Bellemare", "David Silver" ], "title": "The value-improvement path: Towards better representations for reinforcement learning", "venue": null, "year": 2006 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", 
"Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "arXiv preprint arXiv:1710.02298,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Evan Shelhamer", "Parsa Mahmoudieh", "Max Argus", "Trevor Darrell" ], "title": "Loss is its own reward: Selfsupervision for reinforcement learning", "venue": "arXiv preprint arXiv:1612.07307,", "year": 2016 }, { "authors": [ "Daniel Guo", "Bernardo Avila Pires", "Bilal Piot", "Jean-bastien Grill", "Florent Altché", "Rémi Munos", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap latent-predictive representations for multitask reinforcement learning", "venue": null, "year": 2004 }, { "authors": [ "Kuang-Huei Lee", "Ian Fischer", "Anthony Liu", "Yijie Guo", "Honglak Lee", "John Canny", "Sergio Guadarrama" ], "title": "Predictive information accelerates learning in rl", "venue": "arXiv preprint arXiv:2007.12401,", "year": 2020 }, { "authors": [ "Bogdan Mazoure", "Remi Tachet des Combes", "Thang Doan", "Philip Bachman", "R Devon Hjelm" ], "title": "Deep reinforcement and infomax learning", "venue": "arXiv preprint arXiv:2006.07217,", "year": 2020 }, { "authors": [ "Vivek Veeriah", "Matteo Hessel", "Zhongwen Xu", "Janarthanan Rajendran", "Richard L Lewis", "Junhyuk Oh", "Hado P van Hasselt", "David Silver", "Satinder Singh" ], "title": "Discovery of useful questions as auxiliary tasks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Diana Borsa", "André Barreto", "John Quan", "Daniel Mankowitz", "Rémi Munos", "Hado van Hasselt", "David Silver", "Tom Schaul" ], "title": "Universal successor features approximators", "venue": "arXiv preprint arXiv:1812.07626,", "year": 2018 }, { "authors": [ "Vivek Veeriah", "Junhyuk Oh", "Satinder Singh" ], "title": "Many-goals reinforcement learning", "venue": "arXiv preprint arXiv:1806.09605,", "year": 2018 }, { "authors": [ "Tim de Bruin", "Jens Kober", "Karl Tuyls", "Robert Babuška" ], "title": "Integrating state representation learning into deep reinforcement learning", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Elise van der Pol", "Daniel E Worrall", 
"Herke van Hoof", "Frans A Oliehoek", "Max Welling" ], "title": "Mdp homomorphic networks: Group symmetries in reinforcement learning", "venue": null, "year": 2006 }, { "authors": [ "Matteo Hessel", "Hubert Soyer", "Lasse Espeholt", "Wojciech Czarnecki", "Simon Schmitt", "Hado van Hasselt" ], "title": "Multi-task deep reinforcement learning with popart", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Debidatta Dwibedi", "Jonathan Tompson", "Corey Lynch", "Pierre Sermanet" ], "title": "Learning actionable representations from visual observations", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ "Yusuf Aytar", "Tobias Pfaff", "David Budden", "Thomas Paine", "Ziyu Wang", "Nando de Freitas" ], "title": "Playing hard exploration games by watching youtube", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ankesh Anand", "Evan Racah", "Sherjil Ozair", "Yoshua Bengio", "Marc-Alexandre Côté", "R Devon Hjelm" ], "title": "Unsupervised state representation learning in atari", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Thomas Dean", "Robert Givan" ], "title": "Model minimization in markov decision processes", "venue": "In AAAI/IAAI,", "year": 1997 }, { "authors": [ "Robert Givan", "Thomas Dean", "Matthew Greig" ], "title": "Equivalence notions and model minimization in markov decision processes", "venue": "Artificial Intelligence,", "year": 2003 }, { "authors": [ "Lihong Li", "Thomas J Walsh", "Michael L Littman" ], "title": "Towards a unified theory of state abstraction for mdps", "venue": "In ISAIM,", "year": 2006 }, { "authors": [ "Balaraman Ravindran" ], "title": "Smdp homomorphisms: An algebraic approach to abstraction in semi markov decision processes", "venue": null, "year": 2003 }, { "authors": [ "Balaraman Ravindran", "Andrew G Barto" ], "title": "An algebraic approach to abstraction in reinforcement learning", "venue": "PhD thesis, University of Massachusetts at Amherst,", "year": 2004 }, { "authors": [ "Balaraman Ravindran", "Andrew G Barto" ], "title": "Approximate homomorphisms: A framework for nonexact minimization in markov decision processes", "venue": null, "year": 2004 }, { "authors": [ "Jonathan Taylor", "Doina Precup", "Prakash Panagaden" ], "title": "Bounding performance loss in approximate mdp homomorphisms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Ondrej Biza", "Robert Platt" ], "title": "Online abstraction with mdp homomorphisms for deep learning", "venue": "arXiv preprint arXiv:1811.12929,", "year": 2018 }, { "authors": [ "Yuval Tassa", "Saran Tunyasuvunakool", "Alistair Muldal", "Yotam Doron", "Siqi Liu", "Steven Bohez", "Josh Merel", "Tom Erez", "Timothy Lillicrap", "Nicolas Heess" ], "title": "dmcontrol: Software and tasks for continuous control, 2020", "venue": null, "year": 2020 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Hado P van Hasselt", "Matteo Hessel", "John Aslanides" ], "title": "When to use parametric models in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 
}, { "authors": [ "Alex X Lee", "Anusha Nagabandi", "Pieter Abbeel", "Sergey Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": null, "year": 1907 }, { "authors": [ "Denis Yarats", "Amy Zhang", "Ilya Kostrikov", "Brandon Amos", "Joelle Pineau", "Rob Fergus" ], "title": "Improving sample efficiency in model-free reinforcement learning from images", "venue": null, "year": 1910 }, { "authors": [ "Pablo Samuel Castro" ], "title": "Scalable methods for computing state similarity in deterministic markov decision processes", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Dipendra Misra", "Mikael Henaff", "Akshay Krishnamurthy", "John Langford" ], "title": "Kinematic state abstraction and provably efficient rich-observation reinforcement learning", "venue": null, "year": 1911 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "arXiv preprint arXiv:1707.06887,", "year": 2017 }, { "authors": [ "∈ A. A" ], "title": "COMPARISON WITH π-BISIMULATION Similar to Z-irrelevance, a recently proposed state abstraction π-bisimulation (Castro, 2020) is also tied to a behavioral policy π. It is beneficial to compare the coarseness of Z-irrelevance and π-bisimulation. For completeness, we restate the definition of π-bisimulation", "venue": null, "year": 2020 }, { "authors": [ "Misra" ], "title": "Notice that Corollary 4.1.1 is the asymptotic case (n → ∞) for Theorem 4.1. We first provide a sketch proof for Corollary 4.1.1 which ignores sampling issues and thus more illustrative. Later, we provide the proof for Theorem", "venue": null, "year": 2019 }, { "authors": [ "ΦN | |WN" ], "title": "We complete the proof by combining the above results", "venue": null, "year": 2019 }, { "authors": [ "van Hasselt" ], "title": "Hyperparameters. We use exactly the same hyperparameters", "venue": "Unlike CURL (or other previous work such as Jaderberg et al", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning (RL) algorithms can learn representations from high-dimensional inputs, as well as learn policies based on such representations to maximize long-term returns simultaneously. However, deep RL algorithms typically require large numbers of samples, which can be quite expensive to obtain (Mnih et al., 2015). In contrast, it is usually much more sample efficient to learn policies with learned representations/extracted features (Srinivas et al., 2020). To this end, various auxiliary tasks have been proposed to accelerate representation learning in aid of the main RL task (Suddarth and Kergosien, 1990; Sutton et al., 2011; Gelada et al., 2019; Bellemare et al., 2019; François-Lavet et al., 2019; Shen et al., 2020; Zhang et al., 2020; Dabney et al., 2020; Srinivas et al., 2020). Representative examples of auxiliary tasks include predicting the future in either the pixel space or the latent space with reconstruction-based losses (e.g., Jaderberg et al., 2016; Hafner et al., 2019a;b).\nRecently, contrastive learning has been introduced to construct auxiliary tasks and achieves better performance compared to reconstruction based methods in accelerating RL algorithms (Oord et al., 2018; Srinivas et al., 2020). Without the need to reconstruct inputs such as raw pixels, contrastive learning based methods can ignore irrelevant features such as static background in games and learn more compact representations. Oord et al. (2018) propose a contrastive representation learning method based on the temporal structure of state sequence. Srinivas et al. (2020) propose to leverage the prior knowledge from computer vision, learning representations that are invariant to image augmentation. However, existing works mainly construct contrastive auxiliary losses in an unsupervised manner, without considering feedback signals in RL problems as supervision.\nIn this paper, we take a further step to leverage the return feedback to design a contrastive auxiliary loss to accelerate RL algorithms. Specifically, we propose a novel method, called Return-based\n∗This work is conducted at Microsoft Research Asia. The first two authors contributed equally to this work.\nContrastive representation learning for Reinforcement Learning (RCRL). In our method, given an anchor state-action pair, we choose a state-action pair with the same or similar return as the positive sample, and a state-action pair with different return as the negative sample. Then, we train a discriminator to classify between positive and negative samples given the anchor based on their representations as the auxiliary task. The intuition here is to learn state-action representations that capture return-relevant features while ignoring return-irrelevant features.\nFrom a theoretical perspective, RCRL is supported by a novel state-action abstraction, called Zπirrelevance. Zπ-irrelevance abstraction aggregates state-action pairs with similar return distributions under certain policy π. We show that Zπ-irrelevance abstraction can reduce the size of the stateaction space (cf. Appendix A) as well as approximate the Q values arbitrarily accurately (cf. Section 4.1). We further propose a method called Z-learning that can calculate Zπ-irrelevance abstraction with sampled returns rather than the return distribution, which is hardly available in practice. Zlearning can learn Zπ-irrelevance abstraction provably efficiently. 
Our algorithm RCRL can be seen as the empirical version of Z-learning, obtained by making a few approximations such as integrating it with deep RL algorithms and collecting positive pairs within a consecutive segment of the anchor's trajectory.
We conduct experiments on Atari games (Bellemare et al., 2013) and the DeepMind Control suite (Tassa et al., 2018) in the low data regime. The experimental results show that our auxiliary task, combined with Rainbow (Hessel et al., 2017) for discrete control tasks or SAC (Haarnoja et al., 2018) for continuous control tasks, achieves superior performance over other state-of-the-art baselines for this regime. Our method can be further combined with existing unsupervised contrastive learning methods to achieve even better performance. We also perform a detailed analysis of how the representation changes during training with/without our auxiliary loss. We find that a good embedding network assigns similar/dissimilar representations to state-action pairs with similar/dissimilar return distributions, and our algorithm can boost such generalization and speed up training.
Our contributions are summarized as follows:
• We introduce a novel contrastive loss based on return, to learn return-relevant representations and speed up deep RL algorithms.
• We theoretically build the connection between the contrastive loss and a new form of state-action abstraction, which can reduce the size of the state-action space as well as approximate the Q values arbitrarily accurately.
• Our algorithm achieves superior performance against strong baselines on Atari games and the DeepMind Control suite in the low data regime. Besides, the performance can be further enhanced when combined with existing auxiliary tasks." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 AUXILIARY TASK", "text": "In reinforcement learning, auxiliary tasks can be used in both the model-based setting and the model-free setting. In the model-based setting, world models can be used as auxiliary tasks and lead to better performance, as in CRAR (François-Lavet et al., 2019), Dreamer (Hafner et al., 2019a), and PlaNet (Hafner et al., 2019b). Due to the complex components (e.g., the latent transition or reward module) in the world model, such methods are empirically unstable to train and rely on different regularizations to converge. In the model-free setting, many algorithms construct various auxiliary tasks to improve performance, such as predicting the future (Jaderberg et al., 2016; Shelhamer et al., 2016; Guo et al., 2020; Lee et al., 2020; Mazoure et al., 2020), learning value functions with different rewards or under different policies (Veeriah et al., 2019; Schaul et al., 2015; Borsa et al., 2018; Bellemare et al., 2019; Dabney et al., 2020), learning from many goals (Veeriah et al., 2018), or the combination of different auxiliary objectives (de Bruin et al., 2018). Moreover, auxiliary tasks can be designed based on prior knowledge about the environment (Mirowski et al., 2016; Shen et al., 2020; van der Pol et al., 2020) or the raw state representation (Srinivas et al., 2020). Hessel et al. (2019) also apply auxiliary tasks to the multi-task RL setting.
Contrastive learning has seen dramatic progress recently and has been introduced to learn state representations (Oord et al., 2018; Sermanet et al., 2018; Dwibedi et al., 2018; Aytar et al., 2018; Anand et al., 2019; Srinivas et al., 2020). 
Temporal structure (Sermanet et al., 2018; Aytar et al., 2018) and local spatial structure (Anand et al., 2019) have been leveraged for state representation learning via contrastive losses. CPC (Oord et al., 2018) and CURL (Srinivas et al., 2020) adopt contrastive auxiliary tasks to accelerate representation learning and speed up the main RL tasks, by leveraging the temporal structure and image augmentation, respectively. To the best of our knowledge, we are the first to leverage return to construct a contrastive auxiliary task for speeding up the main RL task." }, { "heading": "2.2 ABSTRACTION", "text": "State abstraction (or state aggregation) aggregates states by ignoring irrelevant state information. By reducing the state space, state abstraction can enable efficient policy learning. Different types of abstraction have been proposed in the literature, ranging from fine-grained to coarse-grained, each reducing the state space to a different extent. Bisimulation or model irrelevance (Dean and Givan, 1997; Givan et al., 2003) defines state abstractions under which both the transition and the reward function are kept invariant. By contrast, other types of state abstraction are coarser than bisimulation, such as Qπ-irrelevance or Q∗-irrelevance (see e.g., Li et al., 2006), which keep the Q function invariant under any policy π or under the optimal policy, respectively. There are also works on state-action abstractions, e.g., MDP homomorphism (Ravindran, 2003; Ravindran and Barto, 2004a) and approximate MDP homomorphism (Ravindran and Barto, 2004b; Taylor et al., 2009), which are similar to bisimulation in keeping reward and transition invariant, but extend bisimulation from state abstraction to state-action abstraction.
In this paper, we consider a new form of state-action abstraction, Zπ-irrelevance, which aggregates state-action pairs with the same return distribution and is coarser than bisimulation or homomorphism, which are frequently used as auxiliary tasks (e.g., Biza and Platt, 2018; Gelada et al., 2019; Zhang et al., 2020). However, it is worth noting that Zπ-irrelevance is only used to build the theoretical foundation of our algorithm and to show that our proposed auxiliary task is well-aligned with the main RL task. Representation learning in deep RL is in general very different from aggregating states in the tabular case, though the latter may build a nice theoretical foundation for the former. Here we focus on how to design auxiliary tasks that accelerate representation learning using contrastive learning techniques, and we propose a novel return-based contrastive method based on our proposed Zπ-irrelevance abstraction." }, { "heading": "3 PRELIMINARY", "text": "We consider a Markov Decision Process (MDP), which is a tuple (S, A, P, R, µ, γ) specifying the state space S, the action space A, the state transition probability P(st+1|st, at), the reward R(rt|st, at), the initial state distribution µ ∈ ∆S and the discount factor γ. Also, we denote x := (s, a) ∈ X := S × A to be the state-action pair. A (stationary) policy π : S → ∆A specifies the action selection probability on each state. Following the policy π, the discounted sum of future rewards (or return) is denoted by the random variable Zπ(s, a) = ∑_{t=0}^∞ γ^t R(st, at), where s0 = s, a0 = a, st ∼ P(·|st−1, at−1), and at ∼ π(·|st). We divide the range of return into K equal bins {R0 = Rmin, R1, · · · , RK = Rmax} such that Rk − Rk−1 = (Rmax − Rmin)/K, ∀k ∈ [K], where Rmin (resp. Rmax) is the minimum (resp.
maximum) possible return, and [K] := {1, 2, · · · , K}. We use b(R) = k ∈ [K] to denote the event that R falls into the kth bin, i.e., Rk−1 < R ≤ Rk. Hence, b(R) can be viewed as the discretized version of the return, and the distribution of the discretized return can be represented by a K-dimensional vector Zπ(x) ∈ ∆K, where the k-th element equals Pr[Rk−1 < Zπ(x) ≤ Rk]. The Q function is defined as Qπ(x) = E[Zπ(x)], and the state value function is defined as V π(s) = E_{a∼π(·|s)}[Zπ(s, a)]. The objective of RL is to find a policy π that maximizes the expected cumulative reward J(π) = E_{s∼µ}[V π(s)]. We denote the optimal policy as π∗ and the corresponding optimal Q function as Q∗ := Qπ∗." }, { "heading": "4 METHODOLOGY", "text": "In this section, we present our method from both theoretical and empirical perspectives. First, we propose Zπ-irrelevance, a new form of state-action abstraction based on the return distribution. We show that the Q functions for any policy (and therefore the optimal policy) can be represented under Zπ-irrelevance abstraction. Then we consider an algorithm, Z-learning, that enables us to learn Zπ-irrelevance abstraction from the samples collected using π. Z-learning is simple and learns the abstraction by only minimizing a contrastive loss. We show that Z-learning can learn Zπ-irrelevance abstraction provably efficiently. After that, we introduce return-based contrastive representation learning for RL (RCRL), which incorporates standard RL algorithms with an auxiliary task adapted from Z-learning. At last, we present our network structure for learning state-action embeddings, upon which RCRL is built.
Algorithm 1: Z-learning
1: Given the policy π, the number of bins for the return K, a constant N ≥ Nπ,K, the encoder class ΦN, the regressor class WN, and a distribution d ∈ ∆X with supp(d) = X
2: D = ∅
3: for i = 1, · · · , n do
4: x1, x2 ∼ d
5: R1 ∼ Zπ(x1), R2 ∼ Zπ(x2)
6: D = D ∪ {(x1, x2, y = I[b(R1) ≠ b(R2)])}
7: end for
8: (φ̂, ŵ) = argmin_{φ∈ΦN, w∈WN} L(φ, w; D), where L(φ, w; D) is defined in (1)
9: return the encoder φ̂
4.1 Zπ-IRRELEVANCE ABSTRACTION
A state-action abstraction aggregates the state-action pairs with similar properties, resulting in an abstract state-action space denoted as [N], where N is the size of the abstract state-action space. In this paper, we consider a new form of abstraction, Zπ-irrelevance, defined as follows: Given a policy π, a Zπ-irrelevance abstraction is a map φ : X → [N] such that, for any x1, x2 ∈ X with φ(x1) = φ(x2), we have Zπ(x1) = Zπ(x2). Given a policy π and the return discretization parameter K, we use Nπ,K to denote the minimum N such that a Zπ-irrelevance exists. It holds that Nπ,K ≤ Nπ,∞ ≤ |φB(S)| |A| for any π and K, where |φB(S)| is the number of abstract states for the coarsest bisimulation (cf. Appendix A).
Proposition 4.1. Given a policy π and any Zπ-irrelevance φ : X → [N], there exists a function Q : [N] → R such that |Q(φ(x)) − Qπ(x)| ≤ (Rmax − Rmin)/K, ∀x ∈ X.
We provide a proof in Appendix A. Note that K controls the coarseness of the abstraction. When K → ∞, Zπ-irrelevance can accurately represent the value function, and therefore the optimal policy when π → π∗. When using an auxiliary task to learn such an abstraction, this proposition indicates that the auxiliary task (to learn a Zπ-irrelevance) is well-aligned with the main RL task (to approximate Q∗). However, a large K results in a fine-grained abstraction, which requires us to use a large N and more samples to learn the abstraction (cf. Theorem 4.1). In practice, this may not be a problem since we learn a state-action representation in a low-dimensional space Rd instead of [N] and reuse the samples collected by the base RL algorithm. Also, we do not need to choose a K explicitly in the practical algorithm (cf. Section 4.3)." },
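To make the return discretization above concrete, the following is a minimal NumPy sketch (our own illustration, not part of the paper's released code; the bin count K and the return range are assumed inputs) of the bin function b(R) used in Algorithm 1 and of an empirical discretized return distribution Zπ(x) estimated from sampled returns:

import numpy as np

def bin_index(R, R_min, R_max, K):
    # b(R) = k such that R_{k-1} < R <= R_k, with K equal-width bins on [R_min, R_max]
    width = (R_max - R_min) / K
    k = int(np.ceil((R - R_min) / width))
    return min(max(k, 1), K)  # clip boundary cases into [1, K]

def empirical_Z(sampled_returns, R_min, R_max, K):
    # empirical discretized return distribution, a point in the simplex over K bins
    counts = np.zeros(K)
    for R in sampled_returns:
        counts[bin_index(R, R_min, R_max, K) - 1] += 1
    return counts / counts.sum()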
{ "heading": "4.2 Z-LEARNING", "text": "We propose Z-learning to learn Zπ-irrelevance based on a dataset D with a contrastive loss (see Algorithm 1). Each tuple in the dataset is collected as follows: First, two state-action pairs are drawn i.i.d. from a distribution d ∈ ∆X with supp(d) = X (cf. Line 4 in Algorithm 1). In practice, we can sample state-action pairs from the rollouts generated by the policy π. In this case, a stochastic policy (e.g., using ε-greedy) with a standard ergodic assumption on the MDP ensures supp(d) = X. Then, we obtain returns for the two state-action pairs (i.e., the discounted sums of the rewards after x1 and x2), which can be obtained by rolling out using the policy π (cf. Line 5 in Algorithm 1). The binary label y for this state-action pair indicates whether the two returns belong to the same bin (cf. Line 6 in Algorithm 1). The contrastive loss is defined as follows:
min_{φ∈ΦN, w∈WN} L(φ, w; D) := E_{(x1,x2,y)∼D}[(w(φ(x1), φ(x2)) − y)^2], (1)
where the class of encoders that map the state-action pairs to N discrete abstractions is defined as ΦN := {X → [N]}, and the class of tabular regressors is defined as WN := {[N] × [N] → [0, 1]}. Notice that we choose N ≥ Nπ,K to ensure that a Zπ-irrelevance φ : X → [N] exists. Also, to aggregate the state-action pairs, N should be smaller than |X| (otherwise we will obtain an identity mapping). In this case, mapping two state-action pairs with different return distributions to the same abstraction will increase the loss and is therefore avoided. The following theorem shows that Z-learning can learn Zπ-irrelevance provably efficiently.
Theorem 4.1. Given the encoder φ̂ returned by Algorithm 1, the following inequality holds with probability 1 − δ and for any x′ ∈ X:
E_{x1∼d, x2∼d}[ I[φ̂(x1) = φ̂(x2)] |Zπ(x′)^T (Zπ(x1) − Zπ(x2))| ] ≤ √( (8N/n) (3 + 4N^2 ln n + 4 ln |ΦN| + 4 ln(2/δ)) ), (2)
where |ΦN| is the cardinality of the encoder function class and n is the size of the dataset.
We provide the proof in Appendix B. Although |ΦN| is upper bounded by N^{|X|}, it is generally much smaller for deep encoders that generalize over the state-action space. The theorem shows that whenever φ̂ maps two state-actions x1, x2 to the same abstraction, Zπ(x1) ≈ Zπ(x2) up to an error proportional to 1/√n (ignoring the logarithm factor). The following corollary shows that φ̂ becomes a Zπ-irrelevance when n → ∞.
Corollary 4.1.1. The encoder φ̂ returned by Algorithm 1 with n → ∞ is a Zπ-irrelevance, i.e., for any x1, x2 ∈ X, Zπ(x1) = Zπ(x2) if φ̂(x1) = φ̂(x2)." },
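As a small illustration of the loss in Eq. (1), here is a PyTorch-style sketch (ours, not the authors' code; the encoder phi, the regressor w, and the bin boundaries are assumed to be provided) of one Z-learning loss evaluation on a batch of sampled pairs:

import torch

def z_learning_loss(phi, w, x1, x2, R1, R2, bin_boundaries):
    # label y = 1 if the two sampled returns fall into different bins, else 0
    y = (torch.bucketize(R1, bin_boundaries) != torch.bucketize(R2, bin_boundaries)).float()
    # squared loss (w(phi(x1), phi(x2)) - y)^2, cf. Eq. (1)
    pred = w(phi(x1), phi(x2))
    return ((pred - y) ** 2).mean()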
{ "heading": "4.3 RETURN-BASED CONTRASTIVE LEARNING FOR RL (RCRL)", "text": "We adapt Z-learning as an auxiliary task that helps the agent learn a representation with meaningful semantics. The auxiliary-task-based RL algorithm is called RCRL and is shown in Algorithm 2. Here, we use Rainbow (for discrete control) and SAC (for continuous control) as the base RL algorithms for RCRL. However, RCRL can also be easily incorporated with other model-free RL algorithms. While Z-learning relies on a dataset sampled by rolling out the current policy, RCRL constructs such a dataset using the samples collected by the base RL algorithm and therefore does not require additional samples, e.g., it directly uses the replay buffer in Rainbow or SAC (see Lines 7 and 8 in Algorithm 2). Compared with Z-learning, we use the state-action embedding network φ : X → Rd, which is shared with the base RL algorithm, as the encoder, and an additional discriminator w : Rd × Rd → [0, 1], trained by the auxiliary task, as the regressor.
Algorithm 2: Return-based Contrastive learning for RL (RCRL)
1 Initialize the embedding φθ : X → Rd and a discriminator wϑ : Rd × Rd → [0, 1]
2 Initialize the parameters ϕ for the base RL algorithm that uses the learned embedding φθ
3 Given a batch of samples D, the loss function for the base RL algorithm is LRL(φθ, ϕ; D)
4 A replay buffer B = ∅
5 foreach iteration do
6 Rollout the current policy and store the samples to the replay buffer B
7 Draw a batch of samples D from the replay buffer B
8 Update the parameters with the loss function L(φθ, wϑ; D) + LRL(φθ, ϕ; D)
9 end
10 return The learned policy
However, when implementing Z-learning as the auxiliary task, the labels in the dataset may be unbalanced. Although this does not cause problems in the theoretical analysis, since we assume the Bayes optimizer can be obtained for the contrastive loss, it may prevent the discriminator from learning properly in practice (cf. Line 8 in Algorithm 1). To solve this problem, instead of drawing samples independently from the replay buffer B (analogous to sampling from the distribution d in Z-learning), we sample the pairs for D as follows: As a preparation, we cut the trajectories in B into segments, where each segment contains state-action pairs with the same or similar returns. Specifically, in Atari games, we create a new segment once the agent receives a non-zero reward. In DMControl tasks, we first prescribe a threshold and then create a new segment once the cumulative reward within the current segment exceeds this threshold. For each sample in D, we first draw an anchor state-action pair from B at random. Afterwards, we generate a positive sample by drawing a state-action pair from the same segment as the anchor state-action pair. Then, we draw another state-action pair randomly from B and use it as the negative sample.
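A minimal sketch of this anchor/positive/negative construction (our illustration; the replay buffer is assumed to be stored as a list of segments of (state, action) pairs, and with small probability a random negative may share a segment with the anchor, which we ignore for simplicity):

import random

def sample_contrastive_pairs(segments, batch_size):
    # segments: list of lists of (state, action) pairs with the same or similar return
    batch = []
    for _ in range(batch_size):
        seg = random.choice(segments)
        anchor = random.choice(seg)
        positive = random.choice(seg)                       # same segment, label y = 0
        negative = random.choice(random.choice(segments))   # random draw, label y = 1
        batch.append((anchor, positive, 0.0))
        batch.append((anchor, negative, 1.0))
    return batch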
We believe our auxiliary task may boost learning due to better return-induced representations that facilitate generalization across different state-action pairs. Learning on one state-action pair will affect the values of all state-action pairs that share similar representations with this pair. When the embedding network assigns similar representations to similar state-action pairs (e.g., those sharing similar distributions over returns), the update for one state-action pair is representative of the updates for other similar state-action pairs, which improves sample efficiency. However, such generalization may not be achieved by the base RL algorithm since, when trained with only a return-based loss, similar state-action pairs may have similar Q values but very different representations. One may argue that we can adopt several temporal-difference updates to propagate the values for state-action pairs with the same sampled return, so that finally all such pairs are assigned similar Q values. However, since we adopt a learning algorithm with bootstrapping/temporal-difference learning and a frozen target network in deep RL, it could take a longer time to propagate the value across different state-action pairs, compared with direct generalization over state-action pairs with contrastive learning.
Meanwhile, since we construct auxiliary tasks based on return, which is a very different structure from image augmentation or temporal structure, our method can be combined with existing methods to achieve further improvement." }, { "heading": "4.4 NETWORK STRUCTURE FOR STATE-ACTION EMBEDDING", "text": "In our algorithm, the auxiliary task is based on the state-action embedding, instead of the state embedding that is frequently used in previous work (e.g., Srinivas et al., 2020). To facilitate our algorithm, we design two new structures for Atari 2600 games (discrete actions) and the DMControl suite (continuous actions), respectively. We show the structures in Figure 1. For Atari, we learn an action embedding for each action and use the element-wise product of the state embedding and the action embedding as the state-action embedding. For DMControl, the action embedding is a real-valued vector, and we use the concatenation of the action embedding and the state embedding." },
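As a sketch of these two variants (our own illustration of Figure 1; the embedding dimensions follow those reported in Appendix C, while the module names and everything else are assumptions):

import torch
import torch.nn as nn

class DiscreteStateActionEmbed(nn.Module):
    # Atari: element-wise product of the state embedding and a learned action embedding
    def __init__(self, state_encoder, n_actions, dim=576):
        super().__init__()
        self.state_encoder = state_encoder      # maps images to R^dim
        self.action_embed = nn.Embedding(n_actions, dim)
    def forward(self, obs, action):
        return self.state_encoder(obs) * self.action_embed(action)

class ContinuousStateActionEmbed(nn.Module):
    # DMControl: concatenate the state embedding with the real-valued action vector
    def __init__(self, state_encoder, dim=50):
        super().__init__()
        self.state_encoder = state_encoder      # maps images to R^dim
    def forward(self, obs, action):
        return torch.cat([self.state_encoder(obs), action], dim=-1)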
{ "heading": "5 EXPERIMENT", "text": "In this section, we conduct the following experiments: 1) We implement RCRL on Atari 2600 games (Bellemare et al., 2013) and the DMControl suite (Tassa et al., 2020), and compare it with state-of-the-art model-free methods and strong model-based methods. In particular, we compare with CURL (Srinivas et al., 2020), a top-performing auxiliary-task-based RL algorithm for pixel-based control tasks that also uses a contrastive loss. In addition, we also combine RCRL with CURL to study whether our auxiliary task further boosts learning when combined with other auxiliary tasks. 2) To further study the reason why our algorithm works, we analyze the generalization of the learned representation. Specifically, we compare the cosine similarities between the representations of different state-action pairs. We provide the implementation details in Appendix C." }, { "heading": "5.1 EVALUATION ON ATARI AND DMCONTROL", "text": "Experiments on Atari. Our experiments on Atari are conducted in the low data regime of 100k interactions between the agent and the environment, which corresponds to two hours of human play. We show the performance of different algorithms/baselines, including the scores for average human (Human), SimPLe (Kaiser et al., 2019), which is a strong model-based baseline for this regime, original Rainbow (Hessel et al., 2017), Data-Efficient Rainbow (ERainbow, van Hasselt et al., 2019), ERainbow with state-action embeddings (ERainbow-sa, cf. Figure 1 Left), CURL (Srinivas et al., 2020), which is based on ERainbow, RCRL, which is based on ERainbow-sa, and the algorithm that adds the auxiliary losses of both CURL and RCRL to ERainbow-sa (RCRL+CURL). We show the evaluation results of our algorithm and the baselines on Atari games in Table 1. First, we observe that using state-action embeddings instead of state embeddings in ERainbow does not lead to a significant performance change, by comparing ERainbow with ERainbow-sa. Second, built upon ERainbow-sa, the auxiliary task in RCRL leads to better performance compared with not only the base RL algorithm but also SimPLe and CURL in terms of the median human normalized score (HNS). Third, we can see that RCRL further boosts learning when combined with CURL and achieves the best performance for 7 out of 26 games, which shows that our auxiliary task can be successfully combined with other auxiliary tasks that embed different sources of information into the learned representation.
Experiments on DMControl. For DMControl, we compare our algorithm with the following baselines: Pixel SAC (Haarnoja et al., 2018), the base RL algorithm, which receives images as input; SLAC (Lee et al., 2019), which learns a latent variable model and then updates the actor and critic based on it; SAC+AE (Yarats et al., 2019), which uses a regularized autoencoder for reconstruction as the auxiliary task; and PlaNet (Hafner et al., 2019b) and Dreamer (Hafner et al., 2019a), which learn a latent-space world model and explicitly plan with the learned model. We also compare with a skyline, State SAC, which receives the low-dimensional state representation instead of the image as input. Different from Atari games, tasks in DMControl yield dense rewards. Consequently, we split the trajectories into segments using a threshold such that the difference of returns within each segment does not exceed this threshold. Similarly, we test the algorithms in the low data regime of 500k interactions. We show the evaluation results in Figure 2. We can see that our auxiliary task not only brings performance improvements over the base RL algorithm but also outperforms CURL and other state-of-the-art baseline algorithms on different tasks. Moreover, we observe that our algorithm is more robust across runs with different seeds compared with Pixel SAC and CURL (e.g., for the task Ball in cup, Catch)." }, { "heading": "5.2 ANALYSIS ON THE LEARNED REPRESENTATION", "text": "We analyze the learned representation of our model to demonstrate that our auxiliary task attains a representation with better generalization, which may explain why our algorithm succeeds. We use cosine similarity to measure the generalization from one state-action pair to another in the deep learning model. Given two state-action pairs x1, x2 ∈ X, the cosine similarity is defined as φθ(x1)^T φθ(x2) / (‖φθ(x1)‖ ‖φθ(x2)‖), where φθ(·) is the learned embedding network.
We show the cosine similarity of the representations between positive pairs (which are sampled within the same segment and therefore likely to share similar return distributions) and negative pairs (i.e., randomly sampled state-action pairs) during training on the game Alien in Figure 4. First, we observe that when a good policy is learned, the representations of positive pairs are similar while those of negative pairs are dissimilar. This indicates that a good representation (or the representation that supports a good policy) aggregates the state-action pairs with similar return distributions. Then, we find that our auxiliary loss can accelerate such generalization for the representation, which makes RCRL learn faster." },
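The similarity measure used in this analysis is straightforward to compute; a minimal PyTorch sketch (ours; phi denotes the learned state-action embedding network):

import torch.nn.functional as F

def representation_similarity(phi, x1, x2):
    # cosine similarity phi(x1)^T phi(x2) / (||phi(x1)|| ||phi(x2)||)
    return F.cosine_similarity(phi(x1), phi(x2), dim=-1)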
{ "heading": "6 CONCLUSION", "text": "In this paper, we propose return-based contrastive representation learning for RL (RCRL), which introduces a return-based auxiliary task to facilitate policy training with standard RL algorithms. Our auxiliary task is theoretically justified to learn representations that capture the structure of Zπ-irrelevance, which can reduce the size of the state-action space as well as approximate the Q values arbitrarily accurately. Experiments on Atari games and the DMControl suite in the low data regime demonstrate that our algorithm achieves superior performance not only when using our auxiliary task alone but also when combining it with other auxiliary tasks.
As for future work, we are interested in how to combine different auxiliary tasks in a more sophisticated way, perhaps with a meta-controller. Another potential direction would be providing a theoretical analysis for auxiliary tasks and justifying why existing auxiliary tasks can speed up deep RL algorithms." }, { "heading": "7 ACKNOWLEDGMENT", "text": "Guoqing Liu and Nenghai Yu are supported in part by the Natural Science Foundation of China under Grant U20B2047 and the Exploration Fund Project of University of Science and Technology of China under Grant YD3480002001. Chuheng Zhang and Jian Li are supported in part by the National Natural Science Foundation of China Grants 61822203, 61772297 and 61632016, the Zhongguancun Haihua Institute for Frontier Information Technology, the Turing AI Institute of Nanjing, and the Xi'an Institute for Interdisciplinary Information Core Technology." }, { "heading": "B PROOF", "text": "Notice that Corollary 4.1.1 is the asymptotic case (n → ∞) of Theorem 4.1. We first provide a sketch proof for Corollary 4.1.1, which ignores sampling issues and is thus more illustrative. Later, we provide the proof of Theorem 4.1, which mainly follows the techniques used in Misra et al. (2019).
B.1 PROOF OF COROLLARY 4.1.1
Recall that Z-learning aims to solve the following optimization problem:
min_{φ∈ΦN, w∈WN} L(φ, w; D) := E_{(x1,x2,y)∼D}[(w(φ(x1), φ(x2)) − y)^2], (5)
which can also be regarded as finding a compound predictor f(·, ·) := w(φ(·), φ(·)) over the function class FN := {(x1, x2) → w(φ(x1), φ(x2)) : w ∈ WN, φ ∈ ΦN}. As a first step, it is helpful to find the Bayes optimal predictor f∗ when the size of the dataset is infinite. We use the fact that the Bayes optimal predictor for a square loss is the conditional mean, i.e., given a distribution D, the Bayes optimal predictor f∗ = argmin_f E_{(x,y)∼D}[(f(x) − y)^2] satisfies f∗(x′) = E_{(x,y)∼D}[y | x = x′]. Using this property, we can obtain the Bayes optimal predictor over all the functions {X × X → [0, 1]} for our contrastive loss:
f∗(x1, x2) = E_{(x′1,x′2,y)∼D}[y | x′1 = x1, x′2 = x2] = E_{R1∼Zπ(x1), R2∼Zπ(x2)} I[b(R1) ≠ b(R2)] = 1 − Zπ(x1)^T Zπ(x2), (6)
where we use D to denote the distribution from which each tuple in the dataset D is drawn. To establish the theorem, we require such an optimal predictor f∗ to be in the function class FN. Following a similar argument to Proposition 10 in Misra et al. (2019), it is not hard to show that using N > Nπ,K is sufficient for this realizability condition to hold.
Corollary 4.1.1. The encoder φ̂ returned by Algorithm 1 with n → ∞ is a Zπ-irrelevance, i.e., for any x1, x2 ∈ X, Zπ(x1) = Zπ(x2) if φ̂(x1) = φ̂(x2).
Proof of Corollary 4.1.1. Considering the asymptotic case (i.e., n → ∞), we have f̂ = f∗, where f̂(·, ·) := ŵ(φ̂(·), φ̂(·)) and ŵ and φ̂ are returned by Algorithm 1. If φ̂(x1) = φ̂(x2), we have for any x ∈ X,
1 − Zπ(x1)^T Zπ(x) = f∗(x1, x) = f̂(x1, x) = f̂(x2, x) = f∗(x2, x) = 1 − Zπ(x2)^T Zπ(x).
We obtain Zπ(x1) = Zπ(x2) by letting x = x1 or x = x2.
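As a quick sanity check of the identity f∗(x1, x2) = 1 − Zπ(x1)^T Zπ(x2) above (our own illustration, not part of the proof; the two discretized return distributions are arbitrary assumed values):

import numpy as np

rng = np.random.default_rng(0)
Z1 = np.array([0.7, 0.2, 0.1])    # assumed discretized return distribution of x1
Z2 = np.array([0.5, 0.3, 0.2])    # assumed discretized return distribution of x2
n = 200000
b1 = rng.choice(3, size=n, p=Z1)  # sampled bin indices b(R1)
b2 = rng.choice(3, size=n, p=Z2)  # sampled bin indices b(R2)
empirical = (b1 != b2).mean()     # Monte Carlo estimate of E[y]
analytic = 1.0 - Z1 @ Z2          # equals 0.57 here; matches up to sampling error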
B.2 PROOF OF THEOREM 4.1
Theorem 4.1. Given the encoder φ̂ returned by Algorithm 1, the following inequality holds with probability 1 − δ and for any x′ ∈ X:
E_{x1∼d, x2∼d}[ I[φ̂(x1) = φ̂(x2)] |Zπ(x′)^T (Zπ(x1) − Zπ(x2))| ] ≤ √( (8N/n) (3 + 4N^2 ln n + 4 ln |ΦN| + 4 ln(2/δ)) ), (7)
where |ΦN| is the cardinality of the encoder function class and n is the size of the dataset.
The theorem shows that 1) whenever φ̂ maps two state-actions x1, x2 to the same abstraction, Zπ(x1) ≈ Zπ(x2) up to an error proportional to 1/√n (ignoring the logarithm factor), and 2) when the difference of the return distributions (e.g., Zπ(x1) − Zπ(x2)) is large, the chance that the two state-action pairs (e.g., x1 and x2) are assigned the same encoding is small. However, since the state-action pairs are sampled i.i.d. from the distribution d, the bound holds in an average sense instead of the worst-case sense.
The overview of the proof is as follows: Note that the theorem builds a connection between the optimization problem defined in Equation (5) and the learned encoder φ̂. To prove the theorem, we first find the minimizer f∗ of the optimization problem when the number of samples is infinite (which was calculated in Equation (6) in the previous subsection). Then, we bound the difference between the learned predictor f̂ and f∗ in Lemma B.2 (which utilizes Lemma B.1) when the number of samples is finite. At last, utilizing the compound structure of f̂, we can relate it to the encoder φ̂. Specifically, given x1, x2 ∈ X with φ̂(x1) = φ̂(x2), we have f̂(x1, x′) = f̂(x2, x′), ∀x′ ∈ X.
Definition B.1 (Pointwise covering number). Given a function class G : X → R, the pointwise covering number at scale ε, N(G, ε), is the size of the smallest set V : X → R such that ∀g ∈ G, ∃v ∈ V : sup_{x∈X} |g(x) − v(x)| < ε.
Lemma B.1. The logarithm of the pointwise covering number of the function class FN satisfies ln N(FN, ε) ≤ N^2 ln(1/ε) + ln |ΦN|.
Proof. Recall that FN := {(x1, x2) → w(φ(x1), φ(x2)) : w ∈ WN, φ ∈ ΦN}, where WN := {[N] × [N] → [0, 1]} and ΦN := {X → [N]}. Given ε > 0, we discretize [0, 1] to Yε := {ε, · · · , ⌈1/ε⌉ε}. Then, we define Wε_N := {[N] × [N] → Yε}, and it is easy to see that Wε_N is a covering of WN with |Wε_N| ≤ (1/ε)^{N^2}. Next, we observe that Fε_N := {(x1, x2) → w(φ(x1), φ(x2)) : w ∈ Wε_N, φ ∈ ΦN} is a covering of FN and |Fε_N| = |ΦN| |Wε_N|. We complete the proof by combining the above results.
Proposition B.1 (Proposition 12 in Misra et al. (2019)). Consider a function class G : X → R and n samples {(xi, yi)}_{i=1}^n drawn from D, where xi ∈ X and yi ∈ [0, 1]. The Bayes optimal function is g∗ = argmin_{g∈G} E_{(x,y)∼D}[(g(x) − y)^2], and the empirical risk minimizer is ĝ = argmin_{g∈G} (1/n) ∑_{i=1}^n (g(xi) − yi)^2. With probability at least 1 − δ for a fixed δ ∈ (0, 1),
E_{(x,y)∼D}[(ĝ(x) − g∗(x))^2] ≤ inf_{ε>0} [ 6ε + 8 ln(2 N(G, ε)/δ)/n ].
Lemma B.2. Given the empirical risk minimizer f̂ in Algorithm 1, we have
E_{(x1,x2)∼D}[(f̂(x1, x2) − f∗(x1, x2))^2] ≤ ∆reg = 6/n + (8/n)(N^2 ln n + ln |ΦN| + ln(2/δ)), (8)
with probability at least 1 − δ.
Proof. This directly follows from the combination of Lemma B.1 and Proposition B.1 by letting ε = 1/n.
Proof of Theorem 4.1. First, we denote Ei := I[φ̂(x1) = i = φ̂(x2)] and bi := E_{x∼d}[I[φ̂(x) = i]], the prior probability of the i-th abstraction. 
Then, for all x′ ∈ X, we have
E_{x1∼d, x2∼d}[ I[φ̂(x1) = i = φ̂(x2)] |Zπ(x′)^T (Zπ(x1) − Zπ(x2))| ]
= E_{x1∼d, x2∼d}[ Ei |f∗(x1, x′) − f∗(x2, x′)| ]
≤ E_{x1∼d, x2∼d}[ Ei ( |f∗(x1, x′) − f̂(x1, x′)| + |f̂(x1, x′) − f∗(x2, x′)| ) ]
= E_{x1∼d, x2∼d}[ Ei ( |f∗(x1, x′) − f̂(x1, x′)| + |f̂(x2, x′) − f∗(x2, x′)| ) ]
≤ 2 E_{x1∼d}[ I[φ̂(x1) = i] |f∗(x1, x′) − f̂(x1, x′)| ]
≤ 2 √(bi ∆reg).
The first step is obtained using Equation (6). The second step is the triangle inequality. The third step uses the fact that under the event Ei, we have φ̂(x1) = φ̂(x2) and therefore f̂(x1, x′) = f̂(x2, x′). In the fourth step, we drop the dependence on the other variable and marginalize over Dcoup. The last step is the Cauchy-Schwarz inequality.
At last, we complete the proof by summing over all i ∈ [N] and using the fact that ∑_{i=1}^N √bi ≤ √N, together with Lemma B.2.
C IMPLEMENTATION DETAILS
C.1 IMPLEMENTATION FOR ATARI GAMES
Codebase. For Atari games, we use the codebase from https://github.com/Kaixhin/Rainbow, which is an implementation of data-efficient Rainbow (van Hasselt et al., 2019).
Network architecture. To implement our algorithm, we modify the architecture of the value network as shown in Figure 1 Left. In data-efficient Rainbow, the state embedding has a dimension of 576. We maintain an action embedding for each action, which is a vector of the same dimension treated as trainable parameters. Then, we generate the state-action embedding by taking the element-wise product of the state embedding and the action embedding. This state-action embedding is shared between the auxiliary task and the main RL task. Afterwards, the value network outputs the return distribution for this state-action pair (noting that Rainbow uses a distributional RL algorithm, C51 (Bellemare et al., 2017)), instead of the return distributions of all actions for the input state as in the original implementation.
Hyperparameters. We use exactly the same hyperparameters as those used in van Hasselt et al. (2019) and CURL (Srinivas et al., 2020), to quantify the gain brought by our auxiliary task and to compare with CURL. We refer the readers to their papers for the detailed list of hyperparameters.
Balancing the auxiliary loss and the main RL loss. Unlike CURL (or other previous work such as Jaderberg et al. (2016); Yarats et al. (2019)), which uses different/learned coefficients/learning rates for different games to balance the auxiliary task and the RL updates, our algorithm uses an equal weight and learning rate for both the auxiliary task and the main RL task. This demonstrates that our auxiliary task is robust and does not need careful tuning of these hyperparameters, compared with previous work.
Auxiliary loss. Since the rewards in Atari games are sparse, we divide the segments such that all the state-action pairs within the same segment have the same return. This corresponds to the setting of Z-learning with K → ∞, where the positive sample has exactly the same return as that of the anchor. Then, the auxiliary loss for each update is calculated as follows: First, we sample a batch of 64 anchor state-action pairs from the prioritized replay memory. Then, for each state-action pair, we sample the corresponding positive pair (i.e., a state-action pair within the same segment as the anchor state-action pair) and the corresponding negative pair (randomly selected from the replay memory). The auxiliary loss is calculated on these samples with effectively |D| = 128.
C.2 IMPLEMENTATION FOR DMCONTROL SUITE
Codebase. 
We use SAC as the base RL algorithm and build our algorithm on top of the publicly released implementation from CURL (Srinivas et al., 2020).
Network architecture. Similarly, we modify the architecture of the critic network in SAC. In SAC, the state embedding has a dimension of 50. Since the actions are continuous vectors of dimension d in the continuous control tasks of the DMControl suite, we directly concatenate the action to the state embedding, resulting in a state-action embedding of size 50 + d. Then, the critic network receives the state-action embedding as input and outputs the Q value. The actor network receives the state embedding as input and outputs the action selection distribution for the corresponding state. Note that, although our auxiliary loss is based on the state-action embedding, the state embedding used by the actor network is also trained by the auxiliary loss through back-propagation of the gradients.
Hyperparameters. We set the threshold for dividing the segments to 1.0, i.e., when appending transitions to the replay buffer, we start a new segment when the cumulative reward within the last segment exceeds this threshold. The auxiliary loss and the hyperparameters that balance the auxiliary loss and the main RL loss are the same as those used for the Atari games. The other hyperparameters we use are exactly the same as those in the CURL implementation, and we refer the readers to their paper for the details." }, { "heading": "D ADDITIONAL EXPERIMENT RESULTS", "text": "D.1 MORE SEEDS RESULTS OF REPRESENTATION ANALYSIS
For clarity, we only show the result of the representation analysis with a single seed in the main text. We add the results for multiple seeds here. The detailed description of the analysis task can be found in the first paragraph of Section 5.2.
D.2 HIGH DATA REGIME RESULTS
To empirically study how applicable our model is to higher data regimes, we run the experiments on the first five Atari games (of Table 1) for 1.5 million agent interactions. We show the evaluation results of both our algorithm and the Rainbow baseline in Table 2. We can see that RCRL outperforms the ERainbow-sa baseline for 4 out of 5 games, which may imply that our auxiliary task has the potential to improve performance in the high-data regime." } ]
2021
null
SP:bd0775160c5ab06f765a031236995c84926b5f70
[ "Setting appropriate learning rate for network optimization is an important task in deep learning applications. This paper investigates the setting of learning rates for network parameters in different levels, e.g., individual parameter, each layer and global levels. By setting the constraints on the learning rates at multiple scales, the paper derived a hierarchical learning rate setting approach, which is the combination of adaptive learning rates at different levels. " ]
In this study, we investigate learning rate adaptation at different levels based on the hyper-gradient descent framework and propose a method that adaptively learns the optimizer parameters by combining different levels of adaptation. Meanwhile, we show the relationship between regularizing over-parameterized learning rates and building combinations of adaptive learning rates at different levels. Experiments on several network architectures, including feed-forward networks, LeNet-5 and ResNet-18/34, show that the proposed multi-level adaptive approach can significantly outperform baseline adaptive methods in a variety of circumstances.
[]
[ { "authors": [ "L.B. Almeida", "T. Langlois", "J.D. Amaral", "A. Plakhov" ], "title": "Parameter adaptation in stochastic optimization. On-Line Learning in Neural Networks, Publications of the Newton Institute", "venue": null, "year": 1998 }, { "authors": [ "M. Andrychowicz", "M. Denil", "S. Gomez", "M.W. Hoffman", "D. Pfau", "T. Schaul", "B. Shillingford", "N. De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "A.G. Baydin", "R. Cornish", "D.M. Rubio", "M. Schmidt", "F. Wood" ], "title": "Online learning rate adaptation with hypergradient descent", "venue": null, "year": 2017 }, { "authors": [ "A.G. Baydin", "B.A. Pearlmutter", "A.A. Radul", "J.M. Siskind" ], "title": "Automatic differentiation in machine learning: a survey", "venue": null, "year": 2018 }, { "authors": [ "J. Duchi", "E. Hazan", "Y. Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "JMLR, 12:2121–2159,", "year": 2011 }, { "authors": [ "L. Franceschi", "M. Donini", "P. Frasconi", "M. Pontil" ], "title": "Forward and reverse gradient-based hyperparameter optimization. In ICML, pages 1165–1173", "venue": "JMLR. org,", "year": 2017 }, { "authors": [ "K. He", "J. Sun" ], "title": "Convolutional neural networks at constrained time cost", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "H. Karimi", "J. Nutini", "M. Schmidt" ], "title": "Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2016 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "H. Lang", "L. Xiao", "P. Zhang" ], "title": "Using statistics to automate stochastic optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Y. LeCun" ], "title": "Lenet-5, convolutional neural networks. URL: http://yann", "venue": "lecun. com/exdb/lenet,", "year": 2015 }, { "authors": [ "Z. Li", "S. Arora" ], "title": "An exponential learning rate schedule for deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "L. Liu", "H. Jiang", "P. He", "W. Chen", "X. Liu", "J. Gao", "J. Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "I. Loshchilov", "F. Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": null, "year": 2017 }, { "authors": [ "L. Luo", "Y. Xiong", "Y. Liu", "X. Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "K. Lv", "S. Jiang", "J. Li" ], "title": "Learning gradient descent: Better generalization and longer horizons", "venue": "In ICML,", "year": 2017 }, { "authors": [ "D. 
Maclaurin", "D. Duvenaud", "R. Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. Wu", "A.Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "B. O’donoghue", "E. Candes" ], "title": "Adaptive restart for accelerated gradient schemes", "venue": "Foundations of computational mathematics,", "year": 2015 }, { "authors": [ "S.J. Reddi", "S. Kale", "S. Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "M. Rolinek", "G. Martius. L" ], "title": "Practical loss-based stepsize adaptation for deep learning", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "S. Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": null, "year": 2016 }, { "authors": [ "N.N. Schraudolph" ], "title": "Local gain adaptation in stochastic gradient descent", "venue": "Ninth International Conference on Artificial Neural Networks ICANN 99. (Conf. Publ. No. 470),", "year": 1999 }, { "authors": [ "R. Sun" ], "title": "Optimization for deep learning: theory and algorithms", "venue": null, "year": 2019 }, { "authors": [ "R.S. Sutton" ], "title": "Gain adaptation beats least squares", "venue": "In Proceedings of the 7th Yale workshop on adaptive and learning systems,", "year": 1992 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Rmsprop: Divide the gradient by a running average of its recent magnitude. coursera: Neural networks for machine learning", "venue": "Tech. Rep., Technical report,", "year": 2012 }, { "authors": [ "O. Wichrowska", "N. Maheswaranathan", "M.W. Hoffman", "S.G. Colmenarejo", "M. Denil", "N. de Freitas", "J. Sohl-Dickstein" ], "title": "Learned optimizers that scale and generalize", "venue": "In ICML,", "year": 2017 }, { "authors": [ "J. Yu", "D. Aberdeen", "N.N. Schraudolph" ], "title": "Fast online policy gradient learning with smd gain vector adaptation", "venue": "In NeurIPS,", "year": 2006 }, { "authors": [ "M. Zhang", "J. Lucas", "J. Ba", "G.E. Hinton" ], "title": "Lookahead optimizer: k steps forward, 1 step back", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "P. Zhang", "H. Lang", "Q. Liu", "L. Xiao" ], "title": "Statistical adaptive stochastic gradient methods", "venue": null, "year": 2002 } ]
[ { "heading": "1 INTRODUCTION", "text": "The basic optimization algorithm for training deep neural networks is the gradient descent method (GD), which includes stochastic gradient descent (SGD), mini-batch gradient descent, and batch gradient descent. The model parameters are updated according to the first-order gradients of the empirical risks with respect to the parameters being optimized, while back-propagation is implemented for calculating the gradients of parameters (Ruder, 2016). Naïve gradient descent methods apply fixed learning rates without any adaptation mechanisms. However, considering the change of available information during the learning process, SGD with fixed learning rates can result in inefficiency and requires a large amount of computing resources in hyper-parameter searching. One solution is to introduce a learning rate adaptation. This idea can be traced back to the work on gain adaptation for connectionist learning methods (Sutton, 1992) and related extensions for non-linear cases (Schraudolph, 1999; Yu et al., 2006). In recent years, optimizers with adaptive updating rules were developed in the context of deep learning, while the learning rates are still fixed in training. The proposed methods include AdaGrad (Duchi et al., 2011), RMSProp (Tieleman and Hinton, 2012), and Adam (Kingma and Ba, 2015). In addition, there are optimizers aiming to address the convergence issue in Adam (Reddi et al., 2018; Luo et al., 2018) and to rectify the variance of the adaptive learning rate (Liu et al., 2019). Other techniques, such as Lookahead, can also achieve variance reduction and stability improvement with negligible extra computational cost (Zhang et al., 2019).\nEven though the adaptive optimizers with fixed learning rates can converge faster than SGD in a wide range of tasks, the updating rules are designed manually while more hyper-parameters are introduced. Another idea is to use objective function information and update the learning rates as trainable parameters. These methods were introduced as automatic differentiation, where the hyper-parameters can be optimized with backpropagation (Maclaurin et al., 2015; Baydin et al., 2018). As gradient-based hyper-parameter optimization methods, they can be implemented as an online approach (Franceschi et al., 2017). With the idea of auto-differentiation, learning rates can be updated in real-time with the corresponding derivatives of the empirical risk (Almeida et al., 1998), which can be generated to all types of optimizers for deep neural networks (Baydin et al., 2017). Another step size adaptation approach called “L4”, is based on the linearized expansion of the loss functions, which rescales the gradient to make fixed predicted progress on the loss (Rolinek and Martius, 2018). Furthermore, by addressing the issue of poor generalization performance of adaptive methods, dynamically bound for gradient methods was introduced to build a gradual transition between adaptive approach and SGD (Luo et al., 2018).\nAnother set of approaches train an RNN (recurrent neural network) agent to generate the optimal learning rates in the next step given the historical training information, known as “learning to learn”\n(Andrychowicz et al., 2016). This approach empirically outperforms hand-designed optimizers in a variety of learning tasks, but another study has shown that it may not be effective for long horizons (Lv et al., 2017). 
The generalization ability of this approach can be improved by using meta-training samples and hierarchical LSTMs (long short-term memory) (Wichrowska et al., 2017).
Beyond adaptive learning rates, learning rate schedules can also improve the convergence of optimizers, including time-based decay, step decay, and exponential decay (Li and Arora, 2019). The most fundamental and widely applied one is the piece-wise step-decay learning rate schedule, which can vastly improve the convergence of SGD and even of adaptive optimizers (Luo et al., 2018; Liu et al., 2019). It can be further improved by introducing a statistical test to determine when to apply the step-decay (Lang et al., 2019; Zhang et al., 2020). Also, there are works on warm restarts (O'donoghue and Candes, 2015; Loshchilov and Hutter, 2017), which can improve the anytime performance of SGD when training deep neural networks.
We find that existing gradient- or model-based learning rate adaptation methods, including hyper-gradient descent, L4 and learning to learn, only focus on global adaptation, and they could be further extended to multi-level cases. Such an extension introduces locally shared adaptive learning rates, such as layer-wise and parameter-wise learning rates, and considers the information at all levels in determining the updating step size for each parameter. The main contributions of our study can be summarized as follows:
• We introduce hierarchical learning rate structures for neural networks and apply hyper-gradient descent to obtain adaptive learning rates at different levels.
• We introduce a set of regularization techniques for learning rates to address the balance of global and local adaptations and show the relationship with weighted combinations.
• We propose an algorithm implementing the combination of adaptive learning rates at multiple levels for model parameter updating." }, { "heading": "2 MULTI-LEVEL ADAPTATION METHODS", "text": "" }, { "heading": "2.1 LAYER-WISE, UNIT-WISE AND PARAMETER-WISE ADAPTATION", "text": "In the paper on hyper-descent (Baydin et al., 2017), the learning rate is set to be a scalar. However, to make the most of learning rate adaptation, in this study, we introduce layer-wise or even parameter-wise updating rules, where the learning rate αt in each iteration time step is considered to be a vector (layer-wise) or even a list of matrices (parameter-wise). For the sake of simplicity, we collect all the learning rates in a vector: αt = (α1,t, ..., αN,t)^T. Correspondingly, the objective f(θ) is a function of θ = (θ1, θ2, ..., θN)^T, collecting all the model parameters. In this case, the derivative of the objective function f with respect to each learning rate can be written as
∂f(θt−1)/∂αi,t−1 = ∂f(θ1,t−1, ..., θN,t−1)/∂αi,t−1 = ∑_{j=1}^N (∂f(θ1,t−1, ..., θN,t−1)/∂θj,t−1) (∂θj,t−1/∂αi,t−1), (1)
where N is the total number of all the model parameters. Eq. (1) can be generalized to group-wise updating, where we associate a learning rate with a specific group of parameters, and each parameter group is updated according to its own learning rate. Notice that although there is a dependency between αt−1 and θt−2 through αt−1 = αt−2 − β ∂f(θt−2)/∂αt−2, where β is the updating rate of hyper-gradient descent, we consider that αt−1 is calculated after θt−2, and thus a change of αt−1 will not result in a change of θt−2. Assume θt = u(Θt−1, α) is the updating rule, where Θt = {θs}_{s=0}^t and α is the learning rate; then the basic gradient descent method for each group i gives θi,t = u(Θt−1, αi,t−1) = θi,t−1 − αi,t−1 ∇θi f(θt−1). Hence, for gradient descent,
∂f(θt−1)/∂αi,t−1 = ∇θi f(θt−1)^T ∇αi,t−1 u(Θt−1, αi,t−1) = −∇θi f(θt−1)^T ∇θi f(θt−2). (2)
Here αi,t−1 is a scalar with index i at time step t−1, corresponding to the learning rate of the ith group, while the shape of ∇θi f(θ) is the same as the shape of θi. We particularly consider two special cases: (1) In layer-wise adaptation, θi is the weight matrix of the ith layer, and αi is the particular learning rate for this layer. (2) In parameter-wise adaptation, θi corresponds to a certain parameter involved in the model, which can be an element of the weight matrix in a certain layer." },
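A minimal sketch of this group-wise hyper-gradient update for plain gradient descent (our illustration based on Eq. (2); the per-group gradient tensors are assumed to be supplied by the training loop):

import torch

def groupwise_hypergrad_step(alphas, grads_t, grads_prev, beta):
    # h_i = df/dalpha_i = -grad_i(theta_{t-1})^T grad_i(theta_{t-2}), cf. Eq. (2)
    new_alphas = []
    for alpha_i, g, g_prev in zip(alphas, grads_t, grads_prev):
        h_i = -(g * g_prev).sum()
        new_alphas.append(alpha_i - beta * h_i)
    return new_alphas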
{ "heading": "2.2 REGULARIZATION ON ADAPTIVE LEARNING RATES", "text": "The selection of the adaptation level should be made on a case-by-case basis. Global or parameter-wise adaptation is usually not the optimal choice across all circumstances. Recall that for deep neural networks, we typically use a relatively large architecture with regularization. This idea can also be applied to a learning rate space with parameter structure. To address over-parameterization when implementing lower-level learning rate adaptation, we introduce regularization on the learning rates to control their flexibility. First, for layer-wise adaptation, we can add the following regularization term to the loss function:
Llr_reg_layer = λlayer ∑_l (αl − αg)^2, (3)
where l is the index of each layer, λlayer is the layer-wise regularization coefficient, and αl and αg are the layer-wise and global adaptive learning rates. A larger λlayer pushes each layer's learning rate towards the global learning rate shared across all the layers. Given a particular αg,t, the gradient of the loss function with respect to the learning rate αl in layer l can be written as
∂Lfull(θ, α)/∂αl,t = ∂Lmodel(θ, α)/∂αl,t + ∂Llr_reg(θ, α)/∂αl,t = ∇θl f(θt−1)^T ∇αl,t−1 u(Θt−2, αt−1) + 2λlayer(αl,t − αg,t). (4)
Notice that the time step index of the layer-wise regularization term is t rather than t−1, which ensures that we push the layer-wise learning rates towards the corresponding global learning rate of the current step t. Denoting hl,t−1 = −∇θl f(θt−1)^T ∇θl u(Θt−2, αl,t−1), the updating rule for the learning rates can be written as
αl,t = αl,t−1 − β ∂Lfull(θ, α)/∂αl,t = αl,t−1 − β(−hl,t−1 + 2λlayer(αl,t − αg,t)). (5)
Eq. (5) has a closed-form solution, but it is only applicable in the two-level case. However, there is an extra hyper-parameter λlayer to be tuned. In addition, when there are more levels, the components of the learning rates at different levels can be interdependent. To construct a workable updating scheme for Eq. (5), we replace αl,t and αg,t with their relevant approximations. We take the strategy of using their updated versions without considering regularization, i.e., α̂l,t = αl,t−1 + βhl,t−1 and α̂g,t = αg,t−1 + βhg,t−1, where hg,t−1 = −∇θ f(θt−1)^T ∇αg,t−1 u(Θt−2, αg,t−1) is the global h for all parameters. Here we regard α̂l,t and α̂g,t as the “virtual” layer-wise and global learning rates for time step t, and substituting them into the right-hand side of Eq. (5) gives the new updating rule as follows:
α∗l,t = αl,t−1 + βhl,t−1 − 2βλlayer(α̂l,t − α̂g,t) = (1 − 2βλlayer)α̂l,t + 2βλlayerα̂g,t. (6)
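A small sketch of this regularized update (ours; scalar learning rates for a single layer, following Eq. (6)):

def regularized_layer_update(alpha_l, alpha_g, h_l, h_g, beta, lam_layer):
    # "virtual" unregularized updates
    alpha_l_hat = alpha_l + beta * h_l
    alpha_g_hat = alpha_g + beta * h_g
    # weighted average pushing the layer-wise rate towards the global rate, cf. Eq. (6)
    c = 2.0 * beta * lam_layer   # the constraint 0 < c < 1 should hold
    return (1.0 - c) * alpha_l_hat + c * alpha_g_hat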
Notice that in Eq. (6), the result is a weighted average of the layer-wise learning rate α̂l,t and the global learning rate α̂g,t at the current time step. Since we hope to push the layer-wise learning rates towards the global one, the parameters should satisfy the constraint 0 < 2βλlayer < 1, and thus they can be optimized using hyper-parameter search within a bounded interval, as well as by gradient-based hyper-parameter optimization. We can also consider the case where three levels of learning rate adaptation are involved, including global, layer-wise, and parameter-wise adaptation. If we introduce two more regularization terms to control the variation of the parameter-wise learning rates with respect to the layer-wise and global learning rates, the regularization loss can be written as
Llr_reg_para = λlayer ∑_l (αl − αg)^2 + λpara_layer ∑_l ∑_p (αpl − αl)^2 + λpara ∑_l ∑_p (αpl − αg)^2,
where αpl is the learning rate for the p-th parameter inside layer l. The second and third terms push each parameter-wise learning rate towards the layer-wise learning rate and the global learning rate, respectively. As in the two-level case, the updating rule with this three-level regularization can be approximated by a weighted combination of three components under the “virtual approximation”. The details of the updating rule for the three-level case are given by Algorithm 1 in Section 2.3. We also provide a discussion on the bias of implementing the “virtual approximation” in Appendix A.1.
In general, we can organize all the learning rates in a tree structure. For example, in the three-level case above, αg is the root node, the {αl} are the children nodes at level 2 of the tree, and the {αlp} are the children nodes of αl, serving as leaf nodes at level 3 of the tree. In a general case, we assume there are L levels in the tree. Denote the set of all the paths from the root node to each of the leaf nodes as P, where a path is denoted by p = {α1, α2, ..., αL}, with α1 the root node and αL the leaf node on the path. On this path, denote by ancestors(i) all the ancestor nodes of αi along the path, i.e., ancestors(i) = {α1, ..., αi−1}. We construct a regularizer to push αi towards each of its ancestors. Then the regularization can be written as
Llr_reg = ∑_{p∈P} ∑_{αi∈p} ∑_{αj∈ancestors(i)} λij(αi − αj)^2. (7)
Under this pair-wise L2 regularization, the updating rule for any leaf-node learning rate αL is given by the following theorem, whose proof is provided in Appendix A.2.
Theorem 1. Under the virtual approximation, the effect of applying the pair-wise L2 regularization in Eq. (7) is equivalent to performing a weighted linear combination of virtual learning rates at different levels, α∗L = ∑_{j=1}^L γj α̂j with ∑_{j=1}^L γj = 1, where each component α̂j is calculated by assuming no regularization.
Remarks: Theorem 1 actually suggests that a similar updating rule can be obtained for the learning rate at any level on the path. All of this is demonstrated in Algorithm 1 for the three-level case." }, { "heading": "2.3 PERSPECTIVE OF ADAPTIVE LEARNING RATE COMBINATION", "text": "Motivated by the analytical derivation in Section 2.2, we can consider combining adaptive learning rates at different levels as a substitute for, and approximation of, regularization on the differences between learning rates. This makes the effect of learning rate regularization trainable with gradient-based methods. 
In a general form, assume that we have L levels, which could include the global level, layer level, unit level, parameter level, etc. Theorem 1 then suggests the following updating rule: α_t = Σ_{j=1}^L γ_j α̂_{j,t}. More generally, we can use non-linear models such as neural networks to model the final adaptive learning rate as a function of the learning rates at the different levels: α_t = g(α̂_{1,t}, α̂_{2,t}, ..., α̂_{L,t}; θ), where θ is the parameter vector of the non-linear model. We can treat the combination weights {γ_1, ..., γ_L} as trainable parameters, which can be globally shared or parameter/layer-specific. In this study, we consider globally shared combination weights. To apply this method, we only need the different levels of learning rates to have a hierarchical relationship. For example, for a convolutional neural network without a clear layer structure, we can introduce a "filter level" in place of the layer level, where the parameters in each filter share the same learning rate.

As the real learning rates used in the model-parameter updates are weighted combinations, the corresponding gradient matrices cannot be used directly for updating the learning rates. In this case, we first break the gradient of the combined learning rate down to the three levels, use each part to update the learning rate at its level, and then recompute the combination from the updated learning rates. In particular, h_{p,t}, h_{l,t}, and h_{g,t} are calculated from the gradients of the model loss without regularization, as shown in Eq. (8)¹:

h_{p,t} = ∂f(θ, α)/∂α_{p,t} = −∇_θ f(θ_{t−1}, α)|_p · ∇_α u(Θ_{t−2}, α)|_p
h_{l,t} = ∂f(θ, α)/∂α_{l,t} = −tr(∇_θ f(θ_{t−1}, α)|_l^T ∇_α u(Θ_{t−2}, α)|_l)
h_{g,t} = ∂f(θ, α)/∂α_t = −Σ_{l=1}^n tr(∇_θ f(θ_{t−1}, α)|_l^T ∇_α u(Θ_{t−2}, α)|_l)   (8)

where h_t = Σ_l h_{l,t} = Σ_p h_{p,t}, h_{l,t} = Σ_{p∈l-th layer} h_p, and f(θ, α) corresponds to the model loss L_{model}(θ, α) of Section 2.2. Algorithm 1 gives the full updating rules of the newly proposed optimizer with three levels, which we denote combined adaptive multi-level hyper-gradient descent (CAM-HD). In Algorithm 1, we use the general form of gradient-descent-based optimizers (Reddi et al., 2018; Luo et al., 2018): for SGD, φ_t(g_1, ..., g_t) = g_t and ψ_t(g_1, ..., g_t) = 1, while

¹Here we use the trace form to represent the sum of the element-wise product, but compute it in the simplest way.

Algorithm 1: Updating rule of three-level CAM-HD
input: α_0, β, δ, T
initialize: θ_0, γ_{1,0}, γ_{2,0}, γ_{3,0}, α_{p,0}, α_{l,0}, α_0, α*_{p,0} = γ_{1,0}α_{p,0} + γ_{2,0}α_{l,0} + γ_{3,0}α_0
for t ∈ 1, 2, ..., T do
  g_t = ∇_θ f(θ, α)
  Update h_{p,t}, h_{l,t} and h_{g,t} by Eq. (8).
  α_{p,t} = α_{p,t−1} − β_p (∂f(θ_{t−1})/∂α*_{p,t−1})(∂α*_{p,t−1}/∂α_{p,t−1}) = α_{p,t−1} − β_p γ_{1,t−1} h_{p,t}
  α_{l,t} = α_{l,t−1} − β_l Σ_p (∂f(θ_{t−1})/∂α*_{p,t−1})(∂α*_{p,t−1}/∂α_{l,t−1}) = α_{l,t−1} − β_l γ_{2,t−1} Σ_p h_{p,t} = α_{l,t−1} − β_l γ_{2,t−1} h_{l,t}
  α_t = α_{t−1} − β_g Σ_l Σ_p (∂f(θ)/∂α*_{p,t−1})(∂α*_{p,t−1}/∂α_{t−1}) = α_{t−1} − β_g γ_{3,t−1} h_{g,t}
  α*_{p,t} = γ_{1,t−1} α_{p,t} + γ_{2,t−1} α_{l,t} + γ_{3,t−1} α_t
  γ_{1,t} = γ_{1,t−1} − δ ∂L/∂γ_{1,t−1} = γ_{1,t−1} − δ α_{p,t−1} Σ_p ∂L/∂α*_{p,t−1}
  γ_{2,t} = γ_{2,t−1} − δ ∂L/∂γ_{2,t−1} = γ_{2,t−1} − δ α_{l,t−1} Σ_p ∂L/∂α*_{p,t−1}
  γ_{3,t} = γ_{3,t−1} − δ ∂L/∂γ_{3,t−1} = γ_{3,t−1} − δ α_{t−1} Σ_p ∂L/∂α*_{p,t−1}
  γ_1 = γ_1/(γ_1 + γ_2 + γ_3), γ_2 = γ_2/(γ_1 + γ_2 + γ_3), γ_3 = γ_3/(γ_1 + γ_2 + γ_3)
  m_t = φ_t(g_1, ..., g_t)
  V_t = ψ_t(g_1, ..., g_t)
  θ_t = θ_{t−1} − α*_{p,t} m_t/√V_t
end
return θ_T, γ_{1,T}, γ_{2,T}, γ_{3,T}, α_{p,T}, α_{l,T}, α_T

for Adam, φ_t(g_1, ..., g_t) = (1 − β_1) Σ_{i=1}^t β_1^{t−i} g_i and ψ_t(g_1, ..., g_t) = (1 − β_2) diag(Σ_{i=1}^t β_2^{t−i} g_i²).
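To make Algorithm 1 concrete, here is a minimal sketch of three-level CAM-HD with the SGD base optimizer (φ_t = g_t, ψ_t = 1) on a toy quadratic. This is our own illustration under stated assumptions, not the authors' implementation: the two-block "network", all constants, and the use of −h_{p,t} as a stand-in for ∂L/∂α*_{p,t−1} are illustrative.

import numpy as np

# Three-level CAM-HD sketch, SGD base optimizer, toy two-"layer" quadratic.
rng = np.random.default_rng(1)
A = [np.diag([1.0, 2.0, 3.0, 0.5]), np.diag([4.0, 1.5, 2.5])]
layers = [rng.normal(size=4), rng.normal(size=3)]

def grads(th):
    return [Ai @ ti for Ai, ti in zip(A, th)]  # per-layer gradients

a_p = [np.full_like(t, 0.02) for t in layers]  # parameter-wise rates
a_l = np.full(len(layers), 0.02)               # layer-wise rates
a_g = 0.02                                     # global rate
gam = np.array([0.3, 0.3, 0.4])                # gamma_1, gamma_2, gamma_3
beta, delta = 1e-4, 1e-3

g_prev = grads(layers)
layers = [t - a_g * g for t, g in zip(layers, g_prev)]
for step in range(300):
    g = grads(layers)
    # Eq. (8) hyper-gradients (SGD base): h_p = g_t * g_{t-1} element-wise
    h_p = [gi * gpi for gi, gpi in zip(g, g_prev)]
    h_l = np.array([hp.sum() for hp in h_p])
    h_g = h_l.sum()
    # level-wise learning-rate updates from Algorithm 1
    a_p = [ap + beta * gam[0] * hp for ap, hp in zip(a_p, h_p)]
    a_l = a_l + beta * gam[1] * h_l
    a_g = a_g + beta * gam[2] * h_g
    # combined rate alpha*_p = gamma_1 a_p + gamma_2 a_l + gamma_3 a_g
    a_star = [gam[0] * ap + gam[1] * al + gam[2] * a_g
              for ap, al in zip(a_p, a_l)]
    # combination-weight updates; dL/dalpha*_p is approximated by -h_p
    dgam = np.array([
        sum(float(-hp @ ap) for hp, ap in zip(h_p, a_p)),        # gamma_1
        sum(float(-hp.sum() * al) for hp, al in zip(h_p, a_l)),  # gamma_2
        float(-h_g * a_g),                                       # gamma_3
    ])
    gam = gam - delta * dgam
    gam = gam / gam.sum()           # re-normalize so the weights sum to 1
    # model update theta_t = theta_{t-1} - alpha*_p * g_t (SGD base)
    g_prev = g
    layers = [t - ast * gi for t, ast, gi in zip(layers, a_star, g)]
print("final loss:", sum(0.5 * t @ Ai @ t for Ai, t in zip(A, layers)))

The re-normalization line mirrors the last γ step of Algorithm 1, keeping the three combination weights on the simplex throughout training.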
For each base optimizer, the corresponding u(Θ_{t−2}, α) in Eq. (8) should be changed accordingly. Notice that in each updating time step of Algorithm 1, we re-normalize the combination weights γ_1, γ_2, and γ_3 to ensure that their sum remains 1 even after updating with stochastic gradient-based methods; an alternative is to apply a softmax function. In addition, the training of the γs can be extended to the multi-level case, meaning we can have different combination weights in different layers. For the updating rates β_p, β_l, and β_g of the learning rates at the different levels, we set β_p = n_p β = β, β_l = n_l β, and β_g = nβ, where β is a shared parameter (n_p = 1 for a single parameter, n_l is the number of parameters in layer l, and n is the total number of parameters). This setting puts the updating steps of the learning rates at different levels on the same scale, accounting for their difference in the number of parameters. An alternative is to first average Eq. (8) over the number of parameters." }, { "heading": "2.4 CONVERGENCE ANALYSIS AND ALGORITHM COMPLEXITY", "text": "The proposed CAM-HD is not an independent optimization method; it can be applied within any gradient-based updating rule, and its convergence properties depend strongly on the base optimizer. Following the discussion of convergence in (Baydin et al., 2017), we introduce κ_{p,t} = τ(t)α*_{p,t} + (1 − τ(t))α_∞, where the function τ(t) is selected to satisfy t·τ(t) → 0 as t → ∞, and α_∞ is a chosen constant value. We can then state the convergence analysis for the three-level case in the following theorem.

Theorem 2 (Convergence under mild assumptions about f). Suppose that f is convex and L-Lipschitz smooth with ‖∇_p f(θ)‖ < M_p, ‖∇_l f(θ)‖ < M_l, ‖∇_g f(θ)‖ < M_g for some fixed M_p, M_l, M_g and all θ. Then θ_t → θ* if α_∞ < 1/L, where L is the Lipschitz constant for all the gradients and t·τ(t) → 0 as t → ∞, and where the θ_t are generated according to (non-stochastic) gradient descent.

In the above theorem, ∇_p is the gradient of the target function w.r.t. a model parameter with index p, ∇_l is the average gradient of the target function w.r.t. the parameters in a layer with index l, and ∇_g is the global average gradient w.r.t. all model parameters. The proof of this theorem is given in Appendix A.3. Notice that when we introduce κ_{p,t} in place of α*_{p,t} in Algorithm 1, the corresponding gradients ∂L(θ)/∂α*_{p,t−1} are replaced by (∂L(θ)/∂κ_{p,t−1})(∂κ_{p,t−1}/∂α*_{p,t−1}) = (∂L(θ)/∂κ_{p,t−1}) τ(t).

We also analyze the number of parameters and the algorithm complexity (see Appendix A.4). Our method does not increase the number of model parameters, but it requires extra space during training. It also incurs an extra time complexity ∆T, which is at least one order smaller than the cost T of training with the baseline setting; the ratio satisfies ∆T/T ≪ 1/m_b, the inverse of the batch size." }, { "heading": "3 EXPERIMENTS", "text": "We use feed-forward neural network models and different types of convolutional neural networks on multiple benchmark datasets to compare with existing baseline optimizers. For each learning task, the following optimizers are applied: (a) standard baseline optimizers such as Adam and SGD; (b) hyper-gradient descent (Baydin et al., 2017); (c) L4 stepsize adaptation for standard optimizers (Rolinek and Martius, 2018); (d) the Adabound optimizer (Luo et al., 2018); (e) the RAdam optimizer (Liu et al., 2019); and (f) the proposed adaptive combination of different levels of hyper-descent. The implementation of (b) is based on the code provided with the original paper.
For each experiment, we provide the average curve and standard error bars over ten runs at each time step." }, { "heading": "3.1 HYPER-PARAMETER TUNING", "text": "To compare the effect of CAM-HD with the baseline optimizers, we first tune the hyper-parameters of the training process with the baseline optimizers by referring to related papers (Kingma and Ba, 2015; Baydin et al., 2017; Rolinek and Martius, 2018; Luo et al., 2018) as well as by an independent grid search. We mainly consider the batch size, learning rate, and momentum for models with different architectures. The search space for the batch size is {2^n}_{n=3,...,9}, while the search spaces for the learning rate, the hyper-gradient updating rate, and the combination-weight updating rate (CAM-HD-lr) are {10^{−1}, ..., 10^{−4}}, {10^{−1}, ..., 10^{−10}}, and {10^{−1}, ..., 10^{−4}}, respectively. The selection criterion is the 5-fold cross-validation loss with early stopping at a patience of 3. The optimized hyper-parameters for the tasks in this paper are given in Table 1. For learning tasks with recommended learning-rate schedules, we apply those schedules as well." }, { "heading": "3.2 COMBINATION RATIO AND MODEL PERFORMANCES", "text": "First, we study the combination of learning rates at different levels. The simulations are based on image classification on MNIST and CIFAR10 (LeCun et al., 1998; Krizhevsky and Hinton, 2012). One feed-forward neural network with three hidden layers of size [100, 100, 100] and two convolutional network models, LeNet-5 (LeCun et al., 2015) and ResNet-18 (He et al., 2016), are implemented. We use the full training sets of MNIST and CIFAR10 for training and the full test sets for validation. In each case, two levels of learning rates are considered: global and layer-wise adaptation for the FFNN, and global and filter-wise adaptation for the CNNs. The Adam-CAM-HD optimizer is implemented in all three simulations. We vary the combination weights of the two levels in each case and record the test classification accuracy at epoch 10, with the corresponding updating rate δ = 0. The other hyper-parameter settings follow Table 1. We conduct ten runs at each combination weight with different parameter initializations for all three simulations and draw error bars for standard errors. The results are given in Figure 1. In all three cases, the optimal performance is achieved neither at the fully global level nor at the fully layer/filter level, but at a combination of the two levels of adaptive learning rates, and the differences in model performance between the endpoints and the optimal combination are statistically significant. This supports our analysis in Section 2.2. Moreover, in real training processes the learning may favor different combination weights at different stages, which motivates online updating of the combination weights." }, { "heading": "3.3 FEED FORWARD NEURAL NETWORK FOR IMAGE CLASSIFICATION", "text": "This experiment is conducted with feed-forward neural networks for image classification on MNIST, which includes 60,000 training examples and 10,000 test examples. We use the full training set for training and the full test set for validation. Three FFNNs with different hidden-layer configurations are implemented: [100, 100], [1000, 100], and [1000, 1000]. Adaptive optimizers including Adam, Adam-HD with two hyper-gradient updating rates, and the proposed Adam-CAM-HD are applied.
For Adam-CAM-HD, we apply three-level parameter-layer-global adaptation with initialization γ_1 = γ_2 = 0.3 and γ_3 = 0.4, and two-level layer-global adaptation with γ_1 = γ_2 = 0.5. Figure 2 shows the validation accuracy of the different optimizers during 30 epochs of training. Both the two-level and three-level Adam-CAM-HD significantly outperform the baseline Adam optimizer with optimized hyper-parameters. For Adam-HD, we find that the default hyper-gradient updating rate (10^{−7}) for Adam used in (Baydin et al., 2017) is not optimal in our experiments; an optimized rate of 10^{−9} outperforms Adam but is still worse than Adam-CAM-HD with the default hyper-gradient updating rate (10^{−7})." }, { "heading": "3.4 LENET-5 FOR IMAGE CLASSIFICATION", "text": "The second experiment uses LeNet-5, an early convolutional neural network that does not involve many architectural or training tricks. We compare a set of adaptive Adam optimizers, including Adam, Adam-HD, Adam-CAM-HD, and L4, on image classification for MNIST, CIFAR10, and SVHN (Netzer et al., 2011). For Adam-CAM-HD, we apply a two-level setting with filter-wise and global learning-rate adaptation and initialize γ_1 = 0.2, γ_2 = 0.8. We also implement an exponential decay function τ(t) = exp(−rt), as discussed in Section 2.4, with rate r = 0.002, where t is the number of iterations. For L4, we use the recommended L4 learning rate of 0.15. For Adabound and RAdam, we also apply the hyper-parameters recommended in the original papers. The other hyper-parameter settings are optimized as in Table 1. As shown in Figure 3, Adam-CAM-HD again shows an advantage over the other methods in all three sub-experiments, except on MNIST, where L4 may perform better at a later stage. The experiments on CIFAR10 and SVHN indicate that the recommended hyper-parameters for Adabound, RAdam, and L4 can fail in some cases, with unstable accuracy curves. On the other hand, Adam-HD cannot significantly outperform Adam with the recommended and optimized hyper-gradient updating rate shared with Adam-CAM-HD. The corresponding summary of test performance is given in Table 2, in which the test accuracy of Adam-CAM-HD outperforms the other optimizers on both CIFAR10 and SVHN. In particular, it gives significantly better results than Adam and Adam-HD on all three datasets." }, { "heading": "3.5 RESNET FOR IMAGE CLASSIFICATION", "text": "In the third experiment, we apply ResNets to image classification on CIFAR10. We compare Adam and its adaptive variants, as well as SGD with Nesterov momentum (SGDN) and the corresponding adaptive optimizers, for training both ResNet-18 and ResNet-34. For the SGDN methods, we apply a learning-rate schedule in which the learning rate is initialized to a default value of 0.1 and reduced to 0.01, or to 10% of its current value (for SGDN-CAM-HD), after epoch 150. The momentum is set to 0.9 for all SGDN methods. For Adam-CAM-HD and SGDN-CAM-HD, we apply two-level CAM-HD with the same setting as in the second experiment. In addition, we apply an exponential decay function with decay rate r = 0.001. The validation accuracy, training loss, and validation loss are shown in Figure 4. The validation accuracy of Adam-CAM-HD reaches about 90% in 40 epochs and consistently outperforms the Adam, L4, and Adam-HD optimizers at later stages.
The L4 optimizer with the recommended hyper-parameters and an optimized weight-decay rate of 0.0005 (instead of the 1e-4 applied in the other Adam-based optimizers) outperforms baseline Adam for both ResNet-18 and ResNet-34; its training loss is the best among all methods, but with potential over-fitting. Adam-HD achieves better training loss than Adam after epoch 100. However, its validation performance is not good with the default hyper-gradient coefficient of 10^{−8} (shared with Adam-CAM-HD); an optimized coefficient of 10^{−9} instead yields a safe but small improvement over Adam. RAdam performs slightly better than Adam-CAM-HD in terms of validation accuracy, but the validation cross-entropy of both RAdam and Adabound is worse than that of our method. We also find that in training ResNet-18/34, the validation accuracy and validation loss of SGDN-CAM-HD slightly outperform SGDN in most epochs, even after the resetting of the learning rate at epoch 150.

The test performances of the different optimizers for ResNet-18 and ResNet-34 after 200 epochs of training are shown in Table 3. Notice that the results of SGDN and SGDN-CAM-HD are achieved with a piece-wise constant learning-rate schedule, while the results of the Adam-based optimizers are achieved without a learning-rate schedule. The proposed CAM-HD method improves the corresponding baseline method (Adam, Adam-HD, and SGDN) with statistical significance in almost every case. Adam-CAM-HD performs slightly worse than RAdam but comparably to Adabound in terms of the average test accuracy for both ResNet-18 and ResNet-34. As a higher-level adaptation method, CAM-HD can be applied on top of RAdam/Adabound/L4 for further improvement." }, { "heading": "4 CONCLUSION", "text": "In this study, we propose a gradient-based learning-rate adaptation strategy that introduces hierarchical, multi-level learning rates in deep neural networks. By considering the relationship between regularization and the combination of adaptive learning rates at different levels, we further propose a joint algorithm for adaptively learning each level's combination weight. Experiments on FFNNs, LeNet-5, and ResNet-18/34 indicate that the proposed methods can outperform standard Adam/SGDN and other baseline methods with statistical significance. Although the advantage is not guaranteed in every setting, our method achieves a higher level of adaptation and reduces continuously to the baseline methods under a specific set of hyper-parameters. This may motivate further study of hierarchical learning-rate systems for deep neural networks." }, { "heading": "A APPENDIX FOR PAPER: ADAPTIVE MULTI-LEVEL HYPER-GRADIENT DESCENT", "text": "" }, { "heading": "A.1 ANALYSIS ABOUT VIRTUAL APPROXIMATION", "text": "Consider the difference between Eq. (5) and Eq. (6) in the paper:

α*_{l,t} − α_{l,t} = −2βλ_{layer}((α̂_{l,t} − α̂_{g,t}) − (α_{l,t} − α_{g,t})).   (9)

Under the multi-level adaptation setting, on the right-hand side of Eq. (9) the global learning rate is updated without regularization, so α̂_{g,t} = α_{g,t}. For the layer-wise learning rates, the difference is α̂_{l,t} − α_{l,t} = 2βλ_{layer}(α_{l,t} − α_{g,t}), which corresponds to the gradient with respect to the regularization term. Thus, Eq. (9) can be rewritten as

α*_{l,t} − α_{l,t} = −2βλ_{layer}(2βλ_{layer}(α_{l,t} − α_{g,t})) = −4β²λ_{layer}²(1 − α_{g,t}/α_{l,t})α_{l,t},   (10)

which is the error of the virtual approximation introduced in Eq. (6) of the paper. If 4β²λ_{layer}² ≪ 1 or α_{g,t}/α_{l,t} → 1, this approximation becomes accurate.
Another way of handling Eq. (4) in the paper is to use the previous-step learning rates in the regularization term:

α_{l,t} ≈ α_{l,t−1} − β(−h_{l,t−1} + 2λ_{layer}(α_{l,t−1} − α_{g,t−1})).   (11)

Since α_{l,t} = α̂_{l,t} − 2βλ_{layer}(α_{l,t} − α_{g,t}) and α̂_{l,t} = α_{l,t−1} + βh_{l,t−1}, using the last-step learning rates in the regularization term introduces a larger deviation, coming from the term βh_{l,t−1}, relative to the true learning rates of the current step. We therefore consider the proposed virtual approximation to work better than the last-step approximation." }, { "heading": "A.2 PROOF OF THEOREM 1:", "text": "Proof. Consider the learning-rate regularizer

L_{lr_reg}(α) = Σ_{p∈P} Σ_{α_i∈p} Σ_{α_j∈ancestors(i)} λ_{ij}(α_i − α_j)².   (12)

To apply the hyper-gradient descent method to update the learning rate α_L at level L, we need the derivative of L_{lr_reg} with respect to α_L. The terms in Eq. (12) involving α_L are only the (α_i − α_j)² where α_j is an ancestor on the path from the root to the leaf node α_L. Hence

∂L_{full}(θ, α)/∂α_{L,t} = ∂L_{model}(θ, α)/∂α_{L,t} + ∂L_{lr_reg}(α)/∂α_{L,t} = −∇_{θ_L} f(θ_{t−1})^T ∇_{θ_L} u(Θ_{t−2}, α_{t−1}) + Σ_{α_j∈ancestors(L)} 2λ_{Lj}(α_{L,t} − α_{j,t}).   (13)

As there are exactly L−1 ancestors on the path, we can simply use the index j = 1, 2, ..., L−1. The corresponding updating rule for α_{L,t} is:

α_{L,t} = α_{L,t−1} − β(h_L + Σ_{j=1}^{L−1} 2λ_{Lj}(α_{L,t} − α_{j,t}))
       ≈ α̂_{L,t}(1 − 2β Σ_{j=1}^{L−1} λ_{Lj}) + Σ_{j=1}^{L−1} 2βλ_{Lj} α̂_{j,t}
       = Σ_{j=1}^L γ_j α̂_{j,t},   (14)

where

γ_L = 1 − 2β Σ_{j=1}^{L−1} λ_{Lj},  γ_j = 2βλ_{Lj} for j = 1, 2, ..., L−1.   (15)

This form satisfies α*_L = Σ_{j=1}^L γ_j α̂_j with Σ_{j=1}^L γ_j = 1. This completes the proof." }, { "heading": "A.3 PROOF OF THEOREM 2:", "text": "Proof. We take the three-level case discussed in Section 2 as an example, which includes the global level, layer level, and parameter level. Suppose that the target function f is convex and L-Lipschitz smooth at all levels, meaning that for all θ_1 and θ_2:

||∇_p f(θ_1) − ∇_p f(θ_2)|| ≤ L_p ||θ_1 − θ_2||,
||∇_l f(θ_1) − ∇_l f(θ_2)|| ≤ L_l ||θ_1 − θ_2||,
||∇_g f(θ_1) − ∇_g f(θ_2)|| ≤ L_g ||θ_1 − θ_2||,
L = max{L_p, L_l, L_g},   (16)

and that its gradients with respect to the parameter-wise, layer-wise, and global parameter groups satisfy ‖∇_p f(θ)‖ < M_p, ‖∇_l f(θ)‖ < M_l, ‖∇_g f(θ)‖ < M_g for some fixed M_p, M_l, M_g and all θ. Then the effective combined learning rate for each parameter satisfies:

|α*_{p,t}| = |γ_{p,t−1} α_{p,t} + γ_{l,t−1} α_{l,t} + γ_{g,t−1} α_t|
≤ (γ_{p,t−1} + γ_{l,t−1} + γ_{g,t−1}) α_0 + β Σ_{i=0}^{t−1} ( γ_{p,t−1} n_p max_p{|∇f(θ_{p,i+1})^T ∇f(θ_{p,i})|} + γ_{l,t−1} n_l max_l{|∇f(θ_{l,i+1})^T ∇f(θ_{l,i})|} + γ_{g,t−1} |∇f(θ_{g,i+1})^T ∇f(θ_{g,i})| )
≤ α_0 + β Σ_{i=0}^{t−1} ( γ_{p,t−1} n_p max_p{‖∇f(θ_{p,i+1})‖‖∇f(θ_{p,i})‖} + γ_{l,t−1} n_l max_l{‖∇f(θ_{l,i+1})‖‖∇f(θ_{l,i})‖} + γ_{g,t−1} ‖∇f(θ_{g,i+1})‖‖∇f(θ_{g,i})‖ )
≤ α_0 + tβ(n_p M_p² + n_l M_l² + M_g²),   (17)

where θ_{p,i} refers to the value of the parameter indexed by p at time step i, θ_{l,i} refers to the vector of parameters in the layer with index l at time step i, and θ_{g,i} refers to the whole set of model parameters at time step i. In addition, n_p and n_l are the total number of parameters and the number of layers, and we have used 0 < γ_p, γ_l, γ_g < 1. This gives an upper bound for the learning rate at each time step, which is O(t) as t → ∞. By introducing κ_{p,t} = τ(t)α*_{p,t} + (1 − τ(t))α_∞, where the function τ(t) is selected to satisfy t·τ(t) → 0 as t → ∞, we have κ_{p,t} → α_∞ as t → ∞. If α_∞ < 1/L, then for large enough t we have 1/(L+1) < κ_{p,t} < 1/L, and the algorithm converges whenever the corresponding gradient-based optimizer converges for such a learning rate under our assumptions about f. This follows the discussion in (Karimi et al., 2016; Sun, 2019).
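As a complement to the virtual-approximation analysis in A.1, the exact closed-form solution of the implicit two-level update in Eq. (5), alluded to in Section 2.2, can be written out directly. The short derivation below is our own addition and simply solves Eq. (5) for α_{l,t}:

% Solving the implicit two-level update of Eq. (5) for \alpha_{l,t}:
\alpha_{l,t} = \alpha_{l,t-1} - \beta\bigl(-h_{l,t-1} + 2\lambda_{\mathrm{layer}}(\alpha_{l,t} - \alpha_{g,t})\bigr)
\;\Longrightarrow\;
(1 + 2\beta\lambda_{\mathrm{layer}})\,\alpha_{l,t}
  = \underbrace{\alpha_{l,t-1} + \beta h_{l,t-1}}_{\hat{\alpha}_{l,t}}
  + 2\beta\lambda_{\mathrm{layer}}\,\alpha_{g,t},
\qquad
\alpha_{l,t} = \frac{\hat{\alpha}_{l,t} + 2\beta\lambda_{\mathrm{layer}}\,\alpha_{g,t}}{1 + 2\beta\lambda_{\mathrm{layer}}}.

The exact solution is again a convex combination of the virtual rate α̂_{l,t} and the global rate α_{g,t}, with weights 1/(1 + 2βλ_{layer}) and 2βλ_{layer}/(1 + 2βλ_{layer}); Eq. (6) is its first-order approximation in 2βλ_{layer}, consistent with the error bound in Eq. (10).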
" }, { "heading": "A.4 ALGORITHM COMPLEXITY", "text": "" }, { "heading": "A.4.1 NUMBER OF PARAMETERS AND SPACE COMPLEXITY", "text": "The proposed adaptive optimizer is for efficiently updating the model parameters; the number of final model parameters is not increased by introducing the CAM-HD optimizer. However, several extra intermediate variables are introduced during training. For example, in the three-level case for a feed-forward neural network with n_layer layers, we need to store h_{p,t}, h_{l,t}, and h_{g,t}, which have sizes S(h_{p,t}) = Σ_{l=1}^{n_layer−1}(n_l + 1)n_{l+1}, S(h_{l,t}) = n_layer, and S(h_{g,t}) = 1, respectively, where n_i is the number of units in the i-th layer. The learning rates α_{p,t}, α_{l,t}, α_{g,t}, and α*_{p,t} take sizes S(α_{p,t}) = Σ_{l=1}^{n_layer−1}(n_l + 1)n_{l+1}, S(α_{l,t}) = n_layer, S(α_{g,t}) = 1, and S(α*_{p,t}) = Σ_{l=1}^{n_layer−1}(n_l + 1)n_{l+1}, respectively. We also need a small set of scalar parameters to store γ_1, γ_2, γ_3 and other coefficients.

Considering that training the baseline models already requires storing the model parameters, the corresponding gradients, and the intermediate gradients of the chain rule, CAM-HD takes at most twice the space for storing intermediate variables in the worst case. For two-level learning-rate adaptation with global and layer-wise learning rates, the extra space complexity of CAM-HD is one to two orders smaller than that of the baseline model during training." }, { "heading": "A.4.2 TIME COMPLEXITY", "text": "In CAM-HD, we need to calculate the gradient of the loss with respect to the learning rates at each level, i.e., h_{p,t}, h_{l,t}, and h_{g,t} in the three-level case. Since the gradient of each parameter is already known during normal model training, the extra computational cost comes from taking summations and updating the lowest-level learning rates. In general, this cost is linear in the number of differentiable parameters of the original model. Here we discuss the cases of feed-forward networks and convolutional networks.

Recall that for a feed-forward neural network the overall computational complexity is

T(n) = O(m · n_iter · Σ_{l=2}^{n_layer} n_l · n_{l−1} · n_{l−2}),   (18)

where m is the number of training examples, n_iter is the number of training iterations, and n_l is the number of units in the l-th layer. On the other hand, when using three-level CAM-HD with a parameter-wise lowest level, we need n_layer element-wise products to calculate h_{p,t} for all layers, n_layer matrix element summations to calculate h_{l,t} for all layers, and a list summation to calculate h_{g,t}. In addition, two element-wise summations are implemented to calculate α_{p,t} and α*_p. Therefore, the extra computational cost of CAM-HD is ∆T(n) = O(n_b · n_iter Σ_{l=2}^{n_layer}(n_l · n_{l−1} + n_l)), where n_b is the number of mini-batches for training. Notice that m_b = m/n_b is the batch size, which is usually larger than 100. This extra cost is more than one order smaller than the computational complexity of training the model without learning-rate adaptation. For the case where the lowest level is layer-wise, only one element-wise matrix product is needed in each layer to calculate h_{l,t}. For convolutional neural networks, the total time complexity of all convolutional layers is (He and Sun, 2015):

O(n_b · n_iter · Σ_{l=1}^{n_conv_layer}(n_{l−1} · s_l² · n_l · m_l²)),   (19)

where l is the index of a convolutional layer and n_conv_layer is the depth (number of convolutional layers).
n_l is the number of filters in the l-th layer, while n_{l−1} is the number of input channels of the l-th layer; s_l is the spatial size of the filter and m_l is the spatial size of the output feature map. If we treat convolutional filters as layers, the extra computational cost of CAM-HD in this case is ∆T(n) = O(n_b · n_iter Σ_{l=1}^{n_conv_layer}((n_{l−1} · s_l² + 1) · n_l)), which is still more than one order smaller than the cost of the model without learning-rate adaptation.

Therefore, for large networks, applying CAM-HD does not significantly increase the computational cost from a theoretical perspective." }, { "heading": "A.5 SUPPLEMENTARY EXPERIMENTAL RESULTS", "text": "" }, { "heading": "A.5.1 LEARNING OF COMBINATION WEIGHTS", "text": "Figures 5, 6, 7, and 8 give the learning curves of the combination weights with respect to the number of training iterations in each experiment; each curve is averaged over 5 trials with error bars. Through these figures, we can compare the updating curves across different models, datasets, and CAM-HD optimizers.

Figure 5 corresponds to the experiment of FFNNs on MNIST in Section 3.3, which is a three-level case. For different FFNN architectures, the learning behavior of the γs shows different patterns, even though they are trained on the same dataset. Meanwhile, the standard errors over multiple trials are much smaller than the changes in the average combination-weight values.

Figure 6 shows the learning curves of the γs in the experiments of LeNet-5 for MNIST image classification with SGD, SGDN, and Adam, trained on 10% of the original training set.

In addition, Figure 7 shows the learning curves of the γs in the experiments of LeNet-5 for CIFAR10 and SVHN image classification with Adam-CAM-HD.

As shown in Figure 6, the equilibrium values of the combination weights differ between SGD-CAM-HD, SGDN-CAM-HD, and Adam-CAM-HD. Although the initialization γ_1 = 0.2, γ_2 = 0.8 and the updating rate δ = 0.03 are the same for the three optimizers, the values of γ_1 and γ_2 change only by a small proportion when training with Adam-CAM-HD, while the change towards larger filter/layer-wise adaptation is much more significant when SGD-CAM-HD or SGDN-CAM-HD is used. The numerical results show that for SGDN-CAM-HD, the average weight for layer-wise adaptation γ_1 jumps from 0.2 to 0.336 in the first epoch, then drops back to 0.324 before increasing to about 0.388. For Adam-CAM-HD, the average γ_1 moves from 0.20 to 0.211, a change of about 5%. In Figure 7, both subplots are models trained with Adam-CAM-HD. For the updating curves in Figure 7(a), trained on CIFAR10, the combination weight for filter-wise adaptation moves from 0.20 to 0.188; for those in Figure 7(b), trained on SVHN, it moves from 0.20 to 0.195.

A similar effect can be observed in the learning curves of the γs for ResNet-18, given in Figure 8, for which we take only the first 8,000 iterations. Again, we find that in training ResNet-18 on CIFAR10, the combination weights of SGD/SGDN-CAM-HD change much faster than those of Adam-CAM-HD.
There are several reasons for this effect. First, in the cases where the γs do not move significantly, we apply Adam-CAM-HD, whose main learning rate (1e-3) is only about 1%-6% of the learning rate of SGD or SGDN (1e-1). As Algorithm 1 shows, the updating step of the γs is proportional to α, other terms being unchanged. Thus, for the same task and the same updating rate δ, the updating scale of the γs for Adam-CAM-HD can be much smaller than that for SGDN-CAM-HD. Second, this does not mean that applying a much larger δ for Adam-CAM-HD would leave the combination weights unchanged or fail to improve performance; it simply means that a small δ can also achieve good performance, owing to the quality of the initialization points. Third, it is possible that Adam requires a lower level of combination-ratio adaptation than SGD/SGDN for the same network architecture, since Adam itself involves stronger adaptiveness." }, { "heading": "A.5.2 OTHER EXPERIMENTAL RESULTS", "text": "In Figure 2, Figure 3, and Figure 4 of the paper, we have shown the validation-accuracy curves comparing different adaptive optimizers in a variety of learning tasks. Here we further provide the training and validation cross-entropy loss curves for the corresponding methods in these tasks. Figure 8 shows the full results for the FFNNs, and Figure 9 shows the results for LeNet-5." }
]
2020
null
SP:0007eeef2280b8cd027be08249b27e2116328ab8
[ "This paper studies interesting properties of generalized mirror descent (GMD) and its stochastic variant for nonconvex optimization problems. First, for GMD the paper shows linear convergence under the PL* condition (in Lemma 1) and establishes a new sufficient condition for linear convergence (in Theorem 2). Next, the work extends this result to the stochastic setting (in Theorem 3). Moreover, the implicit regularization of GMD is studied, which extends the previous studies by [Azizan et al.]." ]
The following questions are fundamental to understanding the properties of overparameterization in modern machine learning: (1) Under what conditions and at what rate does training converge to a global minimum? (2) What form of implicit regularization occurs through training? While significant progress has been made in answering both of these questions for gradient descent, they have yet to be answered more completely for general optimization methods. In this work, we establish sufficient conditions for linear convergence and obtain approximate implicit regularization results for generalized mirror descent (GMD), a generalization of mirror descent with a possibly time-dependent mirror. GMD subsumes popular first-order optimization methods including gradient descent, mirror descent, and preconditioned gradient descent methods such as Adagrad. By using the Polyak-Lojasiewicz inequality, we first present a simple analysis under which non-stochastic GMD converges linearly to a global minimum. We then present a novel, Taylor-series based analysis to establish sufficient conditions for linear convergence of stochastic GMD. As a corollary, our result establishes sufficient conditions and provides learning rates for linear convergence of stochastic mirror descent and Adagrad. Lastly, we obtain approximate implicit regularization results for GMD by proving that GMD converges to an interpolating solution that is approximately the closest interpolating solution to the initialization in ℓ2-norm in the dual space.
[ { "affiliations": [], "name": "TIME-DEPENDENT MIRRORS" } ]
[ { "authors": [ "Navid R. Azizan", "Babak Hassibi" ], "title": "Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Navid R. Azizan", "Sahin Lale", "Babak Hassibi" ], "title": "Stochastic Mirror Descent on Overparameterized Nonlinear Models", "venue": "In International Conference on Machine Learning (ICML) Generalization Workshop,", "year": 2019 }, { "authors": [ "Raef Bassily", "Mikhail Belkin", "Siyuan Ma" ], "title": "On exponential convergence of sgd in non-convex over-parametrized learning", "venue": "arXiv preprint arXiv:1811.02564,", "year": 2018 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Simon S Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": null, "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2011 }, { "authors": [ "Suriya Gunasekar", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ], "title": "Characterizing implicit bias in terms of optimization geometry", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Suriya Gunasekar", "Blake Woodworth", "Nathan Srebro" ], "title": "Mirrorless Mirror Descent: A More Natural Discretization of Riemannian Gradient Flow", "venue": "arXiv preprint arXiv:2004.01025,", "year": 2020 }, { "authors": [ "Hamed Karimi", "Julie Nutini", "Mark Schmidt" ], "title": "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Lojasiewicz Condition", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Chaoyue Liu", "Libin Zhu", "Mikhail Belkin" ], "title": "Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning", "venue": "arXiv preprint arXiv:2003.00307,", "year": 2020 }, { "authors": [ "Stanislaw Lojasiewicz" ], "title": "A topological property of real analytic subsets (in French)", "venue": "Les équations aux dérivées partielles,", "year": 1963 }, { "authors": [ "Arkadi S. Nemirovsky", "David B. Yudin" ], "title": "Problem Complexity and Method Efficiency in Optimization", "venue": "SIAM,", "year": 1983 }, { "authors": [ "Francesco Orabona", "Koby Crammer", "Nicolò Cesa-Bianchi" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Machine Learning,", "year": 2015 }, { "authors": [ "Boris Polyak" ], "title": "Gradient methods for minimizing functionals (in Russian)", "venue": "Zh. Vychisl. Mat. Mat. Fiz.,", "year": 1963 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D. Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Sharan Vaswani", "Francis Bach", "Mark Schmidt" ], "title": "Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Rachel Ward", "Xiaoxia Wu", "Léon Bottou" ], "title": "AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Xiaoxia Wu", "Simon S Du", "Rachel Ward" ], "title": "Global convergence of adaptive gradient methods for an over-parameterized neural network", "venue": null, "year": 2019 }, { "authors": [ "Yuege Xie", "Xiaoxia Wu", "Rachel Ward" ], "title": "Linear Convergence of Adaptive Stochastic Gradient Descent", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Bing Xu", "Naiyan Wang", "Tianqi Chen", "Mu Li" ], "title": "Empirical evaluation of rectified activations in convolutional network", "venue": null, "year": 2015 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Xingyu Zhou" ], "title": "On the Fenchel Duality between Strong Convexity and Lipschitz Continuous Gradient", "venue": "arXiv preprint arXiv:1803.06573,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent work has established the optimization and generalization benefits of over-parameterization in machine learning (Belkin et al., 2019; Liu et al., 2020; Zhang et al., 2017). In particular, several works including Vaswani et al. (2019); Du et al. (2018); Liu et al. (2020); Li & Liang (2018) have demonstrated that over-parameterized models converge to a global minimum when trained using stochastic gradient descent and that such convergence can occur at a linear rate. Independently, other work, such as Gunasekar et al. (2018), has characterized implicit regularization of over-parameterized models, i.e., the properties of the solution selected by a given optimization method, without proving convergence.

Recently, Azizan & Hassibi (2019); Azizan et al. (2019) simultaneously proved convergence and analyzed approximate implicit regularization for mirror descent (Beck & Teboulle, 2003; Nemirovsky & Yudin, 1983). In particular, by using the fundamental identity of stochastic mirror descent (SMD), they proved that SMD converges to an interpolating solution that is approximately the closest one to the initialization in Bregman divergence. However, these works do not provide a rate of convergence for SMD and assume that there exists an interpolating solution within ε (in Bregman divergence) of the initialization. In this work, we provide sufficient conditions for linear convergence and obtain approximate implicit regularization results for generalized mirror descent (GMD), an extension of mirror descent that introduces (1) a potential-free update rule and (2) a time-dependent mirror; namely, GMD with invertible φ^{(t)} : R^d → R^d and learning rate η is used to minimize a real-valued loss function f according to the update rule:

φ^{(t)}(w^{(t+1)}) = φ^{(t)}(w^{(t)}) − η∇f(w^{(t)}).   (1)

We discuss the stochastic version of GMD (SGMD) in Section 3. GMD generalizes both mirror descent and preconditioning methods. Namely, if for all t, φ^{(t)} = ∇ψ for some strictly convex function ψ, then GMD corresponds to mirror descent with potential ψ; if φ^{(t)} = G^{(t)} for some invertible matrix G^{(t)} ∈ R^{d×d}, then the update rule in equation (1) reduces to

w^{(t+1)} = w^{(t)} − η G^{(t)−1} ∇f(w^{(t)})

and hence represents applying a pre-conditioner to the gradient updates. The following is a summary of our results:

1. We provide a simple proof of linear convergence of GMD under the Polyak-Lojasiewicz inequality (Theorem 1).
2. We provide sufficient conditions under which SGMD converges linearly under an adaptive learning rate (Theorems 2 and 3)¹.
3. As corollaries to Theorems 1 and 3, in Section 5 we provide sufficient conditions for linear convergence of stochastic mirror descent as well as stochastic preconditioner methods such as Adagrad (Duchi et al., 2011).
4. We prove the existence of an interpolating solution and linear convergence of GMD to this solution for non-negative loss functions that locally satisfy the PL* inequality (Liu et al., 2020). This result (Theorem 4) provides approximate implicit regularization results for GMD: GMD converges linearly to an interpolating solution that is approximately the closest interpolating solution to the initialization in ℓ2-norm in the dual space induced by φ^{(t)}." }, { "heading": "2 RELATED WORK", "text": "Recent work (Azizan et al., 2019) established convergence of stochastic mirror descent (SMD) for nonlinear optimization problems.
It characterized the implicit bias of mirror descent by demonstrating that SMD converges to a global minimum that is within epsilon of the closest interpolating solution in Bregman divergence. The analysis in Azizan et al. (2019) relies on the fundamental identity of SMD and does not provide explicit learning rates or establish a rate of convergence for SMD in the nonlinear setting. The work in Azizan & Hassibi (2019) provided explicit learning rates for the convergence of SMD in the linear setting under a strongly convex potential, again without a rate of convergence. While these works established convergence of SMD, prior work by Gunasekar et al. (2018) analyzed the implicit bias of SMD without proving convergence.

A potential-based version of generalized mirror descent with time-varying regularizers was presented for online problems in Orabona et al. (2015). That work is primarily concerned with establishing regret bounds in the online learning setting, which differs from our setting of minimizing a loss function given a set of known data points. A potential-free formulation of GMD for the continuous flow was presented in Gunasekar et al. (2020).

The Polyak-Lojasiewicz (PL) inequality (Lojasiewicz, 1963; Polyak, 1963) serves as a simple condition for linear convergence in non-convex optimization problems and is satisfied in a number of settings including over-parameterized neural networks (Liu et al., 2020). Work by Karimi et al. (2016) demonstrated linear convergence of a number of descent methods (including gradient descent) under the PL inequality. Similarly, Vaswani et al. (2019) proved linear convergence of stochastic gradient descent (SGD) under the PL inequality and the strong growth condition (SGC), and Bassily et al. (2018) established the same rate for SGD under just the PL inequality. Soltanolkotabi et al. (2019) also used the PL inequality to establish a local linear convergence result for gradient descent on 1-hidden-layer over-parameterized neural networks.

Recently, Xie et al. (2020) established linear convergence for a norm version of Adagrad (Adagrad-Norm) using the PL inequality, while Wu et al. (2019) established linear convergence for Adagrad-Norm in the particular setting of over-parameterized neural networks with one hidden layer. An alternate analysis of Adagrad-Norm for smooth, non-convex functions was presented in Ward et al. (2019), resulting in a sub-linear convergence rate.

¹We also provide a fixed learning rate for monotonically decreasing gradients ∇f(w^{(t)}).

Instead of focusing on a specific method, the goal of this work is to establish sufficient conditions for linear convergence by applying the PL inequality to a more general setting (SGMD). We arrive at linear convergence for specific methods such as mirror descent and preconditioned gradient descent methods as corollaries. Moreover, our local convergence results provide an intuitive formulation of approximate implicit regularization for GMD and thus mirror descent. Namely, instead of resorting to Bregman divergence, we prove that GMD converges to an interpolating solution that is approximately the closest interpolating solution to the initialization in ℓ2-norm in the dual space induced by φ^{(t)}." }, { "heading": "3 ALGORITHM DESCRIPTION AND PRELIMINARIES", "text": "We begin with a formal description of SGMD. Let f_i : R^d → R denote real-valued, differentiable loss functions and let f(x) = (1/n) Σ_{i=1}^n f_i(x). In addition, let φ^{(t)} : R^d → R^d be an invertible function for all non-negative integers t.
We solve the optimization problem

arg min_{x∈R^d} f(x)

using stochastic generalized mirror descent with learning rate η²:

φ^{(t)}(w^{(t+1)}) = φ^{(t)}(w^{(t)}) − η∇f_{i_t}(w^{(t)}),   (2)

where i_t ∈ [n] is chosen uniformly at random. As described in the introduction, the above algorithm generalizes both gradient descent (where φ(x) = x) and mirror descent (where φ^{(t)}(x) = ∇ψ(x) for some strictly convex potential function ψ). In the case where φ^{(t)}(x) = G^{(t)}x for an invertible matrix G^{(t)} ∈ R^{d×d}, the update rule in equation (2) reduces to:

w^{(t+1)} = w^{(t)} − η G^{(t)−1} ∇f_{i_t}(w^{(t)})

Hence, when φ^{(t)} is an invertible linear transformation, equation (2) is equivalent to pre-conditioned gradient descent. We now present the Polyak-Lojasiewicz inequality and lemmas from optimization theory that will be used in our proofs³.

Polyak-Lojasiewicz (PL) Inequality. A function f : R^d → R is µ-PL if for some µ > 0:

(1/2)‖∇f(x)‖² ≥ µ(f(x) − f(x*)) for all x ∈ R^d,   (3)

where x* ∈ R^d is a global minimizer of f. A useful variation of the PL inequality is the PL* inequality introduced in Liu et al. (2020), which does not require knowledge of f(x*).

Definition. A function f : R^d → R is µ-PL* if for some µ > 0:

(1/2)‖∇f(x)‖² ≥ µf(x) for all x ∈ R^d.   (4)

A function that is µ-PL* is also µ-PL when f is non-negative. Additionally, we will typically assume that f is L-smooth (i.e., has L-Lipschitz continuous derivative).

Definition. A function f : R^d → R is L-smooth for L > 0 if for all x, y ∈ R^d: ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖.

If φ^{(t)}(x) = x for all t and x ∈ R^d, then SGMD reduces to SGD. If f is L-smooth and satisfies the PL inequality, then SGD converges linearly to a global minimum (Bassily et al., 2018; Karimi et al., 2016; Vaswani et al., 2019). Moreover, the following lemma (proven in Appendix A) shows that the PL* condition implies the existence of a global minimum x* for non-negative, L-smooth f.

Lemma 1. If f : R^d → R is µ-PL*, L-smooth and f(x) ≥ 0 for all x ∈ R^d, then gradient descent with learning rate η < 2/L converges linearly to x* satisfying f(x*) = 0.

²The framework also allows for adaptive learning rates by using η^{(t)} to denote a time-dependent step size. ³We assume all norms are the 2-norm unless stated otherwise.

Hence, in cases where the loss function is non-negative (for example, the squared loss), we can remove the usual assumption about the existence of a global minimum x* and instead assume that f satisfies the PL* inequality. We now state standard properties of L-smooth functions (Zhou, 2018), which will be used in our proofs.

Lemma 2. If f : R^d → R is L-smooth, then for all x, y ∈ R^d:
(a) f(y) ≤ f(x) + 〈∇f(x), y − x〉 + (L/2)‖y − x‖²,
(b) ‖∇f(x)‖² ≤ 2L(f(x) − f(x*)).

The following lemma relates µ and L (the proof is in Appendix B).

Lemma 3. If f : R^d → R is µ-PL and L-smooth, then µ ≤ L.

Using Lemma 2b in place of the strong growth condition (i.e., E_i[‖∇f_i(x)‖²] ≤ ρ‖∇f(x)‖²) yields slightly different learning rates when establishing convergence of stochastic descent methods (as is apparent from the different learning rates between Bassily et al. (2018) and Vaswani et al. (2019)). The following simple lemma will be used in the proof of Theorem 3.

Lemma 4. If f(x) = (1/n) Σ_{i=1}^n f_i(x), where the f_i : R^d → R are L_i-smooth, then f is sup_i L_i-smooth. Note that there could exist another constant L′ < sup_i L_i for which f is L′-smooth, but this upper bound suffices for our proof of Theorem 3.
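To make the update rule (2) and the role of the PL* condition in Lemma 1 concrete, the following self-contained sketch runs SGMD with a fixed linear mirror φ(x) = Gx (i.e., preconditioned SGD) on noiseless, over-parameterized least squares, where the squared loss is PL* with µ = λ_min(XX^T). This is our own illustration: the data dimensions, the diagonal mirror, and the step size 1/L are illustrative choices, not prescriptions from the paper.

import numpy as np

# SGMD (update (2)) with a fixed linear mirror phi(x) = G x, i.e.,
# preconditioned SGD, on noiseless over-parameterized least squares.
rng = np.random.default_rng(0)
n, d = 20, 50
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)                  # an interpolating w* exists

G = np.diag(rng.uniform(1.0, 2.0, size=d))  # invertible mirror, J_phi = G
G_inv = np.linalg.inv(G)

# The squared loss is PL* with mu = lambda_min(X X^T) (n <= d, full row
# rank); here <phi(x)-phi(y), x-y> >= alpha_l ||x-y||^2 with alpha_l = 1,
# so any step below 2*alpha_l/L is admissible; we use 1/L.
L = np.linalg.eigvalsh(X @ X.T).max()
eta = 1.0 / L

w = np.zeros(d)
for t in range(5000):
    i = rng.integers(n)                     # sample one component loss f_i
    g = (X[i] @ w - y[i]) * X[i]            # grad of 0.5*(x_i^T w - y_i)^2
    # phi(w_next) = phi(w) - eta*g  <=>  w_next = w - eta * G^{-1} g
    w -= eta * (G_inv @ g)
    if t % 1000 == 0:
        print(t, 0.5 * np.sum((X @ w - y) ** 2))  # geometric decay of f

With G set to the identity the loop reduces to plain SGD, matching the remark that GMD subsumes gradient descent.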
Lastly, we define and state standard properties of strongly convex functions (Zhou, 2018), which will be useful in demonstrating how our GMD results generalize those for mirror descent.

Definition. For α > 0, a differentiable function ψ : R^d → R is α-strongly convex if for all x, y,
ψ(y) ≥ ψ(x) + 〈∇ψ(x), y − x〉 + (α/2)‖y − x‖².

Lemma 5. If ψ : R^d → R is α-strongly convex, then for all x, y:
ψ(y) ≤ ψ(x) + 〈∇ψ(x), y − x〉 + (1/(2α))‖∇ψ(y) − ∇ψ(x)‖².

With these preliminaries in hand, we now present our proofs of linear convergence of SGMD using the PL inequality." }, { "heading": "4 SUFFICIENT CONDITIONS FOR LINEAR CONVERGENCE OF SGMD", "text": "In this section, we provide sufficient conditions to establish (expected) linear convergence for (stochastic) GMD. We first provide simple conditions under which GMD converges linearly by extending the proof strategy from Karimi et al. (2016). We then present alternate conditions for linear convergence of GMD, which can be naturally extended to the stochastic setting." }, { "heading": "4.1 SIMPLE CONDITIONS FOR LINEAR CONVERGENCE OF GMD", "text": "We begin with a simple set of conditions under which (non-stochastic) GMD converges linearly (the full proof is presented in Appendix C). The main benefit of this analysis is that it is a straightforward extension of the proof of linear convergence for gradient descent under the PL inequality presented in Karimi et al. (2016).

Theorem 1. Suppose f : R^d → R is L-smooth and µ-PL and φ^{(t)} : R^d → R^d is an invertible, α_u^{(t)}-Lipschitz function with lim_{t→∞} α_u^{(t)} < ∞. If for all x, y ∈ R^d and all time steps t there exist α_l^{(t)} > 0 such that

〈φ^{(t)}(x) − φ^{(t)}(y), x − y〉 ≥ α_l^{(t)}‖x − y‖²,

and lim_{t→∞} α_l^{(t)} > 0, then generalized mirror descent converges linearly to a global minimum for any η^{(t)} < 2α_l^{(t)}/L.

Remark. Theorem 1 yields a fixed learning rate provided that α_l^{(t)} is uniformly bounded. In addition, Theorem 1 also applies under weaker assumptions, namely when φ^{(t)} is locally Lipschitz. Finally, the provided learning rate can be computed exactly in settings such as linear regression, since it only requires knowledge of L and α_l^{(t)} (see Section 7). When η = α_l^{(t)}/L and w* is a minimizer of f, the proof of Theorem 1 implies that:

f(w^{(t+1)}) − f(w*) ≤ (1 − µα_l^{(t)2}/(Lα_u^{(t)2}))(f(w^{(t)}) − f(w*)).

Letting κ^{(t)} = Lα_u^{(t)2}/(µα_l^{(t)2}) thus generalizes the condition number introduced in Definition 4.1 of Liu et al. (2020) for gradient descent. Provided that κ = lim_{t→∞} κ^{(t)} > 0, Theorem 1 guarantees linear convergence to a global minimum. When α_l^{(t)}/α_u^{(t)} is decreasing in t, the rate is given by:

f(w^{(t+1)}) − f(w*) ≤ (1 − 1/κ)^{t+1}(f(w^{(0)}) − f(w*))." }, { "heading": "4.2 TAYLOR SERIES ANALYSIS FOR LINEAR CONVERGENCE IN GMD", "text": "Although the proof of Theorem 1 is succinct, it is nontrivial to extend to the stochastic setting⁴. In order to develop a convergence result for the stochastic setting, we turn to an alternate set of conditions for linear convergence by using the Taylor expansion of φ^{−1}. We use J_φ to denote the Jacobian of φ. For ease of notation, we consider non-time-dependent α_l, α_u, but our results extend trivially to the setting where these quantities are time-dependent.

Theorem 2. Suppose f : R^d → R is L-smooth and µ-PL and φ : R^d → R^d is an infinitely differentiable, analytic function with analytic inverse φ^{−1}. If there exist α_l, α_u > 0 such that

(a) α_l I ≼ J_φ ≼ α_u I,
(b) |∂_{i_1,...,i_k} φ_j^{−1}(x)| ≤ k!/(2α_u d) for all x ∈ R^d, i_1, ..., i_k ∈ [d], j ∈ [d], k ≥ 2,
then generalized mirror descent converges linearly for any η^{(t)} < min(4α_l²/(5Lα_u), 1/(2√d ‖∇f(w^{(t)})‖)).

The full proof is provided in Appendix D. Importantly, the adaptive component of the learning rate is only used to ensure that the sum of the higher-order terms of the Taylor expansion converges. In particular, if φ^{(t)} is a linear function, then our learning rate no longer needs to be adaptive. Note that, alternatively, we can establish linear convergence for a fixed learning rate given that the gradients decrease monotonically or if f is non-negative and µ-PL*. We analyze this case in Appendix E and provide an explicit condition on µ and L under which this holds." }, { "heading": "4.3 TAYLOR SERIES ANALYSIS FOR LINEAR CONVERGENCE IN STOCHASTIC GMD", "text": "The main benefit of the above Taylor series analysis is that it naturally extends to the stochastic setting, as demonstrated in the following result (with proof presented in Appendix F).

Theorem 3. Suppose f(x) = (1/n) Σ_{i=1}^n f_i(x), where the f_i : R^d → R are non-negative, L_i-smooth functions with L = sup_{i∈[n]} L_i, and f is µ-PL*. Let φ : R^d → R^d be an infinitely differentiable, analytic function with analytic inverse φ^{−1}. SGMD is used to minimize f according to the updates:

φ(w^{(t+1)}) = φ(w^{(t)}) − η^{(t)}∇f_{i_t}(w^{(t)}),

where i_t ∈ [n] is chosen uniformly at random and η^{(t)} is an adaptive step size. If there exist α_l, α_u > 0 such that:

(a) α_l I ≼ J_φ ≼ α_u I,
(b) |∂_{i_1,...,i_k} φ_j^{−1}(x)| ≤ k!µ/(2α_u dL) for all x ∈ R^d, i_1, ..., i_k ∈ [d], j ∈ [d], k ≥ 2,

then SGMD with η^{(t)} < min(4µα_l²/(5L²α_u), 1/(2√d max_i‖∇f_i(w^{(t)})‖)) converges linearly to a global minimum.

⁴The main difficulty is relating w^{(t+1)} − w^{(t)} to the gradient at time step t.

Remark. Note that there is a slight difference between the learning rates in Theorem 2 and Theorem 3 due to a multiplicative factor of µ. Consistent with the difference in learning rates between Bassily et al. (2018) and Vaswani et al. (2019), we can make the learning rates of the two theorems match if we assume the strong growth condition (i.e., E_i[‖∇f_i(x)‖²] ≤ ρ‖∇f(x)‖²) with ρ = µ instead of using Lemma 2b. Moreover, as max_i‖∇f_i(w^{(t)})‖ ≤ √(2nLf(w^{(0)})), we also establish linear convergence for the fixed step size η < min(4µα_l²/(5L²α_u), 1/(2√(2dnLf(w^{(0)}))))." }, { "heading": "5 COROLLARIES OF LINEAR CONVERGENCE IN SGMD", "text": "We now present how the linear convergence results established by Theorems 1, 2, and 3 apply to commonly used optimization algorithms, including mirror descent and Adagrad. In this section, we primarily extend the analysis of Theorem 1 in the non-stochastic case; our results can be extended analogously to give expected linear convergence in the stochastic case by using the extension provided in Theorem 3.

Gradient Descent. For the case of gradient descent, φ(x) = x and so α_l = α_u = 1. Hence, gradient descent converges linearly under the conditions of Theorem 1 with η < 2/L, which is consistent with the analysis in Karimi et al. (2016).

Mirror Descent. Let ψ : R^d → R be a strictly convex potential, so that φ(x) = ∇ψ(x) is an invertible function. If ψ is α_l-strongly convex, ∇ψ is (locally) α_u-Lipschitz, and f is L-smooth and µ-PL, then the conditions of Theorem 1 are satisfied. Moreover, since the α_u-Lipschitz condition holds locally for most potentials considered in practice, our result implies linear convergence for mirror descent with α_l-strongly convex potential ψ.

Adagrad.
Let φ^{(t)} = G^{(t)1/2}, where G^{(t)} is the diagonal matrix with entries

G^{(t)}_{i,i} = Σ_{k=0}^t (∇f(w^{(k)}))_i².

Then GMD corresponds to Adagrad. In this case, we can apply Theorem 1 to establish linear convergence of Adagrad under the PL inequality, provided that φ^{(t)} satisfies the condition of Theorem 1. The following corollary proves that this condition holds and hence that Adagrad converges linearly.

Corollary 1. Let f : R^d → R be an L-smooth function that is µ-PL. Let α_l^{(t)2} = min_{i∈[d]} G^{(t)}_{i,i} and α_u^{(t)2} = max_{i∈[d]} G^{(t)}_{i,i}. If lim_{t→∞} α_l^{(t)}/α_u^{(t)} ≠ 0, then Adagrad converges linearly for the adaptive step size η^{(t)} = α_l^{(t)}/L.

The proof is presented in Appendix H. While Corollary 1 can be extended to the stochastic setting via Theorem 3, this requires knowledge of µ to set up the learning rate, and the resulting learning rate is typically smaller than what can be used in practice. We analyze this case further in Section 7. Additionally, since the condition lim_{t→∞} α_l^{(t)}/α_u^{(t)} ≠ 0 is difficult to verify in practice, we provide Corollary 2 in Appendix H, which presents a verifiable condition under which Adagrad converges linearly." }, { "heading": "6 LOCAL CONVERGENCE AND IMPLICIT REGULARIZATION IN GMD", "text": "In the previous sections, we established linear convergence of GMD for a real-valued loss f : R^d → R that is µ-PL for all x ∈ R^d. In this section, we show that f need only satisfy the PL inequality locally (i.e., within a ball of fixed radius around the initialization) in order to establish linear convergence. The following theorem (proof in Appendix G) extends Theorem 4.2 of Liu et al. (2020) to GMD and uses the PL* condition to establish both the existence of a global minimum and linear convergence to this global minimum under GMD⁵. We use B(x, R) = {z ∈ R^d : ‖x − z‖₂ ≤ R} to denote the ball of radius R centered at x.

⁵We require additional assumptions on φ^{(t)} for the case of time-dependent mirrors (see Appendix G).

Theorem 4. Suppose φ : R^d → R^d is an invertible, α_u-Lipschitz function and that f : R^d → R is non-negative, L-smooth, and µ-PL* on B̃ = {x : φ(x) ∈ B(φ(w^{(0)}), R)} with R = 2√(2L f(w^{(0)})) α_u²/(α_l µ). If for all x, y ∈ R^d there exists α_l > 0 such that

〈φ(x) − φ(y), x − y〉 ≥ α_l‖x − y‖²,

then:

(1) There exists a global minimum w^{(∞)} ∈ B̃.
(2) GMD converges linearly to w^{(∞)} for η = α_l/L.
(3) If w* = argmin_{w∈B̃, f(w)=0} ‖φ(w) − φ(w^{(0)})‖, then ‖φ(w*) − φ(w^{(∞)})‖ ≤ 2R.

Approximate Implicit Regularization in GMD. When R is small, we can view the result of Theorem 4 as a characterization of the solution selected by GMD, thereby obtaining approximate implicit regularization results for GMD. Namely, for δ = 2R, we have ‖φ(w*) − φ(w^{(∞)})‖ ≤ δ. Hence, provided that R is small (which holds for small f(w^{(0)})), GMD selects an interpolating solution that is close to w* in ℓ2-norm in the dual space induced by φ. This view is consistent with the characterization of approximate implicit regularization in Azizan et al. (2019), as shown by Corollary 3 in Appendix I. In particular, Corollary 3 implies the assumptions used in Azizan et al. (2019) for the full-batch case by proving (1) the existence of such a w^{(∞)}, (2) linear convergence of w^{(0)} to w^{(∞)}, and (3) an explicit form for ε (where ε = 2R above). Importantly, the approximate implicit regularization result for mirror descent does not need to be stated in terms of Bregman divergence, but can be viewed more naturally as ‖∇ψ(w^{(∞)}) − ∇ψ(w*)‖₂ being small.
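As a bridge to the experiments of Section 7, here is a minimal sketch (ours, with illustrative data, sizes, and the sum-form squared loss) of full-batch Adagrad run with the adaptive step size η^{(t)} = α_l^{(t)}/L from Corollary 1 on noiseless linear regression:

import numpy as np

# Full-batch Adagrad as GMD with phi^(t) = diag(G^(t))^(1/2), using the
# Corollary 1 step eta^(t) = alpha_l^(t)/L on noiseless linear regression.
rng = np.random.default_rng(2)
n, d = 40, 60
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)

L = np.linalg.eigvalsh(X @ X.T).max()  # smoothness of f(w)=0.5||Xw-y||^2
w = np.zeros(d)
G = np.zeros(d)                        # Adagrad accumulator (diagonal)

for t in range(2000):
    grad = X.T @ (X @ w - y)
    G += grad ** 2                     # G_{i,i}^{(t)} = sum_k (grad_i)^2
    alpha_l = np.sqrt(G.min())         # alpha_l^{(t)2} = min_i G_{i,i}
    eta = alpha_l / L                  # Corollary 1 adaptive step size
    w -= eta * grad / (np.sqrt(G) + 1e-12)  # w - eta * G^{-1/2} grad
    if t % 500 == 0:
        print(t, 0.5 * np.sum((X @ w - y) ** 2), "eta:", eta)

Note that eta grows as the accumulator fills, matching the increasing adaptive schedule observed in Section 7, while the effective per-coordinate step eta/√G_{i,i} stays below 1/L, which keeps the iteration stable.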
}, { "heading": "7 EXPERIMENTAL VERIFICATION OF OUR THEORETICAL RESULTS", "text": "We now present a simple set of experiments under which we can explicitly compute the learning rates in our theorems. We will show that in accordance with our theory, both fixed and adaptive versions of these learning rates yield linear convergence. We focus on computing learning rates for Adagrad in the noiseless regression setting used in Xie et al. (2020). Namely, we are given (X, y) ∈ Rn×d × Rn such that there exists a w∗ ∈ Rd such that Xw∗ = y. If n < d, then the system is over-parameterized, and if n ≥ d, the system is sufficiently parameterized and has a unique solution.\nIn this setting, the squared loss (MSE) is L-smooth with L = λmax(XXT ), and it is µ-PL with µ = λmin(XX\nT ) where λmax and λmin refer to the largest and smallest non-zero eigenvalues, respectively6. Moreover, for Adagrad, we can compute α(t)l = mini∈[d]( ∑t k=0∇fi(w(k)))2 and α (t) u = maxi∈[d]( ∑t k=0∇fi(w(k)))2 at each timestep. Hence for Adagrad in the noiseless linear regression setting, we can explicitly compute the learning rate provided in Theorem 3 for the stochastic setting and in Corollary 1 for the full batch setting.\nFigure 1 demonstrates that in both, the over-parameterized and sufficiently parameterized settings, our provided learning rates yield linear convergence. In the stochastic setting, the theory for fixed learning rates suggests a very small rate (≈ 10−9 for Figure 1d) and hence we chose to only present the more reasonable adaptive step size as a comparison. In the full batch setting, the learning rate obtained from our theorems out-performs using the standard fixed learning rate of 0.1, while performance is comparable for the stochastic setting. Interestingly, our theory suggests an adaptive learning rate that is increasing (in contrast to the usual decreasing learning rate schedules). In particular, while the suggested learning rate for Figure 1a starts at 0.99, it increases to 1.56 at the end of training.\nIn Appendix J, we present experiments on over-parameterized neural networks. While the PLcondition holds in this setting (Liu et al., 2020), it can be difficult to compute the smoothness parameter L (which was the motivation for developing Adagrad-Norm). Interestingly, our experiments\n6We take µ as the smallest non-zero eigenvalue since Adagrad updates keep parameters in the span of the data.\nConvergence of Adagrad\nConvergence of Stochastic Adagrad Noiseless Regression\ndemonstrate that our increasing adaptive learning rate from Theorem 1, using an approximation for L, provides convergence for Adagrad in over-parameterized networks. The link to the code is provided in Appendix J." }, { "heading": "8 CONCLUSION", "text": "In this work, we presented stochastic generalized mirror descent, which generalizes both mirror descent and pre-conditioner methods. By using the PL-condition and a Taylor-series based analysis, we provided sufficient conditions for linear convergence of SGMD in the non-convex setting. As a corollary, we obtained sufficient conditions for linear convergence of both mirror descent and preconditioner methods such as Adagrad. Lastly, we prove the existence of an interpolating solution and linear convergence of GMD to this solution for non-negative loss functions that are locally PL*. Importantly, our local convergence results allow us to obtain approximate implicit regularization results for GMD. 
" }, { "heading": "8 CONCLUSION", "text": "In this work, we presented stochastic generalized mirror descent, which generalizes both mirror descent and pre-conditioner methods. By using the PL-condition and a Taylor-series based analysis, we provided sufficient conditions for linear convergence of SGMD in the non-convex setting. As a corollary, we obtained sufficient conditions for linear convergence of both mirror descent and pre-conditioner methods such as Adagrad. Lastly, we prove the existence of an interpolating solution and linear convergence of GMD to this solution for non-negative loss functions that are locally PL*. Importantly, our local convergence results allow us to obtain approximate implicit regularization results for GMD. Namely, we prove that GMD linearly converges to an interpolating solution that is approximately the closest interpolating solution to the initialization in $\ell_2$-norm in the dual space. For the full batch setting, this result provides a more natural characterization of implicit regularization in terms of $\ell_2$-norm in the dual space, as opposed to Bregman divergence.

Looking ahead, we envision that the generality of our analysis (and of the PL-condition) could prove useful in the analysis of other commonly used adaptive methods such as Adam (Kingma & Ba, 2015). Moreover, since the PL-condition holds in varied settings including over-parameterized neural networks (Liu et al., 2020), it would be interesting to analyze whether the learning rates obtained here provide an improvement for convergence in these modern settings." }, { "heading": "A PROOF OF LEMMA 1", "text": "We restate the lemma below.

Lemma. If $f : \mathbb{R}^d \to \mathbb{R}$ is $\mu$-PL*, $L$-smooth and $f(x) \geq 0$ for all $x \in \mathbb{R}^d$, then gradient descent with learning rate $\eta < \frac{2}{L}$ converges linearly to $x^*$ satisfying $f(x^*) = 0$.

Proof. The proof follows exactly from Theorem 1 of Karimi et al. (2016). Since $f$ is $L$-smooth, by Lemma 2a it holds that
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle + \frac{L}{2}\|w^{(t+1)} - w^{(t)}\|^2$$
$$\implies f(w^{(t+1)}) - f(w^{(t)}) \leq -\eta\|\nabla f(w^{(t)})\|^2 + \frac{L}{2}\eta^2\|\nabla f(w^{(t)})\|^2$$
$$\implies f(w^{(t+1)}) - f(w^{(t)}) \leq \left(-\eta + \frac{\eta^2 L}{2}\right)2\mu f(w^{(t)})$$
$$\implies f(w^{(t+1)}) \leq \left(1 - 2\mu\eta + \mu\eta^2 L\right)f(w^{(t)}).$$
Hence, if $\eta < \frac{2}{L}$, then $C = 1 - 2\mu\eta + \mu\eta^2 L < 1$, and so $f(w^{(t+1)}) \leq Cf(w^{(t)})$ with $C < 1$. Thus, as $f$ is bounded below by 0 and the sequence $\{f(w^{(t)})\}_{t\in\mathbb{N}}$ monotonically decreases with infimum 0, the monotone convergence theorem implies $\lim_{t\to\infty} f(w^{(t)}) = 0$." }, { "heading": "B PROOF OF LEMMA 3", "text": "Proof. From Lemma 2 and from the PL condition, we have
$$2\mu(f(x) - f(x^*)) \leq \|\nabla f(x)\|^2 \leq 2L(f(x) - f(x^*)) \implies \mu \leq L.$$" }, { "heading": "C PROOF OF THEOREM 1", "text": "Proof. Since $f$ is $L$-smooth, by Lemma 2a it holds that
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle + \frac{L}{2}\|w^{(t+1)} - w^{(t)}\|^2. \quad (5)$$
Now, by the condition on $\phi^{(t)}$ in Theorem 1, we bound the first term on the right as follows:
$$\langle\phi^{(t)}(w^{(t+1)}) - \phi^{(t)}(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle \geq \alpha_l^{(t)}\|w^{(t+1)} - w^{(t)}\|^2$$
$$\implies \langle-\eta\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle \geq \alpha_l^{(t)}\|w^{(t+1)} - w^{(t)}\|^2 \quad \text{using Equation (2)}$$
$$\implies \langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle \leq -\frac{\alpha_l^{(t)}}{\eta}\|w^{(t+1)} - w^{(t)}\|^2.$$
Substituting this bound back into the inequality in (5), we obtain
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \left(-\frac{\alpha_l^{(t)}}{\eta} + \frac{L}{2}\right)\|w^{(t+1)} - w^{(t)}\|^2.$$
Since the learning rate is selected so that the coefficient of $\|w^{(t+1)} - w^{(t)}\|^2$ on the right is negative, we obtain
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \left(-\frac{\alpha_l^{(t)}}{\eta} + \frac{L}{2}\right)\frac{1}{\alpha_u^{(t)2}}\|\phi^{(t)}(w^{(t+1)}) - \phi^{(t)}(w^{(t)})\|^2 = \left(-\frac{\alpha_l^{(t)}}{\eta} + \frac{L}{2}\right)\frac{1}{\alpha_u^{(t)2}}\|-\eta\nabla f(w^{(t)})\|^2 \quad \text{using Equation (1)}$$
$$\leq \left(-\frac{\alpha_l^{(t)}}{\eta} + \frac{L}{2}\right)\frac{2\mu\eta^2}{\alpha_u^{(t)2}}(f(w^{(t)}) - f(w^*)) \quad \text{as } f \text{ is } \mu\text{-PL}$$
$$\implies f(w^{(t+1)}) - f(w^*) \leq \left(1 - \frac{2\mu\eta\alpha_l^{(t)}}{\alpha_u^{(t)2}} + \frac{\mu L\eta^2}{\alpha_u^{(t)2}}\right)(f(w^{(t)}) - f(w^*)),$$
where the second inequality follows since $\phi^{(t)}$ is $\alpha_u^{(t)}$-Lipschitz. For linear convergence, we need
$$0 < 1 - \frac{2\mu\eta\alpha_l^{(t)}}{\alpha_u^{(t)2}} + \frac{\mu L\eta^2}{\alpha_u^{(t)2}} < 1. \quad (6)$$
From Lemma 3, $\mu < \frac{\alpha_u^{(t)2}L}{\alpha_l^{(t)2}}$ always holds, which implies that the left inequality in (6) is satisfied for all $\eta^{(t)}$. The right inequality holds by our assumption that $\eta^{(t)} < \frac{2\alpha_l^{(t)}}{L}$, which completes the proof." }, { "heading": "D PROOF OF THEOREM 2", "text": "We repeat the theorem below for convenience.

Theorem.
Suppose $f : \mathbb{R}^d \to \mathbb{R}$ is $L$-smooth and $\mu$-PL and $\phi : \mathbb{R}^d \to \mathbb{R}^d$ is an infinitely differentiable, analytic function with analytic inverse $\phi^{-1}$. If there exist $\alpha_l, \alpha_u > 0$ such that:
(a) $\alpha_l I \preceq J_\phi \preceq \alpha_u I$,
(b) $|\partial_{i_1,\ldots,i_k}\phi^{-1}_j(x)| \leq \frac{k!}{2\alpha_u d}$ for all $x \in \mathbb{R}^d$, $i_1,\ldots,i_k \in [d]$, $j \in [d]$, $k \geq 2$,
then generalized mirror descent converges linearly for $\eta^{(t)} < \min\left(\frac{4\alpha_l^2}{5L\alpha_u}, \frac{1}{2\sqrt{d}\,\|\nabla f(w^{(t)})\|}\right)$.

Proof. Since $f$ is $L$-smooth, by Lemma 2 it holds that
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle + \frac{L}{2}\|w^{(t+1)} - w^{(t)}\|^2.$$
Next, we want to bound the two quantities on the right hand side by a multiple of $\|\nabla f(w^{(t)})\|^2$. We do so by expanding $w^{(t+1)} - w^{(t)}$ using the Taylor series of $\phi^{-1}$:
$$w^{(t+1)} - w^{(t)} = \phi^{-1}(\phi(w^{(t)}) - \eta\nabla f(w^{(t)})) - w^{(t)} = -\eta J_{\phi^{-1}}(\phi(w^{(t)}))\nabla f(w^{(t)}) + \sum_{k=2}^{\infty}\frac{1}{k!}\left[\sum_{i_1,\ldots,i_k=1}^{d}(-\eta)^k\,\partial_{i_1,\ldots,i_k}\phi^{-1}_j(\phi(w^{(t)}))\,\nabla f(w^{(t)})_{i_1}\cdots\nabla f(w^{(t)})_{i_k}\right]_j.$$
The quantity in brackets is a column vector, of which we only write out the $j$th coordinate for $j \in [d]$. Now we bound the term $\langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle$:
$$\langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle = -\eta\nabla f(w^{(t)})^T J_\phi^{-1}(w^{(t)})\nabla f(w^{(t)}) + \nabla f(w^{(t)})^T\sum_{k=2}^{\infty}\frac{1}{k!}\left[\sum_{i_1,\ldots,i_k=1}^{d}(-\eta)^k\,\partial_{i_1,\ldots,i_k}\phi^{-1}_j(\phi(w^{(t)}))\,\nabla f(w^{(t)})_{i_1}\cdots\nabla f(w^{(t)})_{i_k}\right]_j.$$
We have separated the first order term from the higher orders because we bound them separately, using conditions (a) and (b) respectively. Namely, we first have
$$-\eta\nabla f(w^{(t)})^T J_\phi^{-1}(w^{(t)})\nabla f(w^{(t)}) \leq -\frac{\eta}{\alpha_u}\|\nabla f(w^{(t)})\|^2.$$
Next, we use the Cauchy–Schwarz inequality to bound the inner product of $\nabla f(w^{(t)})$ with the higher order terms. In the following, we use $\alpha$ to denote $\frac{1}{2\alpha_u d}$:
$$\nabla f(w^{(t)})^T\sum_{k=2}^{\infty}\frac{1}{k!}\left[\cdots\right]_j \leq \|\nabla f(w^{(t)})\|\sum_{k=2}^{\infty}\frac{1}{k!}\left\|\left[\sum_{i_1,\ldots,i_k=1}^{d}(-\eta)^k\,\partial_{i_1,\ldots,i_k}\phi^{-1}_j(\phi(w^{(t)}))\,\nabla f(w^{(t)})_{i_1}\cdots\nabla f(w^{(t)})_{i_k}\right]_j\right\|$$
$$\leq \|\nabla f(w^{(t)})\|\sum_{k=2}^{\infty}\frac{\alpha\,k!}{k!}\,\eta^k\left\|\left[\sum_{i_1,\ldots,i_k=1}^{d}|\nabla f(w^{(t)})_{i_1}|\cdots|\nabla f(w^{(t)})_{i_k}|\right]_j\right\|$$
$$= \|\nabla f(w^{(t)})\|\,\alpha\sum_{k=2}^{\infty}\sqrt{d}\,\eta^k\left(|\nabla f(w^{(t)})_1| + \cdots + |\nabla f(w^{(t)})_d|\right)^k = \|\nabla f(w^{(t)})\|\,\alpha\sum_{k=2}^{\infty}\eta^k\sqrt{d}\,\big|\langle(|\nabla f(w^{(t)})_1|,\ldots,|\nabla f(w^{(t)})_d|)^T, \mathbf{1}\rangle\big|^k$$
$$\leq \|\nabla f(w^{(t)})\|\,\alpha\sum_{k=2}^{\infty}\eta^k\sqrt{d}\,\|\nabla f(w^{(t)})\|^k(\sqrt{d})^k = \alpha\sum_{k=2}^{\infty}(\sqrt{d})^{k+1}\eta^k\|\nabla f(w^{(t)})\|^{k+1}$$
$$= \alpha(\sqrt{d})^3\eta^2\|\nabla f(w^{(t)})\|^3\sum_{k=0}^{\infty}(\sqrt{d})^k\eta^k\|\nabla f(w^{(t)})\|^k = \frac{\alpha(\sqrt{d})^3\eta^2\|\nabla f(w^{(t)})\|^3}{1 - \sqrt{d}\,\eta\|\nabla f(w^{(t)})\|}.$$
Hence we can select $\eta < \frac{1}{2\sqrt{d}\,\|\nabla f(w^{(t)})\|}$ such that
$$\frac{\alpha(\sqrt{d})^3\eta^2\|\nabla f(w^{(t)})\|^3}{1 - \sqrt{d}\,\eta\|\nabla f(w^{(t)})\|} \leq \frac{\alpha(\sqrt{d})^3\eta^2\|\nabla f(w^{(t)})\|^3}{\sqrt{d}\,\eta\|\nabla f(w^{(t)})\|} = d\alpha\eta\|\nabla f(w^{(t)})\|^2.$$
Thus, we have established the bound
$$\langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle \leq \left(-\frac{\eta}{\alpha_u} + d\alpha\eta\right)\|\nabla f(w^{(t)})\|^2 = -\frac{\eta}{2\alpha_u}\|\nabla f(w^{(t)})\|^2.$$
Proceeding analogously, we establish a bound on $\|w^{(t+1)} - w^{(t)}\|^2$:
$$\|w^{(t+1)} - w^{(t)}\|^2 \leq \left(\frac{\eta^2}{\alpha_l^2} + \alpha^2 d^2\eta^2\right)\|\nabla f(w^{(t)})\|^2 = \left(\frac{\eta^2}{\alpha_l^2} + \frac{\eta^2}{4\alpha_u^2}\right)\|\nabla f(w^{(t)})\|^2.$$
Putting the bounds together, we obtain
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \left(-\frac{\eta}{2\alpha_u} + \frac{L\eta^2}{2\alpha_l^2} + \frac{L\eta^2}{8\alpha_u^2}\right)\|\nabla f(w^{(t)})\|^2.$$
We select our learning rate to make the coefficient of $\|\nabla f(w^{(t)})\|^2$ negative, and thus by the PL-inequality (4) we have
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \left(-\frac{\eta}{2\alpha_u} + \frac{L\eta^2}{2\alpha_l^2} + \frac{L\eta^2}{8\alpha_u^2}\right)2\mu(f(w^{(t)}) - f(w^*))$$
$$\implies f(w^{(t+1)}) - f(w^*) \leq \left(1 - \frac{\mu\eta}{\alpha_u} + \frac{\mu L\eta^2}{\alpha_l^2} + \frac{\mu L\eta^2}{4\alpha_u^2}\right)(f(w^{(t)}) - f(w^*)).$$
Hence, $w^{(t)}$ converges linearly when
$$0 < 1 - \frac{\mu\eta}{\alpha_u} + \frac{\mu L\eta^2}{\alpha_l^2} + \frac{\mu L\eta^2}{4\alpha_u^2} < 1.$$
To show that the left hand side holds, we analyze when the discriminant is negative. Namely, the left side holds if
$$\frac{\mu^2}{\alpha_u^2} - \frac{4\mu L}{\alpha_l^2} - \frac{\mu L}{\alpha_u^2} < 0 \implies \frac{\mu}{\alpha_u^2} < \frac{4L}{\alpha_l^2} + \frac{L}{\alpha_u^2} \implies \mu < \frac{4L\alpha_u^2}{\alpha_l^2} + L.$$
Since $\mu < L$ by Lemma 3, this is always true.
The right hand side holds when $\eta < \frac{4\alpha_l^2}{5L\alpha_u}$, which holds by the assumption of the theorem, thereby completing the proof.

Note that if $f$ is non-negative and $\mu$-PL*, then
$$\eta^{(t)} \leq \frac{1}{2\sqrt{2Ld}\sqrt{f(w^{(0)})}} \leq \frac{1}{2\sqrt{2Ld}\sqrt{f(w^{(t)})}} \leq \frac{1}{2\sqrt{d}\,\|\nabla f(w^{(t)})\|}.$$
Hence, we can use the fixed learning rate $\eta = \min\left(\frac{4\alpha_l^2}{5L\alpha_u}, \frac{1}{2\sqrt{2Ld}\sqrt{f(w^{(0)})}}\right)$ in this setting." }, { "heading": "E CONDITIONS FOR MONOTONICALLY DECREASING GRADIENTS", "text": "As discussed in the remarks after Theorem 2, we can provide a fixed learning rate for linear convergence provided that the gradients are monotonically decreasing. As we show below, this requires special conditions on the PL constant $\mu$ and the smoothness constant $L$ of $f$.

Proposition 1. Suppose $f : \mathbb{R}^d \to \mathbb{R}$ is $L$-smooth and $\mu$-PL and $\phi : \mathbb{R}^d \to \mathbb{R}^d$ is an infinitely differentiable, analytic function with analytic inverse $\phi^{-1}$. If there exist $\alpha_l, \alpha_u > 0$ such that:
(a) $\alpha_l I \preceq J_\phi \preceq \alpha_u I$,
(b) $|\partial_{i_1,\ldots,i_k}\phi^{-1}_j(x)| \leq \frac{k!}{2\alpha_u d}$ for all $x \in \mathbb{R}^d$, $i_1,\ldots,i_k \in [d]$, $j \in [d]$, $k \geq 2$,
(c) $\frac{\mu}{L} > \frac{4\alpha_u^2 + \alpha_l^2}{4\alpha_u^2 + 2\alpha_l^2}$,
then generalized mirror descent converges linearly for any $\eta < \min\left(\frac{4\alpha_l^2}{5L\alpha_u}, \frac{1}{2\sqrt{d}\,\|\nabla f(w^{(0)})\|}\right)$.

Proof. Let $C = 1 - \frac{\mu\eta}{\alpha_u} + \frac{\mu L\eta^2}{\alpha_l^2} + \frac{\mu L\eta^2}{4\alpha_u^2}$. We follow exactly the proof of Theorem 2, except that at each timestep we need $C < \frac{\mu}{L}$ (which is less than 1 by Lemma 3) in order for the gradients to decrease monotonically, since
$$\|\nabla f(w^{(t+1)})\|^2 \leq 2L(f(w^{(t+1)}) - f(w^*)) \quad \text{(see Lemma 2)} \quad \leq 2LC(f(w^{(t)}) - f(w^*)) \leq \frac{LC}{\mu}\|\nabla f(w^{(t)})\|^2 \quad \text{as } f \text{ is } \mu\text{-PL}.$$
Hence, in order to have $\|\nabla f(w^{(t+1)})\|^2 < \|\nabla f(w^{(t)})\|^2$, we need $C < \frac{\mu}{L}$. Thus we select our learning rate such that
$$0 < 1 - \frac{\mu\eta}{\alpha_u} + \frac{\mu L\eta^2}{\alpha_l^2} + \frac{\mu L\eta^2}{4\alpha_u^2} < \frac{\mu}{L}.$$
For this system to have a solution, the discriminant of the quadratic in $\eta$ arising from the right hand inequality must be positive. In particular, we require
$$\frac{\mu^2}{\alpha_u^2} - 4\left(1 - \frac{\mu}{L}\right)\left(\frac{\mu L}{\alpha_l^2} + \frac{\mu L}{4\alpha_u^2}\right) > 0 \implies \frac{\mu}{L} > \frac{4\alpha_u^2 + \alpha_l^2}{4\alpha_u^2 + 2\alpha_l^2},$$
which completes the proof." }, { "heading": "F PROOF OF THEOREM 3", "text": "We repeat the theorem below for convenience.

Theorem. Suppose $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ where $f_i : \mathbb{R}^d \to \mathbb{R}$ are non-negative, $L_i$-smooth functions with $L = \sup_{i\in[n]} L_i$, and $f$ is $\mu$-PL*. Let $\phi : \mathbb{R}^d \to \mathbb{R}^d$ be an infinitely differentiable, analytic function with analytic inverse $\phi^{-1}$. SGMD is used to minimize $f$ according to the updates
$$\phi(w^{(t+1)}) = \phi(w^{(t)}) - \eta^{(t)}\nabla f_{i_t}(w^{(t)}),$$
where $i_t \in [n]$ is chosen uniformly at random and $\eta^{(t)}$ is an adaptive step size. If there exist $\alpha_l, \alpha_u > 0$ such that:
(a) $\alpha_l I \preceq J_\phi \preceq \alpha_u I$,
(b) $|\partial_{i_1,\ldots,i_k}\phi^{-1}_j(x)| \leq \frac{k!\,\mu}{2\alpha_u d L}$ for all $x \in \mathbb{R}^d$, $i_1,\ldots,i_k \in [d]$, $j \in [d]$, $k \geq 2$,
then SGMD converges linearly to a global minimum for any $\eta^{(t)} < \min\left(\frac{4\mu\alpha_l^2}{5L^2\alpha_u}, \frac{1}{2\sqrt{d}\,\max_i\|\nabla f_i(w^{(t)})\|}\right)$.

Proof. We follow the proof of Theorem 2. Namely, Lemma 4 implies that $f$ is $L$-smooth, and hence
$$f(w^{(t+1)}) - f(w^{(t)}) \leq \langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle + \frac{L}{2}\|w^{(t+1)} - w^{(t)}\|^2.$$
As before, we want to bound the two quantities on the right by $\|\nabla f(w^{(t)})\|^2$. Following the bounds from the proof of Theorem 2, provided $\eta^{(t)} < \frac{1}{2\sqrt{d}\,\|\nabla f_{i_t}(w^{(t)})\|}$, we have
$$\nabla f(w^{(t)})^T\sum_{k=2}^{\infty}\frac{1}{k!}\left[\sum_{l_1,\ldots,l_k=1}^{d}(-\eta)^k\,\partial_{l_1,\ldots,l_k}\phi^{-1}_j(\phi(w^{(t)}))\,\nabla f_{i_t}(w^{(t)})_{l_1}\cdots\nabla f_{i_t}(w^{(t)})_{l_k}\right]_j \leq \frac{\eta^{(t)}\mu}{2\alpha_u L}\|\nabla f(w^{(t)})\|\,\|\nabla f_{i_t}(w^{(t)})\|.$$
To remove the dependence of $\eta^{(t)}$ on $i_t$, we take $\eta^{(t)} < \frac{1}{2\sqrt{d}\,\max_i\|\nabla f_i(w^{(t)})\|}$. Since $f$ is $\mu$-PL* and $f_i$ is non-negative for all $i \in [n]$, $\|\nabla f_i(w^{(t)})\| \leq \sqrt{2Lf_i(w^{(t)})}$.
Thus, we can take
$$\eta^{(t)} < \frac{1}{2\sqrt{2dLn}\sqrt{f(w^{(t)})}} \leq \frac{1}{2\sqrt{d}\,\max_i\|\nabla f_i(w^{(t)})\|}.$$
This implies the following bounds:
$$\langle\nabla f(w^{(t)}), w^{(t+1)} - w^{(t)}\rangle \leq -\eta^{(t)}\nabla f(w^{(t)})^T J_\phi^{-1}(w^{(t)})\nabla f_{i_t}(w^{(t)}) + \frac{\eta^{(t)}\mu}{2\alpha_u L}\|\nabla f(w^{(t)})\|\,\|\nabla f_{i_t}(w^{(t)})\|,$$
$$\|w^{(t+1)} - w^{(t)}\|^2 \leq \left(\frac{\eta^{(t)2}}{\alpha_l^2} + \frac{\eta^{(t)2}}{4\alpha_u^2}\right)\|\nabla f_{i_t}(w^{(t)})\|^2.$$
Putting the bounds together, we obtain
$$f(w^{(t+1)}) - f(w^{(t)}) \leq -\eta^{(t)}\nabla f(w^{(t)})^T J_\phi^{-1}(w^{(t)})\nabla f_{i_t}(w^{(t)}) + \frac{\eta^{(t)}\mu}{2\alpha_u L}\|\nabla f(w^{(t)})\|\,\|\nabla f_{i_t}(w^{(t)})\| + \frac{L}{2}\left(\frac{\eta^{(t)2}}{\alpha_l^2} + \frac{\eta^{(t)2}}{4\alpha_u^2}\right)\|\nabla f_{i_t}(w^{(t)})\|^2$$
$$\leq -\eta^{(t)}\nabla f(w^{(t)})^T J_\phi^{-1}(w^{(t)})\nabla f_{i_t}(w^{(t)}) + \frac{\eta^{(t)}\mu}{2\alpha_u L}\,2L\sqrt{f(w^{(t)})f_{i_t}(w^{(t)})} + \frac{L}{2}\left(\frac{\eta^{(t)2}}{\alpha_l^2} + \frac{\eta^{(t)2}}{4\alpha_u^2}\right)\|\nabla f_{i_t}(w^{(t)})\|^2.$$
Now, taking the expectation over $i_t$, we obtain
$$\mathbb{E}[f(w^{(t+1)})] - f(w^{(t)}) \leq -\frac{\eta^{(t)}}{\alpha_u}\|\nabla f(w^{(t)})\|^2 + \frac{\eta^{(t)}\mu}{\alpha_u}\sqrt{f(w^{(t)})}\,\mathbb{E}\!\left[\sqrt{f_{i_t}(w^{(t)})}\right] + \left(\frac{L\eta^{(t)2}}{2\alpha_l^2} + \frac{L\eta^{(t)2}}{8\alpha_u^2}\right)\mathbb{E}[\|\nabla f_{i_t}(w^{(t)})\|^2]$$
$$\leq -\frac{\eta^{(t)}}{\alpha_u}\|\nabla f(w^{(t)})\|^2 + \frac{\eta^{(t)}\mu}{\alpha_u}f(w^{(t)}) + \left(\frac{L\eta^{(t)2}}{2\alpha_l^2} + \frac{L\eta^{(t)2}}{8\alpha_u^2}\right)\mathbb{E}[\|\nabla f_{i_t}(w^{(t)})\|^2]$$
$$\leq -\frac{2\mu\eta^{(t)}}{\alpha_u}f(w^{(t)}) + \frac{\eta^{(t)}\mu}{\alpha_u}f(w^{(t)}) + \left(\frac{L\eta^{(t)2}}{2\alpha_l^2} + \frac{L\eta^{(t)2}}{8\alpha_u^2}\right)\mathbb{E}[2L(f_{i_t}(w^{(t)}) - f_{i_t}(w^*))]$$
$$\leq \left(-\frac{\mu\eta^{(t)}}{\alpha_u} + \frac{L^2\eta^{(t)2}}{\alpha_l^2} + \frac{L^2\eta^{(t)2}}{4\alpha_u^2}\right)f(w^{(t)}),$$
where the second inequality follows from Jensen's inequality and the third from Lemma 2. Hence,
$$\mathbb{E}[f(w^{(t+1)})] \leq \left(1 - \frac{\mu\eta^{(t)}}{\alpha_u} + \frac{L^2\eta^{(t)2}}{\alpha_l^2} + \frac{L^2\eta^{(t)2}}{4\alpha_u^2}\right)f(w^{(t)}).$$
Now let $C = -\frac{\mu\eta^{(t)}}{\alpha_u} + \frac{L^2\eta^{(t)2}}{\alpha_l^2} + \frac{L^2\eta^{(t)2}}{4\alpha_u^2}$. Taking the expectation with respect to $i_t, i_{t-1}, \ldots, i_1$ yields
$$\mathbb{E}_{i_t,\ldots,i_1}[f(w^{(t+1)})] \leq (1 + C)\,\mathbb{E}_{i_t,\ldots,i_1}[f(w^{(t)})] = (1 + C)\,\mathbb{E}_{i_{t-1},\ldots,i_1}\!\left[\mathbb{E}_{i_t \mid i_{t-1},\ldots,i_1}[f(w^{(t)})]\right] = (1 + C)\,\mathbb{E}_{i_{t-1},\ldots,i_1}[f(w^{(t)})].$$
Hence, proceeding inductively, we conclude that
$$\mathbb{E}_{i_t,\ldots,i_1}[f(w^{(t+1)})] \leq (1 + C)^{t+1} f(w^{(0)}).$$
Thus, if $0 < 1 + C < 1$, we establish linear convergence. The left hand side is satisfied since $\mu < L$, and the right hand side is satisfied for $\eta^{(t)} < \frac{4\mu\alpha_l^2}{5L^2\alpha_u}$, which holds by the theorem's assumption, thereby completing the proof." }, { "heading": "G PROOF OF THEOREM 4", "text": "We restate the theorem below.

Theorem. Suppose $\phi : \mathbb{R}^d \to \mathbb{R}^d$ is an invertible, $\alpha_u$-Lipschitz function and that $f : \mathbb{R}^d \to \mathbb{R}$ is non-negative, $L$-smooth, and $\mu$-PL* on $\tilde{B} = \{x \;;\; \phi(x) \in B(\phi(w^{(0)}), R)\}$ with $R = \frac{2\sqrt{2Lf(w^{(0)})}\,\alpha_u^2}{\alpha_l\mu}$. If for all $x, y \in \mathbb{R}^d$ there exists $\alpha_l > 0$ such that
$$\langle\phi(x) - \phi(y), x - y\rangle \geq \alpha_l\|x - y\|^2,$$
then:
(1) There exists a global minimum $w^{(\infty)} \in \tilde{B}$.
(2) GMD converges linearly to $w^{(\infty)}$ for $\eta = \frac{\alpha_l}{L}$.
(3) If $w^* = \mathrm{argmin}_{w \in \tilde{B}\,;\,f(w)=0}\|\phi(w) - \phi(w^{(0)})\|$, then $\|\phi(w^*) - \phi(w^{(\infty)})\| \leq 2R$.

Proof. The proof follows from the proofs of Lemma 1, Theorem 1, and Theorem 4.2 of Liu et al. (2020). We proceed by strong induction. Let $\kappa = \frac{L\alpha_u^2}{\mu\alpha_l^2}$. At timestep 0 we trivially have $w^{(0)} \in \tilde{B}$ and $f(w^{(0)}) \leq f(w^{(0)})$. At timestep $t$, we assume $w^{(0)}, w^{(1)}, \ldots, w^{(t)} \in \tilde{B}$ and $f(w^{(i)}) \leq (1 - \kappa^{-1})f(w^{(i-1)})$ for $i \in [t]$. Then, at timestep $t+1$, from the proofs of Lemma 1 and Theorem 1 we have
$$f(w^{(t+1)}) \leq (1 - \kappa^{-1})f(w^{(t)}).$$
Next, we need to show that $w^{(t+1)} \in \tilde{B}$. We have
$$\|\phi(w^{(t+1)}) - \phi(w^{(0)})\| = \left\|\sum_{i=0}^{t}-\eta\nabla f(w^{(i)})\right\| \leq \eta\sum_{i=0}^{t}\|\nabla f(w^{(i)})\| \quad \text{(triangle inequality)}$$
$$\leq \eta\sqrt{\frac{2L\alpha_u^2}{\alpha_l^2}}\sum_{i=0}^{t}\sqrt{f(w^{(i)}) - f(w^{(i+1)})} \quad (7)$$
$$\leq \eta\sqrt{\frac{2L\alpha_u^2}{\alpha_l^2}}\sum_{i=0}^{t}\sqrt{f(w^{(i)})} \leq \eta\sqrt{2L}\,\frac{\alpha_u}{\alpha_l}\sum_{i=0}^{t}\sqrt{(1-\kappa^{-1})^i}\sqrt{f(w^{(0)})} = \eta\sqrt{2Lf(w^{(0)})}\,\frac{\alpha_u}{\alpha_l}\sum_{i=0}^{t}(1-\kappa^{-1})^{\frac{i}{2}}$$
$$\leq \eta\sqrt{2Lf(w^{(0)})}\,\frac{\alpha_u}{\alpha_l}\cdot\frac{1}{1 - \sqrt{1-\kappa^{-1}}} \leq \eta\sqrt{2Lf(w^{(0)})}\,\frac{\alpha_u}{\alpha_l}\cdot\frac{2}{\kappa^{-1}} = \frac{\alpha_l}{L}\sqrt{2Lf(w^{(0)})}\,\frac{\alpha_u}{\alpha_l}\cdot\frac{2\alpha_u L}{\alpha_l\mu} = \frac{2\sqrt{2Lf(w^{(0)})}\,\alpha_u^2}{\alpha_l\mu} = R.$$
The identity in (7) follows from the proof of $f(w^{(t+1)}) \leq (1 - \kappa^{-1})f(w^{(t)})$.
Namely,
$$f(w^{(t+1)}) - f(w^{(t)}) \leq -\frac{L}{2\alpha_u^2}\|-\eta\nabla f(w^{(t)})\|^2 \implies \|\nabla f(w^{(t)})\| \leq \frac{1}{\eta}\sqrt{\frac{2\alpha_u^2}{L}}\sqrt{f(w^{(t)}) - f(w^{(t+1)})} \implies \|\nabla f(w^{(t)})\| \leq \sqrt{\frac{2L\alpha_u^2}{\alpha_l^2}}\sqrt{f(w^{(t)}) - f(w^{(t+1)})},$$
where the last implication uses $\eta = \frac{\alpha_l}{L}$. Hence we conclude that $w^{(t+1)} \in \tilde{B}$, and the induction is complete.

In the case that $\phi^{(t)}$ is time-dependent, we establish a similar convergence result by assuming that $\left\|\sum_{i=1}^{\infty}\phi^{(i)}(w^{(i)}) - \phi^{(i-1)}(w^{(i)})\right\| = \delta < \infty$. Additionally, if $\alpha_u^{(t)}$ has a uniform upper bound and $\alpha_l^{(t)}$ has a uniform lower bound, then
$$\|\phi^{(t)}(w^{(t+1)}) - \phi^{(0)}(w^{(0)})\| = \|\phi^{(t)}(w^{(t+1)}) - \phi^{(t)}(w^{(t)}) + \phi^{(t)}(w^{(t)}) - \phi^{(t-1)}(w^{(t)}) + \phi^{(t-1)}(w^{(t)}) - \phi^{(t-1)}(w^{(t-1)}) + \cdots + \phi^{(0)}(w^{(1)}) - \phi^{(0)}(w^{(0)})\|$$
$$\leq \left\|\sum_{i=0}^{t}\phi^{(i)}(w^{(i+1)}) - \phi^{(i)}(w^{(i)})\right\| + \left\|\sum_{i=1}^{t}\phi^{(i)}(w^{(i)}) - \phi^{(i-1)}(w^{(i)})\right\| \leq R + \delta.$$
Hence we would conclude that $\phi^{(t)}(w^{(t+1)}) \in B(\phi^{(0)}(w^{(0)}), R + \delta)$." }, { "heading": "H PROOF OF COROLLARY 1 AND COROLLARY 2", "text": "We repeat Corollary 1 below.

Corollary. Let $f : \mathbb{R}^d \to \mathbb{R}$ be an $L$-smooth function that is $\mu$-PL. Let $\alpha_l^{(t)2} = \min_{i\in[d]} G^{(t)}_{i,i}$ and $\alpha_u^{(t)2} = \max_{i\in[d]} G^{(t)}_{i,i}$. If $\lim_{t\to\infty}\alpha_l^{(t)}/\alpha_u^{(t)} \neq 0$, then Adagrad converges linearly for the adaptive step size $\eta^{(t)} = \alpha_l^{(t)}/L$.

Proof. By definition of $G^{(t)}$, we have:
(1) $\alpha_l^{(t)2} = \min_{i\in[d]} G^{(t)}_{i,i}$,
(2) $\alpha_u^{(t)2} = \max_{i\in[d]} G^{(t)}_{i,i}$.
From the proof of Theorem 1, using learning rate $\eta^{(t)} = \alpha_l^{(t)}/L$ at timestep $t$ gives
$$f(w^{(t+1)}) - f(w^*) \leq \left(1 - \frac{\mu\alpha_l^{(t)2}}{L\alpha_u^{(t)2}}\right)(f(w^{(t)}) - f(w^*)).$$
Let $\kappa^{(t)} = \frac{\mu\alpha_l^{(t)2}}{L\alpha_u^{(t)2}}$. Although $(1 - \kappa^{(t)}) < 1$ for all $t$, we need to ensure that $\prod_{i=0}^{\infty}(1 - \kappa^{(i)}) = 0$ (otherwise we would not get convergence to a global minimum). Using the assumption $\lim_{t\to\infty}\alpha_l^{(t)}/\alpha_u^{(t)} \neq 0$, let $\lim_{t\to\infty}(1 - \kappa^{(t)}) = 1 - c < 1$. Then, by the definition of the limit, for $0 < \epsilon < c$ there exists $N$ such that for $t > N$, $|\kappa^{(t)} - c| < \epsilon$. Hence, letting $c^* = \min\left(c - \epsilon, \min_{t\in\{0,1,\ldots,N\}}\kappa^{(t)}\right)$ implies that $(1 - \kappa^{(t)}) < 1 - c^*$ for all timesteps $t$. Thus,
$$\prod_{i=0}^{\infty}(1 - \kappa^{(i)}) < \prod_{i=0}^{\infty}(1 - c^*) = 0,$$
and Adagrad converges linearly to a global minimum.

We present Corollary 2 below.

Corollary 2. Let $f : \mathbb{R}^d \to \mathbb{R}$ be an $L$-smooth function that is $\mu$-PL. Let $\alpha_l^{(t)2} = \min_{i\in[d]} G^{(t)}_{i,i}$. Then Adagrad converges linearly for the adaptive step size $\eta^{(t)} = \alpha_l^{(t)}/L$, or the fixed step size $\eta = \alpha_l^{(0)}/L$, if $\frac{\alpha_l^{(0)2}}{2L(f(w^{(0)}) - f(w^*))} > \frac{L}{\mu}$.

Proof. By definition of $G^{(t)}$, we have:
(1) $\alpha_l^{(t)2} = \min_{i\in[d]} G^{(t)}_{i,i}$,
(2) $\alpha_u^{(t)2} = \max_{i\in[d]} G^{(t)}_{i,i}$.
In particular, we can choose $\alpha_l = \alpha_l^{(0)}$ uniformly. We now need to ensure that $\alpha_u^{(t)}$ does not diverge. We prove this by strong induction, showing that $\alpha_u^{(t)2} \leq S$ uniformly for some $S > 0$. The base case holds by Lemma 2, since
$$\alpha_u^{(0)2} \leq \|\nabla f(w^{(0)})\|^2 = S.$$
Now assume $\alpha_u^{(i)2} < S$ for $i \in \{0, 1, \ldots, t-1\}$. Then
$$\alpha_u^{(t)2} \leq \sum_{i=0}^{t}\|\nabla f(w^{(i)})\|^2 \leq \sum_{i=0}^{t}2L(f(w^{(i)}) - f(w^*)) \quad \text{by Lemma 2}$$
$$\leq 2L(f(w^{(0)}) - f(w^*))\sum_{i=0}^{t-1}\prod_{j=0}^{i}\left(1 - \frac{\mu\alpha_l^{(j)2}}{L\alpha_u^{(j)2}}\right) \leq 2L(f(w^{(0)}) - f(w^*))\sum_{i=0}^{t-1}\prod_{j=0}^{i}\left(1 - \frac{\mu\alpha_l^{(0)2}}{LS}\right)$$
$$\leq 2L(f(w^{(0)}) - f(w^*))\cdot\frac{1}{1 - 1 + \frac{\mu\alpha_l^{(0)2}}{LS}} = 2L(f(w^{(0)}) - f(w^*))\cdot\frac{LS}{\mu\alpha_l^{(0)2}} < S \quad \text{by assumption}.$$
Hence, by induction, $\alpha_u^{(t)}$ is bounded uniformly for all timesteps $t$." }, { "heading": "I PROOF OF COROLLARY 3", "text": "We present the corollary below.

Corollary 3. Suppose $\psi$ is an $\alpha_l$-strongly convex function and that $\nabla\psi$ is $\alpha_u$-Lipschitz. Let $D_\psi(x, y) = \psi(x) - \psi(y) - \nabla\psi(y)^T(x - y)$ denote the Bregman divergence for $x, y \in \mathbb{R}^d$.
If $f : \mathbb{R}^d \to \mathbb{R}$ is non-negative, $L$-smooth, and $\mu$-PL* on $\tilde{B} = \{x \;;\; \nabla\psi(x) \in B(\nabla\psi(w^{(0)}), R)\}$ with $R = \frac{2\sqrt{2Lf(w^{(0)})}\,\alpha_u^2}{\alpha_l\mu}$, then:
(1) There exists a global minimum $w^{(\infty)} \in \tilde{B}$ such that $D_\psi(w^{(\infty)}, w^{(0)}) \leq \frac{R^2}{2\alpha_l}$.
(2) Mirror descent with potential $\psi$ converges linearly to $w^{(\infty)}$ for $\eta = \frac{\alpha_l}{L}$.
(3) If $w^* = \mathrm{argmin}_{\{w\,;\,f(w)=0\}} D_\psi(w, w^{(0)})$, then $D_\psi(w^*, w^{(\infty)}) \leq \frac{\alpha_u R^2}{\alpha_l^3} + \frac{R^2}{\alpha_l}$.

Proof. The existence and linear convergence claims follow immediately from Theorem 4. All that remains is to show that $D_\psi(w^{(\infty)}, w^{(0)}) \leq \frac{R^2}{2\alpha_l}$. As $\psi$ is $\alpha_l$-strongly convex, we have
$$\psi(w^{(\infty)}) \leq \psi(w^{(0)}) + \langle\nabla\psi(w^{(0)}), w^{(\infty)} - w^{(0)}\rangle + \frac{1}{2\alpha_l}\|\nabla\psi(w^{(\infty)}) - \nabla\psi(w^{(0)})\|^2 \quad \text{by Lemma 5}$$
$$\implies D_\psi(w^{(\infty)}, w^{(0)}) \leq \frac{1}{2\alpha_l}\|\nabla\psi(w^{(\infty)}) - \nabla\psi(w^{(0)})\|^2 \leq \frac{R^2}{2\alpha_l}.$$
Now let $w^* = \mathrm{argmin}_{\{w\,;\,f(w)=0\}} D_\psi(w, w^{(0)})$, so that $D_\psi(w^*, w^{(0)}) < \frac{R^2}{2\alpha_l}$ by definition. Then
$$D_\psi(w^*, w^{(\infty)}) \leq \frac{1}{2\alpha_l}\|\nabla\psi(w^*) - \nabla\psi(w^{(\infty)})\|^2 \leq \frac{1}{2\alpha_l}\left(2\|\nabla\psi(w^*) - \nabla\psi(w^{(0)})\|^2 + 2\|\nabla\psi(w^{(0)}) - \nabla\psi(w^{(\infty)})\|^2\right)$$
$$\leq \frac{\alpha_u}{\alpha_l}\|w^* - w^{(0)}\|^2 + \frac{R^2}{\alpha_l} \leq \frac{\alpha_u}{\alpha_l}\cdot\frac{2}{\alpha_l}D_\psi(w^*, w^{(0)}) + \frac{R^2}{\alpha_l} \quad \text{by Definition 3} \quad \leq \frac{\alpha_u R^2}{\alpha_l^3} + \frac{R^2}{\alpha_l}.$$" }, { "heading": "J EXPERIMENTS ON OVER-PARAMETERIZED NEURAL NETWORKS", "text": "Below, we present experiments in which we apply the learning rate given by Corollary 1 to over-parameterized neural networks. Since the main difficulty is estimating the parameter $L$ in neural networks, we instead provide a crude approximation for $L$ by setting $L^{(t)} = \frac{0.99\,\|\nabla f(w^{(t)})\|^2}{2f(w^{(t)})}$. The intuition for this approximation comes from Lemma 2. While there are no guarantees that this approximation yields linear convergence according to our theory, Figure 2 suggests empirically that it provides convergence. Moreover, this approximation allows us to compute our adaptive learning rate in practice.

Code for all experiments is available at: https://anonymous.4open.science/r/cef30260-473d-4116-bda1-1debdcc4e00a/

[Figure 2 (convergence of Adagrad in over-parameterized neural networks): the approximation $L^{(t)} = \frac{0.99\,\|\nabla f(w^{(t)})\|^2}{2f(w^{(t)})}$ leads to convergence for Adagrad in the noisy linear regression setting (60 examples in 50 dimensions, with uniform noise applied to the labels). (a) A 1-hidden-layer network with Leaky ReLU activation (Xu et al., 2015) and 100 hidden units. (b) A 1-hidden-layer network with $x + \sin(x)$ activation and 100 hidden units. All networks were trained using a single Titan Xp, but can be trained on a laptop as well.]" } ]
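For completeness, here is a hypothetical PyTorch version of the Appendix J recipe above: one Adagrad step whose rate uses the crude estimate $L^{(t)} = 0.99\,\|\nabla f(w^{(t)})\|^2 / (2f(w^{(t)}))$ in place of the true smoothness constant. The helper name, state dictionary, and flattening scheme are ours; the released code at the link above is the authoritative implementation.

```python
import torch

def adagrad_step(params, state, loss):
    """One Adagrad step with rate eta_t = alpha_l^(t) / L_t, where L_t is the
    Appendix J estimate 0.99 * ||grad||^2 / (2 * loss) (motivated by Lemma 2)."""
    grads = torch.autograd.grad(loss, params)
    g = torch.cat([gr.reshape(-1) for gr in grads])
    state["G"] = state.get("G", torch.zeros_like(g)) + g ** 2
    L_t = 0.99 * g.pow(2).sum() / (2.0 * loss.detach())
    eta = state["G"].min().sqrt() / L_t              # alpha_l^(t) / L_t
    step = eta * g / (state["G"].sqrt() + 1e-12)     # diagonal preconditioning
    with torch.no_grad():
        offset = 0
        for p in params:
            p -= step[offset:offset + p.numel()].view_as(p)
            offset += p.numel()
```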
2020
null
SP:2eec02429adee2ab91752629c85df9f1463e54d8
[ "This paper addresses the problem of unsupervised learning of class representation using data augmentation. Its key idea is to encourage the learned representations to have low MI while maximizing the original augmentation-driven MI objective. It reports the improved performance for the benchmarks of Ji et al. 2019 – classification on some easy datasets (e.g. CIFAR-10, CINIC-10, SVHN and STL-10)." ]
Unsupervised learning of categorical representations using data augmentations appears to be a promising approach and has proven useful for finding suitable representations for downstream tasks. However current state-of-the-art methods require preprocessing (e.g. Sobel edge detection) to work. We introduce a mutual information minimization strategy for unsupervised learning from augmentations, that prevents learning from locking on to easy to find, yet unimportant, representations at the expense of more informative ones requiring more complex processing. We demonstrate specifically that this process learns representations which capture higher mutual information between augmentations, and demonstrate that these representations are better suited to the downstream exemplar task of clustering. We obtain substantial accuracy improvements on CIFAR-10, CIFAR-100-20, and SVHN.
[]
[ { "authors": [ "Philip Bachman", "Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "arXiv preprint arXiv:1906.00910,", "year": 2019 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "MINE: mutual information neural estimation", "venue": "In Proceedings of the thirty-fifth International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth International Conference on Artificial Intelligence andSstatistics,", "year": 2011 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "RandAugment: Practical data augmentation with no separate search", "venue": null, "year": 1909 }, { "authors": [ "Luke N Darlow", "Elliot J Crowley", "Antreas Antoniou", "Amos J Storkey" ], "title": "CINIC-10 is not ImageNet or CIFAR-10", "venue": "arXiv preprint arXiv:1810.03505,", "year": 2018 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Alexey Dosovitskiy", "Philipp Fischer", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with exemplar convolutional neural networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2015 }, { "authors": [ "Maziar Moradi Fard", "Thibaut Thonet", "Eric Gaussier" ], "title": "Deep k-means: Jointly clustering with k-means and learning representations", "venue": "arXiv preprint arXiv:1806.10069,", "year": 2018 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Philip Haeusser", "Johannes Plapp", "Vladimir Golkov", "Elie Aljalbout", "Daniel Cremers" ], "title": "Associative deep clustering: training a classification network with no labels", "venue": "In German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In Proceedings of fourteenth European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Harold W Kuhn" ], "title": "The hungarian method for the assignment problem", "venue": "Naval research logistics quarterly,", "year": 1955 }, { "authors": [ "Sungbin Lim", "Ildoo Kim", "Taesup Kim", "Chiheon Kim", "Sungwoong Kim" ], "title": "Fast AutoAugment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Jianlong Wu", "Keyu Long", "Fei Wang", "Chen Qian", "Cheng Li", "Zhouchen Lin", "Hongbin Zha" ], "title": "Deep comprehensive correlation mining for image 
clustering", "venue": "In International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "It is very expensive to label a dataset with respect to a particular task. Consider the alternative where a user, instead of labelling a dataset, specifies a simple set of class-preserving transformations or ‘augmentations’. For example, lighting changes will not change a dog into a cat. Is it possible to learn a model that produces a useful representation by leveraging a set of such augmentations? This representation would need to be good at capturing salient information about the data, and enable downstream tasks to be done efficiently. If the representation were a discrete labelling which groups the dataset into clusters, an obvious choice of downstream task is unsupervised clustering. Ideally the clusters should match direct labelling, without ever having been learnt on explicitly labelled data.\nUsing data augmentations to drive unsupervised representation learning for images has been explored by a number of authors (Dosovitskiy et al., 2014; 2015; Bachman et al., 2019; Chang et al., 2017; Wu et al., 2019; Ji et al., 2019; Cubuk et al., 2019). These approaches typically involve learning neural networks that map augmentations of the same image to similar representations, which is reasonable since variances across many common augmentations often align with the invariances we would require.\nA number of earlier works target maximising mutual information (MI) between augmentations (van den Oord et al., 2018; Hjelm et al., 2019; Wu et al., 2019; Ji et al., 2019; Bachman et al., 2019). Targetting high MI between representations computed from distinct augmentations enables learning representations that capture the invariances induced by the augmentations. We are interested in a particularly parsimonious representation: a discrete labelling of the data. This labelling can be seen as a clustering (Ji et al., 2019) procedure, where MI can be computed and assessment can be done directly using the learned labelling, as opposed to via an auxiliary network trained posthoc." }, { "heading": "1.1 SUBOPTIMAL MUTUAL INFORMATION MAXIMISATION", "text": "We argue and show that the MI objective is not maximised effectively in existing work due to the combination of:\n1. Greedy optimisation algorithms used to train neural networks, such as stochastic gradient descent (SGD) that potentially target local optima; and\n2. A limited set of data augmentations that can result in the existence of multiple local optima to the MI maximisation objective.\nSGD is greedy in the sense that early-found high-gradient features can dominate and so networks will tend to learn easier-to-compute locally-optimal representations (for example, one that can be computed using fewer neural network layers) over those that depend on complex features.\nBy way of example, in natural images, average colour is an easy-to-compute characteristic, whereas object type is not. If the augmentation strategy preserves average colour, then a reasonable mapping need only compute colour information, and high MI between learned image representations will be obtained. This result is suboptimal in the sense that a hypothetical higher MI optima exists that also captures semantic information, assuming the model has sufficient capacity to learn and represent this. 
The conceivable existence of many such local optima coupled with greedy optimisation presents a challenge: how can we leverage powerful image augmentation-driven MI objectives while avoiding greedily-found local optima?\nDealing with greedy solutions Heuristic solutions, such as as Sobel edge-detection (Caron et al., 2018; Ji et al., 2019) as a pre-processing step, have been suggested to remove/alter the features in images that may cause trivial representations to be learned. This is a symptomatic treatment and not a solution. In the work presented herein, we acknowledge that greedy SGD can get stuck in local optima of the MI maximisation objective because of limited data augmentations. Instead of trying to prevent a greedy solution, our technique lets a model learn this representation, but also requires it to learn an additional distinct representation. Specifically, we minimise the MI between these two representations so that the latter cannot rely on the same features. We extend this idea by adding representations, each time requiring the latest to be distinct from all previous representations.\nDownstream task: clustering For this work, our focus is on finding higher MI representations; we then assess the downstream capability on the ground truth task of image classification, meaning that we can either (1) learn a representation that must be ‘decoded’ via an additional learning step, or (2) produce a discrete labelling that requires no additional learning. Clustering methods offer a direct comparison and require no labels for learning a mapping from the learned representation to class labels. Instead, labels are only required to assign groups to appropriate classes and no learning is done using these. Our comparisons are with respect to clustering methods." }, { "heading": "1.2 CONTRIBUTIONS", "text": "Learning a set of representations by encouraging them to have low MI, while still maximising the original augmentation-driven MI objective for each representation, is the core idea behind Deep Hierarchical Object Grouping (DHOG). We define a mechanism to produce a set of hierarchicallyordered solutions (in the sense of easy-to-hard orderings, not tree structures). DHOG is able to better maximise the original MI objective between augmentations since each representation must correspond to a unique local optima. Our contributions are:\n1. We demonstrate that current methods do not effectively maximise the MI objective1 because greedy stochastic gradient descent (SGD) typically results in suboptimal local optima. To mitigate for this problem, we introducing DHOG: a robust neural network image grouping method to learn diverse and hierarchically arranged sets of discrete image labellings (Section 3) by explicitly modelling, accounting for, and avoiding spurious local optima, requiring only simple data augmentations, and needing no Sobel edge detection.\n2. We show a marked improvement over the current state-of-the-art for standard benchmarks in end-to-end image clustering for CIFAR-10, CIFAR-100-20 (a 20-way class grouping of CIFAR-100, and SVHN; we set a new accuracy benchmarks on CINIC-10; and show the utility of our method on STL-10 (Section 4).\nTo be clear, DHOG still learns to map data augmentations to similar representations as this is imperative to the learning process. 
The difference is that DHOG enables a number of intentionally distinct data labellings to be learned, arranged hierarchically in terms of source feature complexity.\n1We show this by finding higher mutual information solutions using DHOG, rather than by any analysis of the solutions themselves." }, { "heading": "2 RELATED WORK", "text": "The idea of MI maximisation for representation learning is called the infoMAX principle (Linsker, 1988; Tschannen et al., 2019). Contrastive predictive coding (van den Oord et al., 2018) (CPC) models a 2D latent space using an autoregressive model and defines a predictive setup to maximise MI between distinct spatial locations. Deep InfoMAX (Hjelm et al., 2019) (DIM) does not maximise MI across a set of data augmentations, but instead uses mutual information neural estimation (Belghazi et al., 2018) and negative sampling to balance maximising MI between global representations and local representations. Augmented multiscale Deep InfoMAX (Bachman et al., 2019) (AMDIM) incorporates MI maximisation across data augmentations and multiscale comparisons.\nClustering approaches are more directly applicable for comparison with DHOG because they explicitly learn a discrete labelling. The authors of deep embedding for clustering (DEC) (Xie et al., 2016) focused their attention on jointly learning an embedding suited to clustering and a clustering itself. They argued that the notion of distance in the feature space is crucial to a clustering objective. Joint unsupervised learning of deep representations and image clusters (JULE) (Yang et al., 2016) provided supervisory signal for representation learning. Some methods (Ghasedi Dizaji et al., 2017; Fard et al., 2018) employ autoencoder architectures along with careful regularisation of cluster assignments to (1) ensure sufficient information retention, and (2) avoid cluster degeneracy (i.e., mapping all images to the same class).\nDeep adaptive clustering (Chang et al., 2017) (DAC) recasts the clustering problem as binary pairwise classification, pre-selecting comparison samples via feature cosine distances. A constraint on the DAC system allows for a one-hot encoding that avoids cluster degeneracy. Another mechanism for dealing with degeneracy is to use a standard clustering algorithm, such asK-means to iteratively group on learned features. This approach is used by DeepCluster (Caron et al., 2018).\nAssociative deep clustering (ADC) (Haeusser et al., 2018) uses the idea that associations in the embedding space are useful for learning. A network was learned to associate data with (pseudolabelled) centroids. They leveraged augmentations by encouraging samples to output similar cluster probabilities.\nDeep comprehensive correlation mining (Wu et al., 2019) (DCCM) constructs a sample correlation graph for pseudo-labels and maximises the MI between augmentations, and the MI between local and global features for each augmentation. While many of the aforementioned methods estimate MI in some manner, invariant information clustering (Ji et al., 2019) (IIC) directly defines the MI using the c-way softmax output (i.e., probability of belong to class c), and maximises this over data augmentations to learn clusters. They effectively avoid degenerate solutions because MI maximisation implicitly targets marginal entropy. We use the same formulation for MI in Section 3." }, { "heading": "3 METHOD", "text": "Figure 1 shows the DHOG architecture. 
DHOG is an approach for obtaining jointly trained multilevel representations as discrete labellings, arranged in a simple-to-complex hierarchy, and computed by separate ‘heads’. A head is an unit that computes a multivariate class probability vector. By requiring low MI between heads, a diversity of solutions to the MI maximisation objective can be found. The head that best maximises MI between augmentations typically aligns better with a ground truth task that also relies on complex features that augmentations are designed to preserve.\nFigure 1 demonstrates the DHOG architecture and training principles. There are shared model weights ( 2 : ResNet blocks 1, 2, and 3) and head-specific weights (the MLP layers and 3 : ResNet blocks 4 to 8). For the sake of brevity, we abuse notation and use MI(z, z′) between labelling probability vectors as an overloaded shorthand for the mutual information MI(c, c′) between the labelling random variables c and c′ that have probability vectors z and z′ respectively.\nAny branch of the DHOG architecture ( 1 to any zi) can be regarded as a single neural network. These are trained to maximise the MI between the label variables at each head for different augmentations; i.e., between label variables with probability vectors zi(x) and zi(x′) for augmentations x and x′. Four augmentations are shown at 1 . The MI is maximised pairwise between all pairs, at 4 . This process can be considered pulling the mapped representations together.\nblock is repeated k − 3 times (k = 8 here). 1 Augmentations of each image, xa...d, are separately processed by the network. 2 Each shallow ResNet block (1 . . . 3) constitutes shared computation for deeper blocks, while also computing separate probability vectors, z1 . . . z3. Each zi is viewed as the probability for each outcome of the random variable ci that makes a discrete labelling choice. 3 The deepest ResNet blocks compute further z>3. 4 The network is trained by maximising the MI between allocations ci from all data augmentations, and 5 separately for each node i, minimising the MI between ci and c<i for the same data augmentation. 6 This is implemented by stopping gradients such that they are not back-propagated for later computation paths (red crosses).\nFollowing IIC (Ji et al., 2019), we compute the MI directly from the label probability vectors within a minibatch. Let zi, z′i denote the random probability vectors at head i associated with sampling a data item and its augmentations, and passing those through the network. Then we can compute the mutual MI between labels associated with each augmentation using\nMIaug(ci, c ′ i) = Tr(E[zi(z ′ i) T ]T log(E[zi(z ′ i) T ]))\n− E[zTi ] logE[zi]− E[(z′i)T ] logE[(z′i)], (1)\nwhere Tr is the matrix trace, logarithms are computed element-wise, and expectations are over data samples and augmentations of each sample. In practice we compute an empirical estimate of this MI based on samples from a minibatch." }, { "heading": "3.1 DISTINCT HEADS", "text": "Each clustering head in DHOG is encouraged to compute unique solutions via cross-head MI minimisation. For a minibatch of images, the labelling from any head is optimised to have low MI with other heads’ labellings. We assume multiple viable labellings because of natural patterns in the data. By encouraging low MI between heads, these must capture different patterns in the data.\nSimple concepts (brightness, colour, etc.) are axes of variation that are reasonable and easy to group by. 
" }, { "heading": "3.1 DISTINCT HEADS", "text": "Each clustering head in DHOG is encouraged to compute unique solutions via cross-head MI minimisation. For a minibatch of images, the labelling from any head is optimised to have low MI with the other heads’ labellings. We assume multiple viable labellings exist because of natural patterns in the data. By encouraging low MI between heads, the heads must capture different patterns in the data.

Simple concepts (brightness, colour, etc.) are axes of variation that are reasonable and easy to group by. Groupings according to complex features (e.g., object type) typically require more processing, and greedy optimisation may not discover these groupings without explicit encouragement. Unfortunately, the easier-to-compute groupings typically correspond poorly to downstream tasks. Without a mechanism to explore viable patterns in the data, greedy optimisation will avoid finding them.

Cross-head MI minimisation We address suboptimal MI maximisation by encouraging unique solutions at sequential heads ($z_1 \ldots z_8$ in Figure 1), which must then rely on different features. Let $z_i, z_j$ denote the random probability vectors from two heads. We can minimise the MI across heads:
$$\mathrm{MI}_{\mathrm{head}}(c_i, c_j) = \mathrm{Tr}\!\left(\mathbb{E}[z_i z_j^T]^T\log\mathbb{E}[z_i z_j^T]\right) - \mathbb{E}[z_i^T]\log\mathbb{E}[z_i] - \mathbb{E}[z_j^T]\log\mathbb{E}[z_j]. \quad (2)$$
Logarithms are element-wise, and expectations are over the data and augmentations. Note that $z_i$ and $z_j$ are each computed from the same data augmentation. We estimate this quantity from each minibatch. This process can be thought of as pushing the heads apart. We note that the Tr operator is commutative; the hierarchical arrangement is instead accomplished through gradient stopping.

Hierarchical arrangement Requiring $k$ heads (where $k = 8$ here) to produce unique representations is not necessarily the optimal way to account for suboptimal MI maximisation. Instead, we encourage a simple-to-complex hierarchical structure over the heads, defined according to the cross-head comparisons made using Equation 2. The hierarchy provides a reference mechanism for producing diverse labellings of the data.

Figure 1 shows 8 heads, 3 of which are computed from early residual blocks of the network. The hierarchical arrangement is induced by only updating head-specific weights according to comparisons made with earlier heads. In practice this is done by stopping the appropriate gradients ((6) and all red crosses in Figure 1). For example, when computing the MI between $z_{i=6}$ and those from $z_{i\neq 6}$, gradient back-propagation is allowed when $i < 6$ but not when $i > 6$. In other words, when learning to produce $z_{i=6}$, the network is encouraged to produce a head that is distinct from heads lower in the hierarchy. Extending this concept for $i = 1\ldots 8$ gives rise to the hierarchy. Initial experiments showed that if this routine was ignored, the gains were reduced." }, { "heading": "3.2 OBJECTIVE", "text": "The part of the objective producing high-MI representations by ‘pulling’ together discrete labellings from augmentations is Equation 1 normalised over the $k$ heads:
$$\mathrm{MI}_{\mathrm{pull}} = \frac{1}{k}\sum_{i=1}^{k}\mathrm{MI}_{\mathrm{aug}}(c_i, c'_i). \quad (3)$$
The quantity used to ‘push’ heads apart is Equation 2 normalised per head:
$$\mathrm{MI}_{\mathrm{push}} = \sum_{i=1}^{k}\frac{1}{i}\sum_{\substack{j=1 \\ j\neq i}}^{i}\mathrm{MI}_{\mathrm{head}}(c_i, c_j), \quad (4)$$
where each cross-head MI term is scaled by the head index, $i$, since that directly tracks the number of comparisons made for each head. The divisor $i$ scales up the hierarchy, so that the total $\mathrm{MI}_{\mathrm{head}}$ associated with any head is scaled according to its number of comparisons; this ensures that the head-specific weight updates are all equally important. The final optimisation objective is
$$\theta^* = \mathrm{argmax}_{\theta}\;\mathrm{MI}_{\mathrm{pull}} - \alpha\,\mathrm{MI}_{\mathrm{push}}, \quad (5)$$
where $\theta$ are the network parameters and $\alpha$ is a hyper-parameter we call the cross-head MI-minimisation coefficient. For an ablation study we set $\alpha = 0$ in Section 4.
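The following sketch combines Equations (3) to (5) with the gradient stopping described above, reusing the mi_aug helper from the previous sketch (Equation (2) has the same functional form, applied to two heads' outputs on the same augmentation). The list structure, names, and exact detach placement are our illustrative choices rather than the released implementation.

```python
import torch

def dhog_loss(heads_z, heads_z_prime, alpha=0.05):
    """Negative of the Equation (5) objective, to be minimised.

    heads_z / heads_z_prime: lists of (batch, C) softmax outputs, one per head,
    for two augmentations of the same minibatch. Detaching the earlier head in
    each cross-head comparison stops its gradients, so head i is only pushed
    away from heads j < i (the hierarchy described above)."""
    k = len(heads_z)
    mi_pull = sum(mi_aug(z, zp) for z, zp in zip(heads_z, heads_z_prime)) / k
    mi_push = 0.0
    for i in range(1, k):                      # 0-based head i has i comparisons
        cross = sum(mi_aug(heads_z[j].detach(), heads_z[i]) for j in range(i))
        mi_push = mi_push + cross / (i + 1)    # scale by the (1-based) head index
    return -(mi_pull - alpha * mi_push)
```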
" }, { "heading": "3.3 DESIGN AND TRAINING CHOICES", "text": "The architecture (Figure 1) is based on a ResNet-18 backbone, where each residual block has two layers (with a skip connection over these). Blocks 1 to 3 have 64, 128, and 256 units, respectively. Each parallel final block (4 to 8, here) has 512 units. Each MLP has a single hidden layer of width 200. Early experiments showed that entire block repetition was important for model flexibility. Similar to IIC (Ji et al., 2019), we used four data augmentation repeats with a batch size of 220.

DHOG maximises MI between discrete labellings from different data augmentations. This is equivalent to a clustering and is similar to IIC. There are, however, key differences. In our experiments:
• We train for 1000 epochs with a cosine annealing learning rate schedule.
• We do not use Sobel edge-detection or any other preprocessing as a fixed processing step.
• We make use of the fast auto-augment CIFAR-10 data augmentation strategy (for all tested datasets) found by Lim et al. (2019). We then randomly apply grayscale (with p = 0.5) and take random square crops of sizes 64 and 20 pixels for STL-10 and the other datasets, respectively.

The choice of data augmentation is important, and we acknowledge that for a fair comparison to IIC the same augmentation strategy must be used. The ablation of any DHOG-specific loss (when α = 0) largely recreates the IIC approach but with augmentations, network and head structure matched to DHOG; this enables a fair comparison between an IIC and a DHOG approach.

Since STL-10 has much more unlabelled data of a similar but broader distribution than the training data, the idea of ‘overclustering’ was used by Ji et al. (2019); they used more clusters than the number of classes (70 versus 10 in this case). We repeat each head with an overclustering head that plays no part in the cross-head MI minimisation. The filter widths are doubled for STL-10. We interspersed the training data evenly and regularly through the minibatches.

To determine the DHOG cross-head MI-minimisation coefficient, α, we carried out a non-exhaustive hyper-parameter search using only CIFAR-10 images (without the labels), assessing performance on a held-out validation set sampled from the training data. This did not use the evaluation data.

Assessment Once learned, the optimal head can be identified either using the highest MI, or using a small set of labelled data. Alternatively, all heads can be retained as different potential solutions, with posthoc selection of the best head according to some downstream task. In this paper, the head that maximises the normalised mutual information on the training data is chosen. This is then fully unsupervised, as with the head selection protocol of IIC. We also give results for the best posthoc head to show the potential for downstream analysis." }, { "heading": "4 EXPERIMENTS", "text": "The datasets used for assessment were CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100-20 (CIFAR-100 (Krizhevsky et al., 2009) with classes grouped into 20 super-classes), CINIC-10 (Darlow et al., 2018) (an extension of CIFAR-10 using images from ImageNet (Deng et al., 2009) of similar classes), Street View House Numbers (Netzer et al., 2011) (SVHN), and STL-10 (Coates et al., 2011). For CINIC-10, only the standard training set of 90000 images (without labels) was used for training.

Table 1 gives the accuracy, normalised mutual information (NMI), and adjusted rand index (ARI) between remapped assignments and classification targets. Before assessment, a labelling-to-class remapping was computed using the training data and the Hungarian method (Kuhn, 1955). The results listed for DHOG are averages over 3 seeded runs.
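A small sketch of this labelling-to-class remapping: the Hungarian method (via scipy's linear_sum_assignment) finds the cluster-to-class assignment that maximises overlap on the training data; labels are used only for this remapping, never for learning. The helper name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def remapped_accuracy(preds, targets, n_classes):
    """Map cluster indices to classes with the Hungarian method (Kuhn, 1955)
    and report the resulting accuracy."""
    cost = np.zeros((n_classes, n_classes))
    for p, t in zip(preds, targets):
        cost[p, t] -= 1                      # negative counts: maximise overlap
    rows, cols = linear_sum_assignment(cost)
    mapping = dict(zip(rows, cols))
    remapped = np.array([mapping[p] for p in preds])
    return (remapped == np.asarray(targets)).mean()
```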
At the time of writing, in terms of all measured metrics, DHOG outperformed the other relevant fully-unsupervised end-to-end clustering methods, with an accuracy improvement of 4.3% on CIFAR-10, 0.35% on CIFAR-100-20, and 7.2% on SVHN. Importantly, no Sobel edge-detection was used.

We used an unsupervised posthoc head selection using NMI(z, z′), which corresponds directly to the original MI objective. The selected heads almost always corresponded with the head that maximised NMI(z, y), where y are the user-defined class labels. DHOG produces data groupings that:
1. Better maximise the widely used MI objective (between mappings of data augmentations); DHOG is therefore an effective mechanism for dealing with the suboptimal MI optimisation owing to greedy SGD discussed in this work. Table 2 gives the NMI with and without DHOG (controlled by α) to confirm this.
2. Correspond better with the challenging underlying object classification test objective.

It is only on STL-10 that DHOG never beat the current state-of-the-art. Our aim was to show that the simple hierarchical ordering of heads in DHOG improves performance. The difference between STL-10 with and without the cross-head MI minimisation term (controlled by α) shows a marked improvement. IIC effectively used an overclustering procedure to achieve good results on STL-10.

The advantage of a hierarchical ordering is evident when considering the ablation study: with (α = 0.05) and without (α = 0) cross-head MI minimisation. Figure 2 (a) and (b) are accuracy-versus-head curves, showing that without cross-head MI minimisation later heads converge to similar solutions. The confusion matrices in Figure 3 (b) show the classes the final learned network confuses in CIFAR-10. Compare this to the confusion matrix in Figure 3 (a), where α = 0, and note the greater prevalence of cross-class confusion.

Baseline comparison (table excerpt):
Method | Accuracy | NMI(z, y) | ARI
K-means on pixels | 21.18 ± 0.0170 | 0.0811 ± 0.0001 | 0.0412 ± 0.0001
Cartesian K-means | 22.89 | 0.0871 | 0.0487

A claim throughout this paper is that greedy training of neural networks can result in sub-optimal MI maximisation. Table 2 shows that for all datasets except STL-10 (for which further experiments are needed) DHOG resulted in a better MI result, thereby directly improving the training objective." }, { "heading": "5 CONCLUSION", "text": "We presented deep hierarchical object grouping (DHOG): a method that addresses the challenges faced by current data augmentation-driven unsupervised representation learning methods that maximise mutual information. Learning a good representation of an image using data augmentations is limited by the user, who chooses the set of plausible data augmentations but is unable to cost-effectively define an ideal set of augmentations. We argue and show that learning using greedy optimisation typically causes models to get stuck in local optima, since the data augmentations fail to fully describe the sought-after invariances to all task-irrelevant information.

We address this pitfall via a simple-to-complex ordered sequence of representations. DHOG works by minimising mutual information between these representations, such that those later in the hierarchy are encouraged to produce unique and independent discrete labellings of the data (w.r.t. earlier representations). Therefore, later heads avoid becoming stuck in the same local optima of the original mutual information objective (between augmentations, applied separately to each head).
Our tests showed that DHOG resulted in an improvement on CIFAR-10, CIFAR-100-20, and SVHN, without using preprocessing such as Sobel edge detection, and a consistent improvement of the underlying MI objective." } ]
2020
DHOG: DEEP HIERARCHICAL OBJECT GROUPING
SP:22fbfa80cf81ea79a19faee749e9c8b2e23f1f3f
[ "The paper presents the first GPU-capable library implementing the _\"signature\"_ and _\"log-signature\"_ functions as well as their gradients. It introduces these transformations to a machine learning audience, as well as their recent uses in ML, then proposes algorithmic improvements that reduce the necessary computation. The resulting library is benchmarked against existing implementations, and the code, benchmarks, and proofs are included in supplementary materials." ]
Signatory is a library for calculating and performing functionality related to the signature and logsignature transforms. The focus is on machine learning, and as such includes features such as CPU parallelism, GPU support, and backpropagation. To our knowledge it is the first GPU-capable library for these operations. Signatory implements new features not available in previous libraries, such as efficient precomputation strategies. Furthermore, several novel algorithmic improvements are introduced, producing substantial real-world speedups even on the CPU without parallelism. The library operates as a Python wrapper around C++, and is compatible with the PyTorch ecosystem. It may be installed directly via pip. Source code, documentation, examples, benchmarks and tests may be found at https://github.com/patrick-kidger/signatory. The license is Apache-2.0.
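For orientation, basic use of the library looks roughly like the following (shapes and depth are illustrative; consult the linked documentation for the authoritative API, and note that we assume the documented (batch, stream, channels) convention for paths and that the installed signatory build matches the local PyTorch version).

```python
import torch
import signatory  # pip install signatory

# A batch of 32 paths, each a stream of 100 points in 5 channels.
path = torch.rand(32, 100, 5, requires_grad=True)

sig = signatory.signature(path, depth=3)        # shape (32, 5 + 5**2 + 5**3)
logsig = signatory.logsignature(path, depth=3)  # compressed log-signature

sig.sum().backward()  # gradients flow back to the path, on CPU or GPU
```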
[ { "affiliations": [], "name": "Patrick Kidger" }, { "affiliations": [], "name": "Terry Lyons" } ]
[ { "authors": [ "Patric Bonnier", "Patrick Kidger", "Imanol Pérez Arribas", "Cristopher Salvi", "Terry Lyons" ], "title": "Deep signature transforms", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Djalil Chafai", "Terry Lyons" ], "title": "https://coropa.sourceforge.io/ rde", "venue": "Coropa project,", "year": 2005 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "I. Chevyrev", "A. Kormilitzin" ], "title": "A primer on the signature method in machine learning", "venue": "arXiv preprint arXiv:1603.03788,", "year": 2016 }, { "authors": [ "Christa Cuchiero", "Lukas Gonon", "Lyudmila Grigoryeva", "Juan-Pablo Ortega", "Josef Teichmann" ], "title": "Discrete-time signatures and randomness in reservoir computing", "venue": null, "year": 2010 }, { "authors": [ "Adeline Fermanian" ], "title": "Embedding and learning with signatures", "venue": null, "year": 2019 }, { "authors": [ "Amir Gholami", "Kurt Keutzer", "George Biros" ], "title": "Anode: Unconditionally accurate memory-efficient gradients for neural odes", "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Liam Hodgkinson", "Chris van der Heide", "Fred Roosta", "Michael Mahoney" ], "title": "Stochastic Normalizing Flows", "venue": null, "year": 2002 }, { "authors": [ "Patrick Kidger", "James Morrill", "Terry Lyons" ], "title": "Generalised Interpretable Shapelets for Irregular Time Series", "venue": null, "year": 2005 }, { "authors": [ "Franz J Király", "Harald Oberhauser" ], "title": "Kernels for sequentially ordered data", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "A.B. Kormilitzin", "K.E.A. Saunders", "P.J. Harrison", "J.R. Geddes", "T.J. Lyons" ], "title": "Application of the signature method to pattern recognition in the cequel clinical trial", "venue": null, "year": 2016 }, { "authors": [ "Pierre Lalonde", "Arun Ram" ], "title": "Standard lyndon bases of lie algebras and enveloping algebras", "venue": "Transactions of the American Mathematical Society,", "year": 1995 }, { "authors": [ "D. Levin", "T. Lyons", "H. Ni" ], "title": "Learning from the past, predicting the statistics for the future, learning an evolving system", "venue": null, "year": 2013 }, { "authors": [ "Chenyang Li", "Xin Zhang", "Lianwen Jin" ], "title": "LPSNet: a novel log path signature feature based hand gesture recognition framework", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Shujian Liao", "Terry Lyons", "Weixin Yang", "Hao Ni" ], "title": "Learning stochastic differential equations using rnn with log signature", "venue": null, "year": 2019 }, { "authors": [ "Terry Lyons" ], "title": "Rough paths, signatures and the modelling of functions on streams", "venue": null, "year": 2014 }, { "authors": [ "Terry Lyons" ], "title": "esig, 2017. 
URL https://esig.readthedocs.io", "venue": null, "year": 2017 }, { "authors": [ "Terry Lyons", "Michael Caruana", "Thierry Levy" ], "title": "Differential equations driven by rough paths", "venue": "École d’Été de Probabilités de Saint-Flour XXXIV -", "year": 2004 }, { "authors": [ "Terry Lyons", "Hao Ni", "Harald Oberhauser" ], "title": "A feature set for streams and an application to highfrequency financial tick data", "venue": "In Proceedings of the 2014 International Conference on Big Data Science and Computing,", "year": 2014 }, { "authors": [ "Terry J Lyons" ], "title": "Differential equations driven by rough signals", "venue": "Revista Matemática Iberoamericana,", "year": 1998 }, { "authors": [ "Ming Min", "Tomoyuki Ichiba" ], "title": "Convolutional Signatures for Sequential Data", "venue": null, "year": 2009 }, { "authors": [ "Michael Moor", "Max Horn", "Christian Bock", "Karsten Borgwardt", "Bastian Rieck" ], "title": "Path Imputation Stratgies for Signature Models", "venue": null, "year": 2005 }, { "authors": [ "James Morrill", "Adeline Fermanian", "Patrick Kidger", "Terry Lyons" ], "title": "A Generalised Signature Method for Time", "venue": "Series. arXiv:2006.00873,", "year": 2020 }, { "authors": [ "James Morrill", "Patrick Kidger", "Cristopher Salvi", "James Foster", "Terry Lyons" ], "title": "Neural CDEs for Long Time-Series via the Log-ODE Method", "venue": "Machine Learning and the Physical Sciences, NeurIPS Workshop,", "year": 2020 }, { "authors": [ "James Lewis Morrill", "Andrey Kormilitzin", "Alejo J Nevado-Holgado", "Sumanth Swaminathan", "Sam Howison", "Terry Lyons" ], "title": "The signature-based model for early detection of sepsis from electronic health records in the intensive care unit", "venue": "In International Conference in Computing in Cardiology", "year": 2019 }, { "authors": [ "Hao Ni", "Lukasz Szpruch", "Magnus Wiese", "Shujian Liao", "Baoren Xiao" ], "title": "Conditional SigWasserstein GANs for Time Series Generation", "venue": null, "year": 2006 }, { "authors": [ "Imanol Perez Arribas", "Guy M. Goodwin", "John R. Geddes", "Terry Lyons", "Kate E.A. Saunders" ], "title": "A signature-based machine learning model for distinguishing bipolar disorder and borderline personality disorder", "venue": "Translational Psychiatry,", "year": 2018 }, { "authors": [ "Imanol Perez Arribas", "Cristopher Salvi", "Lukasz" ], "title": "Szpruch. Sig-SDEs model for quantitative finance", "venue": null, "year": 2006 }, { "authors": [ "Jeremy Reizenstein. Iterated-integral signatures in machine learning." 
], "title": "URL http://wrap", "venue": "warwick.ac.uk/131162/.", "year": 2019 }, { "authors": [ "Jeremy Reizenstein", "Benjamin Graham" ], "title": "The iisignature library: efficient calculation of iteratedintegral signatures and log signatures", "venue": null, "year": 2018 }, { "authors": [ "Christophe Reutenauer" ], "title": "Free Lie Algebras", "venue": null, "year": 1993 }, { "authors": [ "Csaba Toth", "Harald Oberhauser" ], "title": "Bayesian Learning from Sequential Data using Gaussian Processes with Signature Covariances", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Weixin Yang", "Lianwen Jin", "Manfei Liu" ], "title": "Deepwriterid: An end-to-end online text-independent writer identification system", "venue": "IEEE Intelligent Systems,", "year": 2016 }, { "authors": [ "Weixin Yang", "Lianwen Jin", "Hao Ni", "Terry Lyons" ], "title": "Rotation-free online handwritten character recognition using dyadic path signature features, hanging normalization, and deep neural network", "venue": "In 2016 23rd International Conference on Pattern Recognition (ICPR),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The signature transform, sometimes referred to as the path signature or simply signature, is a central object in rough path theory (Lyons, 1998; 2014). It is a transformation on differentiable paths1, and may be thought of as loosely analogous to the Fourier transform. However whilst the Fourier transform extracts information about frequency, treats each channel separately, and is linear, the signature transform exacts information about order and area, explicitly considers combinations of channels, and is in a precise sense ‘universally nonlinear’ (Bonnier et al., 2019, Proposition A.6).\nThe logsignature transform (Liao et al., 2019) is a related transform, that we will also consider. In both cases, by treating sequences of data as continuous paths, then the (log)signature transform may be applied for use in problems with sequential structure, such as time series. Indeed there is a significant body of work using the (log)signature transform in machine learning, with examples ranging from handwriting identification to sepsis prediction, see for example Morrill et al. (2019); Fermanian (2019); Király & Oberhauser (2019); Toth & Oberhauser (2020); Morrill et al. (2020b).\nEarlier work often used the signature and logsignature transforms as a feature transformation. See Levin et al. (2013); Chevyrev & Kormilitzin (2016); Yang et al. (2016a;b); Kormilitzin et al. (2016); Li et al. (2017); Perez Arribas et al. (2018) for a range of examples. In this context, when training a model on top, it is sufficent to simply preprocess the entire dataset with the signature or logsignature transform, and then save the result.\nHowever, recent work has focused on embedding the signature and logsignature transforms within neural networks. Recent work includes Bonnier et al. (2019); Liao et al. (2019); Moor et al. (2020); Morrill et al. (2020a); Kidger et al. (2020) among others. In this context, the signature and logsignature transforms are evaluated many times throughout a training procedure, and as such efficient and differentiable implementations are crucial. Previous libraries (Lyons, 2017; Reizenstein & Graham, 2018) have been CPU-only and single-threaded, and quickly become the major source of slowdown when training and evaluating these networks.\n1And may be extended to paths of bounded variation, or merely finite p-variation (Lyons et al., 2004)." }, { "heading": "1.1 CONTRIBUTIONS", "text": "We introduce Signatory, a CPU- and GPU-capable library for calculating and performing functionality related to the signature and logsignature transforms. To our knowledge it is the first GPU-capable library for these operations. The focus is on machine learning applications.\nSignatory is significantly faster than previous libraries (whether run on the CPU or the GPU), due to a combination of parallelism and novel algorithmic improvements. In particular the latter includes both uniform and asymptotic rate improvements over previous algorithms. Additionally, Signatory provides functionality not available in previous libraries, such as precomputation strategies for efficient querying of the (log)signature transform over arbitrary overlapping intervals.\nThe library integrates with the open source PyTorch ecosystem and runs on Linux or Windows. Documentation, examples, benchmarks and tests form a part of the project.\nMuch of the code is written in C++ primitives and the CPU implementation utilises OpenMP. 
The backward operations are handwritten for both speed and memory efficiency, and do not rely on the autodifferentiation provided by PyTorch.\nThe source code is located at https://github.com/patrick-kidger/signatory, documentation and examples are available at https://signatory.readthedocs.io, and the project may be installed directly via pip.\nThis paper is not a guide to using Signatory; for that we refer to the documentation. This is meant as a technical exposition of its innovations." }, { "heading": "1.2 APPLICATIONS", "text": "Signatory has already seen a rapid uptake amongst the signature community. Recent work using Signatory includes Morrill et al. (2020b) and Perez Arribas et al. (2020), who involve signatures in neural differential equations, and Moor et al. (2020) and Min & Ichiba (2020), who study deep signature models (Bonnier et al., 2019). Meanwhile Ni et al. (2020) apply Signatory to hybridise signatures with GANs, and Morrill et al. (2020a) create a generalised framework for the “signature method”. As a final example, Signatory is now itself a dependency for other libraries (Kidger, 2020)." }, { "heading": "2 BACKGROUND", "text": "We begin with some exposition on the theory of the signature and logsignature transforms, giving definitions first and offering intuition afterwards. Also see Reizenstein & Graham (2018) for an introduction focusing on computational concerns, and Lyons et al. (2004) and Hodgkinson et al. (2020) for pedagogical introductions to the motivating theory of rough paths." }, { "heading": "2.1 THE SIGNATURE TRANSFORM", "text": "Definition 1. Let $\mathbb{R}^{d_1} \otimes \mathbb{R}^{d_2} \otimes \cdots \otimes \mathbb{R}^{d_n}$ denote the space of all real tensors with shape $d_1 \times d_2 \times \cdots \times d_n$. There is a corresponding binary operation $\otimes$, called the tensor product, which maps a tensor of shape $(d_1, \ldots, d_n)$ and a tensor of shape $(e_1, \ldots, e_m)$ to a tensor of shape $(d_1, \ldots, d_n, e_1, \ldots, e_m)$ via $(A_{i_1,\ldots,i_n}, B_{j_1,\ldots,j_m}) \mapsto A_{i_1,\ldots,i_n} B_{j_1,\ldots,j_m}$. For example, when applied to two vectors it reduces to the outer product. Let $(\mathbb{R}^d)^{\otimes k} = \mathbb{R}^d \otimes \cdots \otimes \mathbb{R}^d$, and $v^{\otimes k} = v \otimes \cdots \otimes v$ for $v \in \mathbb{R}^d$, in each case with $k-1$ many $\otimes$.\nDefinition 2. Let $N \in \mathbb{N}$. The signature transform to depth $N$ is defined as $$\mathrm{Sig}^N \colon \{ f \in C([0,1]; \mathbb{R}^d) \mid f \text{ differentiable} \} \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k},$$ $$\mathrm{Sig}^N(f) = \left( \int \cdots \int_{0 < t_1 < \cdots < t_k < 1} \frac{df}{dt}(t_1) \otimes \cdots \otimes \frac{df}{dt}(t_k) \, dt_1 \cdots dt_k \right)_{1 \le k \le N}. \quad (1)$$\nMost texts define the signature transform using the notation of stochastic calculus. Here, we sacrifice some generality (that is not needed in this context) in favour of more widely-used notation.2\nThe signature transform may naturally be extended to sequences of data.\nDefinition 3. The space of sequences of data over a set $V$ is $$\mathcal{S}(V) = \{ x = (x_1, \ldots, x_L) \mid L \in \mathbb{N},\ x_i \in V \text{ for all } i \}.$$ An interval of $(x_1, \ldots, x_L) \in \mathcal{S}(V)$ is $(x_i, \ldots, x_j) \in \mathcal{S}(V)$ for some $1 \le i < j \le L$.\nDefinition 4. Let $x = (x_1, \ldots, x_L) \in \mathcal{S}(\mathbb{R}^d)$ with $L \ge 2$. Let $f \colon [0,1] \to \mathbb{R}^d$ be the unique continuous piecewise affine function such that $f(\tfrac{i-1}{L-1}) = x_i$ for all $i$, and which is affine on the pieces in between. Let $N \in \mathbb{N}$. Then define $\mathrm{Sig}^N(x) = \mathrm{Sig}^N(f)$. In this way we interpret $\mathrm{Sig}^N$ as a map $$\mathrm{Sig}^N \colon \mathcal{S}(\mathbb{R}^d) \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}.$$\nNote that the choice of $\tfrac{i-1}{L-1}$ is unimportant; any $L$ points in $[0,1]$ would suffice, and in fact the definition is invariant to this choice (Bonnier et al., 2019, Definition A.10)." }, { "heading": "2.2 THE GROUPLIKE STRUCTURE", "text": "With $A_0 = B_0 = 1 \in \mathbb{R}$ on the right hand side, define $\boxtimes$ by3 $$\boxtimes \colon \left( \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k} \right) \times \left( \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k} \right) \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k},$$ $$(A_1, \ldots, A_N) \boxtimes (B_1, \ldots, B_N) \mapsto \left( \sum_{j=0}^{k} A_j \otimes B_{k-j} \right)_{1 \le k \le N}.$$\nChen's identity (Lyons et al., 2004, Theorem 2.9) states that the image of the signature transform forms a noncommutative group with respect to $\boxtimes$. That is, given a sequence of data $(x_1, \ldots, x_L) \in \mathcal{S}(\mathbb{R}^d)$ and some $j \in \{2, \ldots, L-1\}$, then $$\mathrm{Sig}^N((x_1, \ldots, x_L)) = \mathrm{Sig}^N((x_1, \ldots, x_j)) \boxtimes \mathrm{Sig}^N((x_j, \ldots, x_L)). \quad (2)$$\nFurthermore the signature of a sequence of length two may be computed explicitly from the definition. Letting $$\exp \colon \mathbb{R}^d \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}, \qquad \exp \colon v \mapsto \left( v, \frac{v^{\otimes 2}}{2!}, \frac{v^{\otimes 3}}{3!}, \ldots, \frac{v^{\otimes N}}{N!} \right),$$ then $\mathrm{Sig}^N((x_1, x_2)) = \exp(x_2 - x_1)$.\nWith Chen's identity, this implies that the signature transform may be computed by evaluating $$\mathrm{Sig}^N((x_1, \ldots, x_L)) = \exp(x_2 - x_1) \boxtimes \exp(x_3 - x_2) \boxtimes \cdots \boxtimes \exp(x_L - x_{L-1}). \quad (3)$$" }, { "heading": "2.3 THE LOGSIGNATURE, INVERTED SIGNATURE, AND INVERTED LOGSIGNATURE", "text": "We denote the group inverse by $\cdot^{-1}$. Additionally a notion of logarithm may be defined (Liao et al., 2019), where $$\log \colon \mathrm{image}(\mathrm{Sig}^N) \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}. \quad (4)$$\nThis then defines the notions of the inverted signature transform, logsignature transform and inverted logsignature transform as $$\mathrm{InvertSig}^N(x) = \mathrm{Sig}^N(x)^{-1}, \qquad \mathrm{LogSig}^N(x) = \log(\mathrm{Sig}^N(x)), \qquad \mathrm{InvertLogSig}^N(x) = \log(\mathrm{Sig}^N(x)^{-1})$$ respectively. We emphasise that the inverted signature and logsignature transforms are not the inverse maps of the signature and logsignature transforms.\nThe logsignature transform extracts the same information as the signature transform, but represents the information in a much more compact way, as $\mathrm{image}(\log)$ is a proper subspace4 of $\prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$. Its dimension is $w(d, N) = \sum_{k=1}^{N} \frac{1}{k} \sum_{i \mid k} \mu(\tfrac{k}{i}) d^i$, which is known as Witt's formula (Lothaire, 1997). Here $\mu$ is the Möbius function.\n2Additionally, many texts also include a k = 0 term, which is defined to equal one. We omit this as it does not carry any information, and is therefore irrelevant to the task of machine learning.\n3Most texts use $\otimes$ rather than $\boxtimes$ to denote this operation, as it may be regarded as a generalisation of the tensor product. That will not be important to us, however, so we use differing notation to aid interpretation.\n4log is actually a bijection: $\mathrm{image}(\mathrm{Sig}^N)$ is some curved manifold in $\prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$, and log is the map that straightens it out into a linear subspace." }, { "heading": "2.4 SIGNATURES IN MACHINE LEARNING", "text": "In terms of the tensors used by most machine learning frameworks, the (inverted) signature and logsignature transforms of depth $N$ may both be thought of as consuming a tensor of shape $(b, L, d)$, corresponding to a batch of $b$ different sequences of data, each of the form $(x_1, \ldots, x_L)$ for $x_i \in \mathbb{R}^d$. The (inverted) signature transform then produces a tensor of shape $(b, \sum_{k=1}^{N} d^k)$, whilst the (inverted) logsignature transform produces a tensor of shape $(b, w(d, N))$. We note that these can easily be large, and much research has focused on ameliorating this (Bonnier et al., 2019; Morrill et al., 2020a; Cuchiero et al., 2020).\nAll of these transforms are in fact differentiable with respect to $x$, and so may be backpropagated through. These transforms may thus be thought of as differentiable operations between tensors, in the way usually performed by machine learning frameworks." }, { "heading": "2.5 INTUITION", "text": "The (inverted) signature and logsignature transforms all have roughly the same intuition as one another. (They all represent the same information, just in slightly different ways.)
Given a sequence of data $(x_1, \ldots, x_L)$, these transforms may be used as binning functions, feature extractors, or nonlinearities, to give summary statistics over the data.\nThese summary statistics describe the way in which the data interacts with dynamical systems (Morrill et al., 2020b). Indeed, we have already linked the signature to the exponential map, which is defined as the solution to a differential equation: $\frac{d\exp}{dt}(t) = \exp(t)$. The signature may in fact be defined as the solution of a controlled exponential map: $d\,\mathrm{Sig}^N(f)(t) = \mathrm{Sig}^N(f)(t) \otimes df(t)$, so that $\mathrm{Sig}^N(f)$ is the response of a particular dynamical system driven by $f$. The theory here is somewhat involved, and is not an interpretation we shall pursue further here.\nAn equivalent, more straightforward interpretation is arrived at by observing that the terms of the exponential of a scalar, $\exp \colon x \in \mathbb{R} \mapsto (1, x, \tfrac{1}{2}x^2, \ldots)$, produce (up to scaling factors) every monomial of its input. Classical machine learning takes advantage of this as a feature extractor in polynomial regression. The signature transform is the equivalent operation when the input is a sequence." }, { "heading": "3 CODE EXAMPLE", "text": "Signatory is designed to be Pythonic, and offers operations working just like any other PyTorch operation, outputting PyTorch tensors. A brief example is:\nimport signatory\nimport torch\nbatch, stream, channels, depth = 1, 10, 2, 4\npath = torch.rand(batch, stream, channels, requires_grad=True)\nsignature = signatory.signature(path, depth)\nsignature.sum().backward()" }, { "heading": "4 ALGORITHMIC IMPROVEMENTS", "text": "We present several novel algorithmic improvements for computing signatures and logsignatures." }, { "heading": "4.1 FUSED MULTIPLY-EXPONENTIATE", "text": "Recall from equation (3) that the signature may be computed by evaluating several exponentials and several $\boxtimes$ products. We begin by finding that it is beneficial to compute $$\left( \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k} \right) \times \mathbb{R}^d \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}, \qquad A, z \mapsto A \boxtimes \exp(z)$$ as a fused operation. Doing so has uniformly (over $d, N$) fewer scalar multiplications than the composition of the individual exponential and $\boxtimes$, and in fact reduces the asymptotic complexity of this operation from $O(N d^N)$ to $O(d^N)$. Furthermore this rate is now optimal, as the result (an element of $\prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$) is itself of size $O(d^N)$. The bulk of a signature computation may then be sped up by writing it in terms of this fused operation. See equation (3): a single exponential is required at the start, followed by a reduction with respect to this fused multiply-exponentiate. This gives substantial real-world speedups; see the benchmarks of Section 6.\nThe fusing is done by expanding $$A \boxtimes \exp(z) = \left( \sum_{i=0}^{k} A_i \otimes \frac{z^{\otimes(k-i)}}{(k-i)!} \right)_{1 \le k \le N},$$ at which point the $k$-th term may be computed by a scheme in the style of Horner's method: $$\sum_{i=0}^{k} A_i \otimes \frac{z^{\otimes(k-i)}}{(k-i)!} = \left( \left( \cdots \left( \left( \frac{z}{k} + A_1 \right) \otimes \frac{z}{k-1} + A_2 \right) \otimes \frac{z}{k-2} + \cdots \right) \otimes \frac{z}{2} + A_{k-1} \right) \otimes z + A_k. \quad (5)$$\nSee Appendix A.1 for the mathematics, including proofs of both the asymptotic complexity and the uniformly fewer multiplications." }, { "heading": "4.2 IMPROVED PRECOMPUTATION STRATEGIES", "text": "Given a sequence of data x = (x1, . . . , xL), it may be desirable to query SigN ((xi, . . .
, xj)) for many different pairs i, j. We show that this query may be computed in just O(1) (in L) time and memory by using O(L) precomputation and storage. Previous theoretical work has achieved only O(logL) inference with O(L logL) precomputation (Chafai & Lyons, 2005).\nDoing so is surprisingly simple. Precompute SigN ((x1, . . . , xj)) and InvertSigN ((x1, . . . , xj)) for all j. This may be done in only O(L) work, by iteratively computing each signature via\nSigN ((x1, . . . , xj)) = Sig N ((x1, . . . , xj−1)) Sig N ((xj−1, xj)), (6) with a similar relation for the inverted signature. Then, at inference time use the group-like structure\nSigN ((xi, . . . , xj)) = InvertSig N ((x1, . . . , xi)) Sig N ((x1, . . . , xj)),\nfollowed by a log if it is a logsignature that is desired. As a single operation this is O(1) in L. We do remark that this should be used with caution, and may suffer from numerical stability issues when used for large i, j." }, { "heading": "4.3 MORE EFFICIENT LOGSIGNATURE BASIS", "text": "The logsignature transform of a path has multiple possible representations, corresponding to different possible bases of the ambient space, which is typically interpreted as a free Lie algebra (Reutenauer, 1993). The Lyndon basis is a typical choice (Reizenstein & Graham, 2018).\nWe show that there exists a more computationally efficient basis. It is mathematically unusual, as it is not constructed as a Hall basis. But if doing deep learning, then the choice of basis is (mostly) unimportant if the next operation is a learnt linear transformation.\nThe Lyndon basis uses Lyndon brackets as its basis elements. Meanwhile our new basis uses basis elements that, when written as a sum of Lyndon brackets and expanded as a sum of words, have precisely one word as a Lyndon word. This means that the coefficient of this basis element can be found cheaply, by extracting the coeffient of that Lyndon word from the tensor algebra representation of the logsignature. See Appendix A.2 for the full exposition." }, { "heading": "5 NEW FEATURES", "text": "Signatory provides several features not available in previous libraries." }, { "heading": "5.1 PARALLELISM", "text": "There are two main levels of parallelism. First is naïve parallelism over the batch dimension. Second, we observe that equation (3) takes the form of a noncommutative reduction with respect to the fused multiply-exponentiate. The operation is associative, and so this may be parallelised in the usual way for reductions, by splitting the computation up into chunks.\nParallelism on the CPU is implemented with OpenMP. For speed the necessary operations are written in terms of C++ primitives, and then bound into PyTorch." }, { "heading": "5.2 GPU SUPPORT", "text": "An important feature is GPU support, which is done via the functionality available through LibTorch. It was a deliberate choice not to write CUDA code; this is due to technical reasons for distributing the library in a widely-compatible manner. See Appendix B.\nThere are again two levels of parallelism, as described in the previous section.\nWhen the (log)signature transform is part of a deep learning model trained on the GPU, then GPU support offers speedups not just from the use of a GPU over the CPU, but also obviates the need for copying the data to and from the GPU." }, { "heading": "5.3 BACKPROPAGATION", "text": "Crucial for any library used in deep learning is to be able to backpropagate through the provided operations. 
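As a minimal sketch of what this requirement looks like from the user's side (the logsignature function is named elsewhere in this paper; that it accepts the same (batch, stream, channels) tensors as the signature function, and the particular shapes used, are assumptions here):

import torch
import signatory

# One path: a stream of 20 points in 4 channels, with gradients tracked.
path = torch.rand(1, 20, 4, requires_grad=True)
logsig = signatory.logsignature(path, 3)

# Reduce to a scalar and backpropagate; path.grad is then populated with
# the derivative of the scalar with respect to every point of the input path.
logsig.sum().backward()
print(path.grad.shape)  # torch.Size([1, 20, 4])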
Signatory provides full support for backpropagation through every provided operation. There has previously only been limited support for backpropagation through a handful of simple operations, via the iisignature library (Reizenstein & Graham, 2018).\nThe backpropagation computations are handwritten, rather than being generated autodifferentiably. This improves the speed of the computation by using C++ primitives rather than high-level tensors, and furthermore allows for improved memory efficiency, by exploiting a reversibility property of the signature (Reizenstein, 2019, Section 4.9.3). We discuss backpropagation in more detail in Appendix C." }, { "heading": "5.4 INVERTED SIGNATURES AND LOGSIGNATURES", "text": "Signatory provides the capability to compute inverted signatures and logsignatures, via the optional inverse argument to the signature and logsignature functions. This is primarily a convenience as\nSigN ((x1, . . . , xL)) −1 = SigN ((xL, . . . , x1))." }, { "heading": "5.5 EXPLOITING THE GROUPLIKE STRUCTURE", "text": "It is often desirable to compute (inverted) (log)signatures over multiple intervals of the same sequence of data. These calculations may jointly be accomplished more efficiently than by evaluating the signature transform over every interval separately. In some cases, if the original data has been discarded and only its signature is now known, exploiting this structure is the only way to perform the computation.\nHere we detail several notable cases, and how Signatory supports them. In all cases the aim is to provide a flexible set of tools that may be used together, so that wherever possible unecessary recomputation may be elided. Their use is also discussed in the documentation, including examples.\nCombining adjacent intervals Recall equation (2). If the two signatures on the right hand side of the equation are already known, then the signature of the overall sequence of data may be computed using only a single operation, without re-iterating over the data.\nThis operation is provided for by the multi_signature_combine and signature_combine functions. Expanding intervals Given a sequence of data (x1, . . . , xL) ∈ S ( Rd ) , a scenario that is particularly important for its use in Section 4.2 is to compute the signature of expanding intervals of the data,5\n(SigN ((x1, x2)),Sig N ((x1, x2, x3)), . . . ,Sig N ((x1, . . . , xL))). This may be interpreted as a sequence of signatures, that is to say an element of S (∏N\nk=1\n( Rd )⊗k) .\nBy equation (6), this may be done efficiently in only O(L) work, and with all the earlier signatures available as byproducts, for free, of computing the final element SigN ((x1, . . . , xL)).\nThis is handled by the optional stream argument to the signature function and to the logsignature function.\nArbitrary intervals Given a sequence of data x = (x1, . . . , xL), it may be desirable to query SigN ((xi, . . . , xj)) for many i, j such that 1 ≤ i < j ≤ L, as in Section 4.2. Using the efficient precomputation strategy described there, Signatory provides this capability via the Path class. Keeping the signature up-to-date Suppose we have a sequence of data (x1, . . . , xL) ∈ S ( Rd ) whose signature SigN ((x1, . . . , xL)) has already been computed. New data subsequently arrives, some (xL+1, . . . , xL+M ) ∈ S ( Rd ) , and we now wish to update our computed signature, for example to compute the sequence of signatures\n(SigN ((x1, . . . , xL+1)), . . . ,Sig N ((x1, . . . , xL+M ))). 
(7)\nThis could be done by computing the signatures over $(x_{L+1}, \ldots, x_{L+M})$, and combining them together as above. However, if these signatures (over $(x_{L+1}, \ldots, x_{L+M})$) are not themselves of interest, then this approach may be improved upon, as it only exploits the grouplike structure, and not the fused multiply-exponentiate described in Section 4.1.\nComputing them in this more efficient way may be handled via the basepoint and initial arguments to the signature function, and via the update method of the Path class." }, { "heading": "6 BENCHMARK PERFORMANCE", "text": "We are aware of two existing software libraries providing similar functionality, esig (Lyons, 2017) and iisignature (Reizenstein & Graham, 2018). We ran a series of benchmarks against the latest versions of both of these libraries, namely esig 0.6.31 and iisignature 0.24. The computer used was equipped with a Xeon E5-2960 v4 and a Quadro GP100, and was running Ubuntu 18.04 and Python 3.7.\n5Note that we start with SigN ((x1, x2)), as two is the shortest a sequence of data can be whilst still defining a path; see Definition 4." }, { "heading": "6.1 DIRECT COMPARISON", "text": "For esig and iisignature we report run time on the CPU, whilst for Signatory we report run time on the GPU, on the CPU with parallelism, and on the CPU without parallelism. As it is in principle possible to parallelise these alternative libraries using Python's multiprocessing module6, the most important comparisons are to Signatory on the CPU without parallelism (representing like-for-like computational resources), and on the GPU (representing the best possible performance).\n6Subject to nontrivial overhead.\nWe begin with a benchmark for the forward operation through the signature transform. We consider a batch of 32 sequences, of length 128. We then investigate the scaling as we vary either the number of channels (over 2–7) in the input sequences, or the depth (over 2–9) of the signature transform. For varying channels, the depth was fixed at 7. For varying depths, the number of channels was fixed at 4.\nEvery test case is repeated 50 times and the fastest time taken. See Figure 1. Note the logarithmic scale.\nWe observe that iisignature is Signatory's strongest competitor in all cases. Signatory and iisignature are comparable for the very smallest of computations. As the computation increases in size, the CPU implementations of Signatory immediately overtake iisignature, followed by the GPU implementation.\nFor larger computations, Signatory can be orders of magnitude faster. For example, to compute the signature transform with depth and number of channels both equal to 7, iisignature takes 20.9 seconds to perform the computation. In contrast, running on the CPU without parallelism, Signatory takes only 3.8 seconds, which represents a 5.5× speedup. We emphasise that the same computational resources (including lack of parallelism) were used for both. To see the benefits of a GPU implementation over a CPU implementation (the primary motivation for Signatory's existence), we observe that Signatory takes only 0.16 seconds to compute this same operation. Compared to the best previous alternative, iisignature, this represents a 132× speedup.\nNext, we consider the backward operation through the signature transform. We vary over multiple inputs as before. See Figure 2. We again observe the same behaviour. iisignature is Signatory's strongest competitor, but is still orders of magnitude slower on anything but the very smallest of problems.
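As a hedged sketch of the kind of like-for-like comparison being reported (looping over the batch assumes iisignature's sig function consumes one (length, channels) NumPy array at a time; the paper's actual benchmark scripts, described in Appendix D, repeat each measurement 50 times and keep the fastest):

import time
import iisignature
import signatory
import torch

batch, stream, channels, depth = 32, 128, 7, 7
path = torch.rand(batch, stream, channels, dtype=torch.double)

# Signatory: a single batched call.
start = time.time()
signatory.signature(path, depth)
print('signatory:', time.time() - start, 'seconds')

# iisignature: CPU-only, so the batch is processed sample by sample.
start = time.time()
for sample in path.numpy():
    iisignature.sig(sample, depth)
print('iisignature:', time.time() - start, 'seconds')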
For example, to backpropagate through the signature transform, with depth and number of channels both equal to 7, then Signatory on the CPU without parallelism takes 13.7 seconds. Meanwhile, iisignature takes over 2 minutes – 128 seconds – to perform this computation on like-for-like computational resources. Running Signatory on the GPU takes a fraction of a second, specifically 0.772 seconds. These represent speedups of 9.4× and 166× respectively. For further speed benchmarks, a discussion on memory usage benchmarks, the precise numerical values of the graphs presented here, and code to reproduce these benchmarks, see Appendix D. We observe the same consistent improvements on these additional benchmarks." }, { "heading": "6.2 DEEP LEARNING EXAMPLE", "text": "To emphasise the benefit of Signatory to deep learning applications, we consider training a deep signature model (Bonnier et al., 2019) on a toy dataset of geometric Brownian motion samples. The samples have one of two different volatilities, and the task is to perform binary classification.\nThe model sweeps a small feedforward network over the input sequence (to produce a sequence of hidden states), applies the signature transform, and then maps to a binary prediction via a final learnt linear map. This model has learnt parameters prior to the signature transform, and so in particular backpropagation through the signature transform is necessary.\nThe signature transform is computed either using Signatory, or using iisignature. We train the model on the GPU and plot training loss against wall-clock time.\nBoth models train successfully, but the model using Signatory trains 210 times faster than the one using iisignature. This makes clear how signatures have previously represented the largest computational bottleneck. The improvement of 210× is even larger than the improvements obtained in the previous section. We attribute this to the fact that iisignature necessarily has the additional overhead copying data from the GPU to the CPU and back again." }, { "heading": "7 CONCLUSION", "text": "We have introduced Signatory, a library for performing functionality related to the signature and logsignature transforms, with a particular focus on applications to machine learning. Notable contributions are the speed of its operation, its GPU support, differentiability of every provided operation, and its novel algorithmic innovations." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the Engineering and Physical Sciences Research Council [EP/L015811/1]." }, { "heading": "A FURTHER DETAILS OF ALGORITHMIC IMPROVEMENTS", "text": "" }, { "heading": "A.1 FUSED MULTIPLY-EXPONENTIATE", "text": "The conventional way to compute a signature is to iterate through the computation described by equation (3): for each new increment, take its exponential, and it on to what has already been computed; repeat.\nOur proposed alternate way is to fuse the exponential and into a single operation, and then iteratively perform this fused operation.\nWe now count the number of multiplications required to compute( N∏\nk=1\n( Rd )⊗k)× Rd → N∏\nk=1\n( Rd )⊗k ,\nA, z 7→ A exp(z) for each approach.\nWe will establish that the fused operation uses fewer multiplications for all possible d ≥ 1 and N ≥ 1. We will then demonstrate that it is in fact of a lower asymptotic complexity." }, { "heading": "A.1.1 THE CONVENTIONAL WAY", "text": "The exponential is defined as\nexp: Rd → N∏\nk=1\n( Rd )⊗k ,\nexp: x 7→ ( x, x⊗2\n2! , x⊗3 3! , . . . 
, x⊗N N !\n) ,\nsee Bonnier et al. (2019, Proposition 15).\nNote that every tensor in the exponential is symmetric, and so in principle requires less work to compute than its number of elements would suggest. For the purposes of this analysis, to give the benefit of the doubt to a competing method, we shall assume that this is done (although taking advantage of this in practice is actually quite hard (Reizenstein & Graham, 2018, Section 2)). This takes\nN∑ k=2 ( d+ ( d+ k − 1 k )) scalar multiplications, using the formula for unordered sampling with replacement (Reizenstein & Graham, 2018, Section 2), under the assumption that each division by a scalar costs the same as a multiplication (which can be accomplished by precomputing their reciprocals and then multiplying by them).\nNext, we need to count the number of multiplications to perform a single .\nLet\nA,B ∈ N∏\nk=1\n( Rd )⊗k .\nLet A = (A1, . . . , AN ), with Ai = (A j1,...,ji i )1≤j1,...,ji≤d, and every Aj1,...,jii ∈ R. Additionally let A0 = 1. Similarly for B. Then is defined by\n:\n( N∏\nk=1\n( Rd )⊗k)×( N∏\nk=1\n( Rd )⊗k)→ N∏\nk=1\n( Rd )⊗k ,\n: A,B 7→\n( k∑\ni=0\nAi ⊗Bk−i ) 1≤k≤N , (8)\nwhere each Ai ⊗Bk−i = ( Aj1,...,jii B ĵ1,...,ĵk−i k−i ) 1≤j1,...,ji,ĵ1,...,ĵk−i≤d\nis the usual tensor product, the result is thought of as a tensor in (Rd)⊗k, and the summation is taken in this space. See Bonnier et al. (2019, Definition A.13).\nTo the authors’ knowledge there has been no formal analysis of a lower bound on the computational complexity of , and there is no better way to compute it than naïvely following this definition.\nThis, then, requires\nN∑ k=1 k−1∑ i=1 d∑ j1,...,ji=1 d∑ ĵ1,...,ĵk−i=1 1 = N∑ k=1 k−1∑ i=1 dk\n= N∑ k=1 (k − 1)dk\nscalar multiplications.\nThus the overall cost of the conventional way involves\nC(d,N) = N∑\nk=2\n( d+ ( d+ k − 1\nk\n)) + N∑ k=1 (k − 1)dk (9)\nscalar multiplications." }, { "heading": "A.1.2 THE FUSED OPERATION", "text": "Let A ∈ ∏N\nk=1\n( Rd )⊗k and z ∈ Rd. Then\nA exp(z) =\n( k∑\ni=0\nAi ⊗ z⊗(k−i)\n(k − i)! ) 1≤k≤N ,\nwhere the k-th term may be computed by a scheme in the style of Horner’s method:\nk∑ i=0 Ai ⊗ z⊗(k−i)\n(k − i)! =(( · · · (( z\nk +A1 ) ⊗ z k − 1 +A2 ) ⊗ z k − 2 + · · · ) ⊗ z 2 +Ak−1 ) ⊗ z +Ak. (10)\nAs before, we assume that the reciprocals 12 , . . . , 1 N have been precomputed, so that each division costs the same as a multiplication.\nThen we begin by computing z/2, . . . , z/N , which takes d(N − 1) multiplications. Computing the k-th term as in equation (10) then involves d2 + d3 + · · ·+ dk multiplications. This is because, working from innermost bracket to outermost, the first ⊗ produces a d× d matrix as the outer product of two size d vectors, and may thus be computed with d2 multiplications; the second ⊗ produces a d× d× d tensor from a d× d matrix and a size d vector, and may thus be computed with d3 multiplications; and so on.\nThus the overall cost of a fused multiply-exponentiate is\nF(d,N) = d(N − 1) + N∑\nk=1 k∑ i=2 di (11)\nscalar multiplications." }, { "heading": "A.1.3 COMPARISON", "text": "We begin by establishing the uniform bound F(d,N) ≤ C(d,N) for all d ≥ 1 and N ≥ 1. First suppose d = 1. Then\nF(1, N) = (N − 1) + N∑\nk=1\n(k − 1)\n≤ 2(N − 1) + N∑\nk=1\n(k − 1)\n= C(1, N).\nNow suppose N = 1. Then F(d, 1) = 0 = C(d, 1).\nNow suppose N = 2. Then\nF(d, 2) = d+ d2 ≤ d+ ( d+ 1\n2\n) + d2\n= C(d, 2)\nNow suppose d ≥ 2 and N ≥ 3. Then\nF(d,N) = d(N − 1) + N∑\nk=1 k∑ i=2 di\n= dN+2 − d3 − (N − 1)d2 + (N − 1)d\n(d− 1)2 . 
(12)\nAnd\nC(d,N) = N∑\nk=2\n( d+ ( d+ k − 1\nk\n)) + N∑ k=1 (k − 1)dk\n≥ N∑\nk=1\n(k − 1)dk\n= (N − 1)dN+2 −NdN+1 + d2\n(d− 1)2 . (13)\nThus we see that it suffices to show that\ndN+2 − d3 − (N − 1)d2 + (N − 1)d ≤ (N − 1)dN+2 −NdN+1 + d2, for d ≥ 2 and N ≥ 3. That is,\n0 ≤ dN+1(d(N − 2)−N) + d(d2 +N(d2 − 1) + 1). (14)\nAt this point d = 2, N = 3 must be handled as a special case, and may be verified by direct evaluation of equation (14). So now assume d ≥ 2, N ≥ 3, and that d = 2, N = 3 does not occur jointly. Then we see that equation (14) is implied by\n0 ≤ d(N − 2)−N and 0 ≤ d2 +N(d2 − 1) + 1. The second condition is trivially true. The first condition rearranges to N/(N − 2) ≤ d, which is now true for d ≥ 2, N ≥ 3 with d = 2, N = 3 not jointly true. This establishes the uniform bound F(d,N) ≤ C(d,N). Checking the asymptotic complexity is much more straightforward. Consulting equations (12) and (13) shows that F(d, n) = O(dN ) whilst C(d,N) = Ω(NdN ). And in fact as ( d+k−1\nk\n) ≤ dk then\nequation (9) demonstrates that C(d,N) = O(NdN )." }, { "heading": "A.2 LOGSIGNATURE BASES", "text": "We move on to describing our new more efficient basis for the logsignature." }, { "heading": "A.2.1 WORDS, LYNDON WORDS, AND LYNDON BRACKETS", "text": "LetA = {a1, . . . , ad} be a set of d letters. LetA+N be the set of all words in these letters, of length between 1 and N inclusive. For example a1a4 ∈ A+N is a word of length two. Impose the order a1 < a2 < · · · < ad on A, and extend it to the lexicographic order on words in A+N of the same length as each other, so that for example a1a2 < a1a3 < a2a1. Then a Lyndon word (Lalonde & Ram, 1995) is a word which comes earlier in lexicographic order than any of its rotations, where rotation corresponds to moving some number of letters from the start of the word to the end of the word. For example a2a2a3a4 is a Lyndon word, as it is lexicographically earlier than a2a3a4a2, a3a4a2a2 and a4a2a2a3. Meanwhile a2a2 is not a Lyndon word, as it is not lexicographically earlier than a2a2 (which is a rotation). Denote by L ( A+N ) the set of all Lyndon words of length between 1 and N .\nGiven any Lyndon word w1 · · ·wn with n ≥ 2 and wi ∈ A, we may consider its longest Lyndon suffix; that is, the smallest j > 1 for which wj · · ·wn is a Lyndon word. (It is guaranteed to exist as wn alone is a Lyndon word.) It is a fact (Lalonde & Ram, 1995) that w1 · · ·wj−1 is then also a Lyndon word. Given a Lyndon word w, we denote by wb its longest Lyndon suffix, and by wa the corresponding prefix.\nConsidering spans with respect to R, let\n[ · , · ] : span(A+N )× span(A+N )→ span(A+N )\nbe the commutator given by [w, z] = wz − zw,\nwhere wz denotes concatenation of words, distributed over the addition, as w and z belong to a span and thus may be linear combinations of words. For example w = 2a1a2 +a1 and z = a1 +a3 gives wz = 2a1a2a1 + 2a1a2a3 + a1a1 + a1a3.\nThen define φ : L ( A+N ) → span(A+N )\nby φ(w) = w if w is a word of only a single letter, and by\nφ(w) = [φ(wa), φ(wb)]\notherwise. For example,\nφ(a1a2a2) = [[a1, a2], a2]\n= [a1a2 − a2a1, a2] = a1a2a2 − 2a2a1a2 + a2a2a1.\nNow extend φ by linearity from L ( A+N ) to span(L ( A+N ) ), so that\nφ : span(L ( A+N ) )→ span(A+N )\nis a linear map between finite dimensional real vector spaces, from a lower dimensional space to a higher dimensional space.\nNext, let ψ : A+N → span(L ( A+N ) )\nbe such that ψ(w) = w if w ∈ L ( A+N ) , and ψ(w) = 0 otherwise. 
Extend ψ by linearity to\nspan(A+N ), so that ψ : span(A+N )→ span(L ( A+N ) )\nis a linear map between finite dimensional real vector spaces, from a higher dimensional space to a lower dimensional space." }, { "heading": "A.2.2 A BASIS FOR SIGNATURES", "text": "Recall that the signature transform maps between spaces as follows.\nSigN : S ( Rd ) → N∏ k=1 ( Rd )⊗k .\nLet {ei | 1 ≤ i ≤ d} be the usual basis for Rd. Then\n{ei1 ⊗ · · · ⊗ eik | 1 ≤ i1, . . . ik ≤ d} is a basis for (Rd)⊗k. An arbitrary element of ∏N\nk=1\n( Rd )⊗k\nmay be written as d∑ i1,...ik=1 αi1,...,ikei1 ⊗ · · · ⊗ eik 1≤k≤N\n(15)\nfor some αi1,...,ik . Then A+N may be used to represent a basis for ∏N\nk=1\n( Rd )⊗k\n. Identify ei1 ⊗ · · · ⊗ eik with ai1 · · · aik . Extend linearly, so as to identify expression (15) with the formal sum of words\nN∑ k=1 d∑ i1,...ik=1 αi1,...,ikai1 · · · aik .\nWith this identification,\nspan(A+N ) ∼= N∏\nk=1\n( Rd )⊗k\n(16)" }, { "heading": "A.2.3 BASES FOR LOGSIGNATURES", "text": "Suppose we have some x ∈ S ( Rd ) . Using the identification in equation (16), then we may attempt\nto seek some x ∈ span(L ( A+N ) ) such that\nφ(x) = log ( SigN (x) ) . (17)\nThis is an overdetermined linear system. As a matrix φ is tall and thin. However it turns out that image (log) = image (φ) and moreover there exists a unique solution (Reizenstein & Graham, 2018). (That it is an overdetermined system is typically the point of the logsignature transform over the signature transform, as it then represents the same information in less space.)\nIf x = ∑\n`∈L(A+N ) α``, with α` ∈ R, then by linearity∑ `∈L(A+N ) α`φ(`) = log ( SigN (x) ) ,\nso that φ(L ( A+N ) ) is a basis, called the Lyndon basis, of image (log). When calculating the logsignature transform in a computer, then the collection of α` are a sensible choice for representing the result, and indeed, this is what is done by iisignature. See Reizenstein & Graham (2018) for details of this procedure.\nHowever, it turns out that this is unnecessarily expensive. In deep learning, it is typical to apply a learnt linear transformation after a nonlinearity - in which case we largely do not care in what basis we represent the logsignature, and it turns out that we can find a more efficient one.\nThe Lyndon basis exhibits a particular triangularity property (Reutenauer, 1993, Theorem 5.1), (Reizenstein, 2019, Theorem 32), meaning that for all ` ∈ L ( A+N ) , then φ(`) has coefficient zero for any Lyndon word lexicographically earlier than `. This property has already been exploited by iisignature to solve (17) efficiently, but we can do better: it means that\nψ ◦ φ : span ( L ( A+N )) → span ( L ( A+N ))\nis a triangular linear map, and so in particular it is invertible, and defines a change of basis; it is this alternate basis that we shall use instead. Instead of seeking x as in equation (17), we may now instead seek z ∈ span ( L ( A+N )) such that\n(φ ◦ (ψ ◦ φ)−1)(z) = log ( SigN (x) ) .\nBut now by simply applying ψ to both sides: z = ψ ( log ( SigN (x) )) .\nThis is now incredibly easy to compute. Once log ( SigN (x) ) has been computed, and interpreted as in equation (16), then the operation of ψ is simply to extract the coefficients of all the Lyndon words, and we are done." }, { "heading": "B LIBTORCH VS CUDA", "text": "LibTorch is the C++ equivalent to the PyTorch library. GPU support in Signatory was provided by using the operations provided by LibTorch.\nIt was a deliberate choice not to write custom CUDA kernels. The reason for this is as follows. 
We have to make a choice between distributing source code and distributing precompiled binaries. If we distribute source code, then we rely on users being able to compile CUDA, which is far from a guarantee.\nMeanwhile, distributing precompiled binaries is unfortunately not feasible on Linux. C/C++ extensions for Python are typically compiled for Linux using the ‘manylinux’ specification, and indeed PyPI will only host binaries claiming to be compiled according to this specification. Unfortunately, based on our inspection of its build scripts, PyTorch appears not to conform to this specification. It instead compiles against a later version of Centos than is supported by manylinux, and then subsequently modifies things so as to seem compatible with the manylinux specification.\nUnpicking precisely how PyTorch does this so that we might duplicate the necessary functionality (as we must necessarily remain compatible with PyTorch as well) was judged a finickity task full of hard-to-test edge cases, that is an implementation detail of PyTorch that should not be relied upon, and that may not remain stable across future versions." }, { "heading": "C BACKPROPAGATION", "text": "Backpropagation is calculated in the usual way, mathematically speaking, by treating the signature and logsignature transforms as a composition of differentiable primitives, as discussed in Section 2.2.\nThe backpropagation computations are handwritten, and are not generated autodifferentiably. This improves the speed of the computation by using C++ primitives, rather than high-level tensors." }, { "heading": "C.1 REVERSIBILITY", "text": "Moreover, it allows us to exploit a reversibility property of the signature (Reizenstein, 2019). When backpropagating through any forward operation, then typically the the forward results are stored in memory, as these are used in the backward pass.\nHowever, recall the grouplike structure of the signature; in particular this means that\nSigN ((x1, . . . , xL−1)) = Sig N ((x1, . . . , xL)) Sig N ((xL−1, xL)) −1.\n= SigN ((x1, . . . , xL)) Sig N ((xL, xL−1)). (18)\nConsider the case of SigN ((x1, . . . , xL)) by iterating through equation (3) from left to right. Reversibility means we do not need to store the intermediate computations SigN ((x1, . . . , xj)): given the final SigN ((x1, . . . , xL)), we can recover SigN ((x1, . . . , xj)) in the order that they are needed in the backward pass by repeatedly applying equation (18).\nWe remark in Section 2.5 that the signature may be interpreted as the solution to a differential equation. This recomputation procedure actually corresponds to the adjoint method for backpropagating through a differential equation, as popularised in machine learning via Chen et al. (2018).\nImportantly however, this does not face reconstruction errors in the same way as neural differential equations (Gholami et al., 2019). Because the driving path f is taken to be piecewise affine in Definition 4, then the differential equation defining the signature may be solved exactly, without numerical approximations.\nEquation (18) uses the same basic operations as the forward operation, and can be computed using the same subroutines, including the fused multiply-exponentiate." }, { "heading": "C.2 SPEED VERSUS MEMORY TRADE-OFFS", "text": "The reversibility procedure just described introduces the additional cost of recomputing the path (rather than just holding it in memory). 
In principle this recomputation need not be performed, by instead holding partial results in memory.\nFor simplicity we do not offer this as an alternative in Signatory. Signature-based techniques are often applied to long or high-frequency data (Lyons et al., 2014; Morrill et al., 2020b), for which the large size of multiple partially computed signatures can easily become a memory issue. Nonetheless this represents an opportunity for further work." }, { "heading": "C.3 PARALLELISM", "text": "The use of parallelism in the gradient computation depends upon whether reversibility, as discussed above, is used.\nConsider first the case in which reversibility is not used, and all intermediate results are held in memory. As discussed in Section 5.1, the forward operation may be computed in parallel as a reduction. The computation graph (within the signature computation) then looks like a balanced tree, and so the backward operation through this computation graph may be performed in parallel as well.\nHowever, if reversibility is used then only the final SigN ((x1, . . . , xL)) is held in memory, and the intermediate computations necessary to backpropagate in parallel are not available.\nAs Signatory uses reversibility, backpropagation is not performed in parallel.\nThis represents an opportunity for further work, but practically speaking we expect that its impact is only moderate. Backpropagation is typically performed as part of a training procedure over batches of data; thus the available parallelism may already be saturated by parallelism over the batch, and by the intrinsic parallelism available within each primitive operation." }, { "heading": "D FURTHER BENCHMARKS", "text": "" }, { "heading": "D.1 CODE FOR REPRODUCIBILITY", "text": "The benchmarks may be reproduced with the following code on a Linux system. First we install the necessary packages.\npip install numpy==1.18.0 matplotlib==3.0.3 torch==1.5.0\npip install iisignature==0.24 esig==0.6.31 signatory==1.2.1.1.5.0\ngit clone https://github.com/patrick-kidger/signatory.git\ncd signatory\nNote that numpy must be installed before iisignature, and PyTorch must be installed before Signatory. The unusually long version number for Signatory is necessary to specify both the version of Signatory, and the version of PyTorch that it is for. The git clone is necessary as the benchmarking code is not distributed via pip.\nNow run\npython command.py benchmark -help\nfor further details on how to run any particular benchmark. For example,\npython command.py benchmark -m time -f sigf -t channels -o graph\nwill reproduce Figure 1a." }, { "heading": "D.2 MEMORY BENCHMARKS", "text": "Our benchmark scripts offer some limited ability to benchmark memory consumption, via the -m memory flag to the benchmark scripts.\nThe usual approach to such benchmarking, using valgrind's massif, necessarily includes measuring the set-up code. As this includes loading both the Python interpreter and PyTorch, measuring the memory usage of our code becomes tricky.\nAs such we use an alternate method, in which the memory usage is sampled at intervals, using the Python package memory_profiler, which may be installed via pip install memory_profiler. This in turn has the limitation that it may miss a peak in memory usage; for small calculations it may miss the entire calculation.
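A hedged sketch of this sampling-based approach (the memory_usage helper and its tuple-of-(function, args, kwargs) calling convention are from the memory_profiler package; the particular workload shown is an assumption, not the benchmark scripts' exact one):

import torch
import signatory
from memory_profiler import memory_usage

def workload():
    path = torch.rand(32, 128, 4)
    return signatory.signature(path, 7)

# Sample memory every 10 ms while the workload runs; short-lived peaks
# between samples can still be missed.
samples = memory_usage((workload, (), {}), interval=0.01)
print(max(samples))  # peak observed usage, in MiB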
Furthermore, the values reported are inconsistent with those reported in Reizenstein & Graham (2018).\nNonetheless, when compared against iisignature using memory_profiler, on larger computations where peaks are less likely to go unobserved, Signatory typically uses roughly an order of magnitude less memory. However, due to the limitations above, we have chosen not to report quantitative memory benchmarks here." }, { "heading": "D.3 SIGNATURE TRANSFORM BENCHMARKS", "text": "The precise values of the points of Figures 1 and 2 are shown in Tables 1–4.\nFor convenience, the ratio between the speed of Signatory and the speed of iisignature is also shown." }, { "heading": "D.4 LOGSIGNATURE TRANSFORM BENCHMARKS", "text": "See Figure 4 for the graphs of the benchmarks for the logsignature transform.\nThe computer and runtime environment used were as described in Section 6.\nWe observe similar behaviour to the benchmarks for the signature transform: iisignature is slightly faster for some very small computations, but as problem size increases, Signatory swiftly overtakes iisignature, and is orders of magnitude faster for larger computations.\nThe precise values of the points on these graphs are shown in Tables 5–8. Times are given in seconds. Also shown is the ratio between the speed of Signatory and the speed of iisignature. A dash indicates that esig does not support that operation." }, { "heading": "D.5 SINGLE-ELEMENT-BATCH BENCHMARKS", "text": "The benchmarks so far considered were for a batch of samples (of size 32). Whilst this is of particular relevance for training, it is sometimes less relevant for inference. We now repeat all the previous benchmarks (forward and backward through both signature and logsignature, varying both depth and channels), except that the batch dimension is reduced to size 1. See Figures 5 and 6. Numerical values are presented in Tables 9–16.\nHere we see on very small problems that iisignature now outperforms Signatory by about a millisecond, but that once again Signatory overtakes iisignature on reasonably-sized problems, and is still orders of magnitude faster on larger problems.\nWe do not regard the performance on very small single-element problems as a drawback of Signatory. If performing very few very small calculations, then the difference of a millisecond is irrelevant. If performing very many very small calculations, then these can typically be batched together." } ]
2021
null
SP:42107a5481baa3bdf72b965d0db08ef92b78a92f
[ "This paper proposes pretraining language model for e-commerce domain. Specifically, the authors design five pretraining objectives to incorporate various domain knowledge into the the models with an encoder-decoder architecture. When further finetuned on language understanding and generation tasks in the e-commerce domain, the proposed models named K-PLUG outperforms the existing baseline models including those pretrained on general domains. The paper is generally easy to follow. Designs of the pretraining objectives are reasonable and empirically effective. Experiments are solid and convincing." ]
Existing pre-trained language models (PLMs) have demonstrated the effectiveness of self-supervised learning for a broad range of natural language processing (NLP) tasks. However, most of them are not explicitly aware of domain-specific knowledge, which is essential for downstream tasks in many domains, such as tasks in e-commerce scenarios. In this paper, we propose K-PLUG, a knowledge-injected pre-trained language model based on the encoder-decoder transformer that can be transferred to both natural language understanding and generation tasks. We verify our method in a diverse range of e-commerce scenarios that require domain-specific knowledge. Specifically, we propose five knowledge-aware self-supervised pre-training objectives to formulate the learning of domain-specific knowledge, including e-commerce domain-specific knowledge bases, aspects of product entities, categories of product entities, and unique selling propositions of product entities. K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board, which demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks. The code, data, and models will be publicly available.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Meng Chen", "Ruixue Liu", "Lei Shen", "Shaozu Yuan", "Jingyan Zhou", "Youzheng Wu", "Xiaodong He", "Bowen Zhou" ], "title": "The jddc corpus: A large-scale multi-turn chinese dialogue dataset for e-commerce customer service", "venue": "In Proceedings of The 12th Language Resources and Evaluation Conference (LREC),", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL),", "year": 2019 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, (NeurIPS),", "year": 2019 }, { "authors": [ "Günes Erkan", "Dragomir R Radev" ], "title": "LexRank: Graph-based lexical centrality as salience in text summarization", "venue": "Journal of Artificial Intelligence Research (JAIR),", "year": 2004 }, { "authors": [ "Joseph L Fleiss" ], "title": "Measuring nominal scale agreement among many raters", "venue": "Psychological bulletin,", "year": 1971 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "Gaussian error linear units (gelus)", "venue": "arXiv preprint arXiv:1606.08415,", "year": 2016 }, { "authors": [ "Pei Ke", "Haozhe Ji", "Siyang Liu", "Xiaoyan Zhu", "Minlie Huang" ], "title": "SentiLR: Linguistic knowledge enhanced language representation for sentiment analysis", "venue": null, "year": 1911 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Anne Lauscher", "Ivan Vulić", "Edoardo Maria Ponti", "Anna Korhonen", "Goran Glavaš" ], "title": "Informing unsupervised pretraining with external linguistic knowledge", "venue": null, "year": 1909 }, { "authors": [ "Yoav Levine", "Barak Lenz", "Or Dagan", "Ori Ram", "Dan Padnos", "Or Sharir", "Shai Shalev-Shwartz", "Amnon Shashua", "Yoav Shoham" ], "title": "SenseBERT: Driving some sense into BERT", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Veselin Stoyanov", "Luke Zettlemoyer" ], "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Haoran Li", "Peng Yuan", "Song Xu", "Youzheng Wu", "Xiaodong He", "Bowen Zhou" ], "title": "Aspect-aware multimodal summarization for chinese e-commerce products", "venue": "In Proceedings of the Thirty-Forth AAAI Conference on Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ 
"Chin-Yew Lin", "Eduard Hovy" ], "title": "Automatic evaluation of summaries using n-gram co-occurrence statistics", "venue": "In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (NAACL),", "year": 2003 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Robert L Logan IV", "Samuel Humeau", "Sameer Singh" ], "title": "Multimodal attribute extraction", "venue": "arXiv preprint arXiv:1711.11118,", "year": 2017 }, { "authors": [ "Xusheng Luo", "Luxin Liu", "Yonghua Yang", "Le Bo", "Yuanpeng Cao", "Jinghang Wu", "Qiang Li", "Keping Yang", "Kenny Q Zhu" ], "title": "AliCoCo: Alibaba e-commerce cognitive concept net", "venue": "In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data (SIGMOD),", "year": 2020 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL),", "year": 2018 }, { "authors": [ "Matthew E. Peters", "Mark Neumann", "Robert Logan", "Roy Schwartz", "Vidur Joshi", "Sameer Singh", "Noah A. Smith" ], "title": "Knowledge enhanced contextual word representations", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Abigail See", "Peter J. Liu", "Christopher D. 
Manning" ], "title": "Get to the point: Summarization with pointer-generator networks", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2017 }, { "authors": [ "Heung-Yeung Shum", "Xiao-dong He", "Di Li" ], "title": "From Eliza to Xiaoice: challenges and opportunities with social chatbots", "venue": "Frontiers of Information Technology & Electronic Engineering,", "year": 2018 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "MASS: masked sequence to sequence pre-training for language generation", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yukun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "ERNIE: Enhanced representation through knowledge integration", "venue": null, "year": 1904 }, { "authors": [ "Chongyang Tao", "Wei Wu", "Can Xu", "Wenpeng Hu", "Dongyan Zhao", "Rui Yan" ], "title": "One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Hao Tian", "Can Gao", "Xinyan Xiao", "Hao Liu", "Bolei He", "Hua Wu", "Haifeng Wang", "Feng Wu" ], "title": "SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Ruize Wang", "Duyu Tang", "Nan Duan", "Zhongyu Wei", "Xuanjing Huang", "Cuihong Cao", "Daxin Jiang", "Ming Zhou" ], "title": "K-Adapter: Infusing knowledge into pre-trained models with adapters", "venue": null, "year": 2002 }, { "authors": [ "Xiaozhi Wang", "Tianyu Gao", "Zhaocheng Zhu", "Zhiyuan Liu", "Juanzi Li", "Jian Tang" ], "title": "KEPLER: A unified model for knowledge embedding and pre-trained language representation", "venue": null, "year": 1911 }, { "authors": [ "Yu Wu", "Wei Wu", "Chen Xing", "Ming Zhou", "Zhoujun Li" ], "title": "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2017 }, { "authors": [ "Wenhan Xiong", "Jingfei Du", "William Yang Wang", "Veselin Stoyanov" ], "title": "Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model", "venue": "In 8th International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Huimin Xu", "Wenting Wang", "Xin Mao", "Xinyu Jiang", "Man Lan" ], "title": "Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "Liang Xu", "Xuanwei Zhang", "Lu Li", "Hai Hu", "Chenjie Cao", "Weitang Liu", "Junyi Li", "Yudong Li", "Kai Sun", 
"Yechen Xu" ], "title": "CLUE: A chinese language understanding evaluation", "venue": null, "year": 2004 }, { "authors": [ "Yu Yan", "Weizhen Qi", "Yeyun Gong", "Dayiheng Liu", "Nan Duan", "Jiusheng Chen", "Ruofei Zhang", "Ming Zhou" ], "title": "ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training", "venue": null, "year": 2001 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "XLNET: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Chunyuan Yuan", "Wei Zhou", "Mingming Li", "Shangwen Lv", "Fuqing Zhu", "Jizhong Han", "Songlin Hu" ], "title": "Multi-hop selector network for multi-turn response selection in retrieval-based chatbots", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jingqing Zhang", "Yao Zhao", "Mohammad Saleh", "Peter J Liu" ], "title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Zhengyan Zhang", "Xu Han", "Zhiyuan Liu", "Xin Jiang", "Maosong Sun", "Qun Liu" ], "title": "ERNIE: Enhanced language representation with informative entities", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "Zhuosheng Zhang", "Jiangtong Li", "Pengfei Zhu", "Hai Zhao", "Gongshen Liu" ], "title": "Modeling multi-turn conversation with deep utterance aggregation", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics (COLING),", "year": 2018 }, { "authors": [ "Xiangyang Zhou", "Lu Li", "Daxiang Dong", "Yi Liu", "Ying Chen", "Wayne Xin Zhao", "Dianhai Yu", "Hua Wu" ], "title": "Multi-turn response selection for chatbots with deep attention matching network", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Tiangang Zhu", "Yue Wang", "Haoran Li", "Youzheng Wu", "Xiaodong He", "Bowen Zhou" ], "title": "Multimodal joint attribute prediction and value extraction for e-commerce product", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pre-trained language models (PLMs), such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), have made remarkable breakthroughs in many natural language understanding (NLU) tasks, including text classification, reading comprehension, and natural language inference. These models are trained on large-scale text corpora with self-supervision based on either bi-directional or auto-regressive pre-training. Equally promising performances have been achieved in natural language generation (NLG) tasks, such as machine translation and text summarization, by MASS (Song et al., 2019), UniLM (Dong et al., 2019), BART (Lewis et al., 2020), T5 (Raffel et al., 2019), PEGASUS (Zhang et al., 2020), and ProphetNet (Yan et al., 2020). In contrast, these approaches adopt Transformerbased sequence-to-sequence models to jointly pre-train for both the encoder and the decoder.\nWhile these PLMs can learn rich semantic patterns from raw text data and thereby enhance downstream NLP applications, many of them do not explicitly model domain-specific knowledge. As a result, they may not be as sufficient for capturing human-curated or domain-specific knowledge that is necessary for tasks in a certain domain, such as tasks in e-commerce scenarios. In order to overcome this limitation, several recent studies have proposed to enrich PLMs with explicit knowledge, including knowledge base (KB) (Zhang et al., 2019; Peters et al., 2019; Xiong et al., 2020; Wang et al., 2019; 2020), lexical relation (Lauscher et al., 2019; Wang et al., 2020), word sense (Levine et al., 2020), part-of-speech tag (Ke et al., 2019), and sentiment polarity (Ke et al., 2019; Tian et al., 2020). However, these methods only integrate domain-specific knowledge into the encoder, and the decoding process in many NLG tasks benefits little from these knowledge.\n1Our code is available at https://github.com/ICLR21Anonymous/knowledge_pretrain.\nTo mitigate this problem, we propose a Knowledge-injected Pre-trained Language model that is suitable for both Natural Language Understanding and Generation (K-PLUG). Different from existing knowledge-injected PLMs, K-PLUG integrates knowledge into pre-training for both the encoder and the decoder, and thus K-PLUG can be adopted to both downstream knowledge-driven NLU and NLG tasks. We verify the performance of the proposed method in various e-commerce scenarios. In the proposed K-PLUG, we formulate the learning of four types of domain-specific knowledge: e-commerce domain-specific knowledge-bases, aspects of product entities, categories of product entities, and unique selling propositions (USPs) (Garrett, 1961) of product entities. Specifically, e-commerce KB stores standardized product attribute information, product aspects are features that play a crucial role in understanding product information, product categories are the backbones for constructing taxonomies for organization, and USPs are the essence of what differentiates a product from its competitors. 
K-PLUG integrates these types of knowledge into a unified PLM, enhancing performance on various language understanding and generation tasks.
To effectively learn these four types of valuable domain-specific knowledge in K-PLUG, we propose five new pre-training objectives: knowledge-aware masked language model (KMLM), knowledge-aware masked sequence-to-sequence (KMS2S), product entity aspect boundary detection (PEABD), product entity category classification (PECC), and product entity aspect summary generation (PEASG). Among these objectives, KMLM and KMS2S learn to predict masked single and multiple tokens, respectively, that are associated with domain-specific knowledge rather than general information; PEABD determines the boundaries between descriptions of different product aspects given full product information; PECC identifies the product category that each product belongs to; and PEASG generates a summary for each individual product aspect based on the entire product description.
After pre-training K-PLUG, we fine-tune it on three domain-specific NLP tasks, namely, e-commerce knowledge base completion, abstractive product summarization, and multi-turn dialogue. The results show that K-PLUG significantly outperforms comparative models on all these tasks.
Our main contributions can be summarized as follows:
• We present K-PLUG, which learns domain-specific knowledge for both the encoder and the decoder in a pre-trained language model framework, benefiting both NLG and NLU tasks.
• We formulate the learning of four types of domain-specific knowledge in e-commerce scenarios: e-commerce domain-specific knowledge bases, aspects of product entities, categories of product entities, and unique selling propositions of product entities, which provide critical information for many applications in the domain of e-commerce. Specifically, five self-supervised objectives are proposed to learn these four types of knowledge in a unified PLM.
• Our proposed model exhibits clear effectiveness in many downstream tasks in the e-commerce scenario, including e-commerce KB completion, abstractive product summarization, and multi-turn dialogue." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 PLMS IN GENERAL", "text": "Unsupervised language model pre-training has been successfully applied to many NLP tasks. ELMo (Peters et al., 2018) learns contextual representations based on a bidirectional LM. GPT (Radford et al., 2018) predicts tokens based on the context on the left-hand side. BERT (Devlin et al., 2019) adopts a bi-directional LM to predict masked tokens. XLNet (Yang et al., 2019) predicts masked tokens in a permuted order through an autoregressive method. MASS (Song et al., 2019) pre-trains a sequence-to-sequence LM to recover a span of masked tokens. UniLM (Dong et al., 2019) combines bidirectional, unidirectional, and sequence-to-sequence LMs. T5 (Raffel et al., 2019) and BART (Lewis et al., 2020) present denoising sequence-to-sequence pre-training. PEGASUS (Zhang et al., 2020) pre-trains with a gap-sentence generation objective. While human-curated or domain-specific knowledge is essential for downstream knowledge-driven tasks, these methods do not explicitly consider external knowledge like our proposed K-PLUG.
[Figure 1, caption fragment: ... aspects of product entities, categories of product entities, and unique selling propositions of product entities. Pre-training objectives include knowledge-aware masked language model (KMLM), knowledge-aware masked sequence-to-sequence (KMS2S), product entity aspect boundary detection (PEABD), product entity category classification (PECC), and product entity aspect summary generation (PEASG).]
" }, { "heading": "2.2 INJECTING KNOWLEDGE INTO PLMS", "text": "Recent work investigates how to incorporate knowledge into PLMs for NLU. ERNIE (Sun et al., 2019) enhances language representation with entity/phrase-level masking. ERNIE (Zhang et al., 2019) identifies and links entity mentions in texts to their corresponding entities in a KB. Similar to ERNIE (Zhang et al., 2019), KnowBERT (Peters et al., 2019) injects KBs into a PLM. Xiong et al. (2020) leverage an entity replacement pre-training objective to learn better representations for entities. KEPLER (Wang et al., 2019) adopts a knowledge embedding objective in pre-training. Besides, SKEP (Tian et al., 2020), SenseBERT (Levine et al., 2020), SentiLR (Ke et al., 2019), and K-ADAPTER (Wang et al., 2020) propose to integrate sentiment knowledge, word sense, sentiment polarity, and lexical relations into PLMs, respectively. However, most of these studies focus on integrating knowledge for language understanding tasks, while work on utilizing domain-specific knowledge in pre-training for language generation tasks is limited. Inspired by these works, we construct K-PLUG, which injects domain-specific knowledge into a PLM for both NLU and NLG tasks." }, { "heading": "3 KNOWLEDGE-INJECTED PRE-TRAINING", "text": "In this section, we explain the data used to pre-train K-PLUG, its model architecture, and our pre-training objectives." }, { "heading": "3.1 DATA PREPARATION", "text": "We collect the pre-training data from a mainstream Chinese e-commerce platform2, which contains approximately 25 million textual product descriptions and covers 40 product categories. With an average length of 405 tokens, these product descriptions constitute a corpus with a size of 10B Chinese characters. Each product description consists of information on 10.7 product aspects on average, and each product aspect is accompanied by a summary highlighting its prominent features, as shown in Figure 1(a). Additionally, the e-commerce KB and USPs (further explained below) used in our pre-training data are as specified by the e-commerce platform and its online stores.
2https://www.jd.com/" }, { "heading": "3.2 MODEL ARCHITECTURE", "text": "K-PLUG adopts the standard sequence-to-sequence Transformer architecture (Vaswani et al., 2017), consisting of a 6-layer encoder and a 6-layer decoder, as in Song et al. (2019). We set the size of the hidden vectors to 768 and the number of self-attention heads to 12. More details about the experimental settings are in the appendix." }, { "heading": "3.3 KNOWLEDGE FORMULATION AND PRE-TRAINING OBJECTIVES", "text": "We formulate the learning of four types of knowledge in a unified PLM: e-commerce KB, aspects of product entities, categories of product entities, and USPs of product entities. Specifically, the e-commerce KB stores standardized product attribute information, e.g., (Material: Cotton) and (Collar Type: Pointed Collar). It provides details about the products (Logan IV et al., 2017). Aspects of product entities are features of a product, such as the sound quality of a stereo speaker, etc. (Li et al., 2020).
Categories of product entities such as Clothing and Food are widely used by e-commerce platforms to organize their products so as to present structured offerings to their customers (Luo et al., 2020; Dong et al., 2020). USPs of product entities are the essence of what differentiates a product from its competitors (Garrett, 1961). For example, a stereo speaker’s USP exhibiting its supreme sound quality could be “crystal clear stereo sound”. An effective USP immediately motivates the purchasing behavior of potential buyers.
We propose and evaluate five novel self-supervised pre-training objectives to learn the above-mentioned four types of knowledge in the K-PLUG model (see Figure 1)." }, { "heading": "Knowledge-aware Masked Language Model (KMLM)", "text": "Inspired by BERT (Devlin et al., 2019), we adopt the masked language model (MLM) to train the Transformer encoder as one of our pre-training objectives, which learns to predict the masked tokens in the source sequence (e.g., “The company is [MASK] at the foot of a hill.”). Similar to BERT, we mask 15% of all tokens in a text sequence; 80% of the masked tokens are replaced with the [MASK] token, 10% with a random token, and 10% left unchanged. Particularly, given an original text sequence $\mathbf{x} = (x_1, \ldots, x_m, \ldots, x_M)$ with $M$ tokens, a masked sequence is produced by masking $x_m$ through one of the three ways explained above, e.g., replacing $x_m$ with [MASK] to create $\tilde{\mathbf{x}} = (x_1, \ldots, \text{[MASK]}, \ldots, x_M)$. MLM aims to model the conditional likelihood $P(x_m \mid \tilde{\mathbf{x}})$, and the loss function is:
$\mathcal{L}_{\mathrm{MLM}} = \log P(x_m \mid \tilde{\mathbf{x}})$ (1)
The major difference from BERT is that our KMLM prioritizes knowledge tokens, which contain knowledge regarding product attributes and USPs, when selecting positions to mask and, in the case that the knowledge tokens make up less than 15% of all tokens, randomly picks non-knowledge tokens to complete the masking. A sketch of this masking procedure is given below." }, { "heading": "Knowledge-aware Masked Sequence-to-Sequence (KMS2S)", "text": "K-PLUG inherits the strong ability of language generation from the masked sequence-to-sequence (MS2S) objective. The encoder takes a sentence with a masked fragment (several consecutive tokens) as the input, and the decoder predicts this masked fragment conditioned on the encoder representations (e.g., “The company [MASK] [MASK] [MASK] the foot of a hill.”).
Given a text sequence $\mathbf{x} = (x_1, \ldots, x_u, \ldots, x_v, \ldots, x_M)$, a masked sequence $\tilde{\mathbf{x}} = (x_1, \ldots, \text{[MASK]}, \ldots, \text{[MASK]}, \ldots, x_M)$ is produced by replacing the span $x_{u:v}$, ranging from $x_u$ to $x_v$, with the [MASK] token. MS2S aims to model $P(x_{u:v} \mid \tilde{\mathbf{x}})$, which can be further factorized into a product $P(x_{u:v} \mid \tilde{\mathbf{x}}) = \prod_{t=u}^{v} P(x_t \mid \tilde{\mathbf{x}})$ according to the chain rule. The loss function is:
$\mathcal{L}_{\mathrm{MS2S}} = \sum_{t=u}^{v} \log P(x_t \mid \tilde{\mathbf{x}})$ (2)
We set the length of the masked span to 30% of the length of the original text sequence. Similar to KMLM, KMS2S prioritizes the masking of text spans that cover knowledge tokens." },
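The following is a minimal sketch of the knowledge-aware masking described in the KMLM paragraph above; the 15% ratio, the 80/10/10 replacement split, and the priority given to knowledge tokens follow the text, while the function name, the representation of knowledge positions, and the exact sampling order are assumptions for illustration.

import random

def kmlm_mask(tokens, knowledge_positions, vocab, mask_ratio=0.15):
    # Select 15% of positions, prioritizing knowledge tokens (product
    # attributes and USPs); if they cover fewer positions than needed,
    # fill the rest with randomly chosen non-knowledge positions.
    n_mask = max(1, int(len(tokens) * mask_ratio))
    knowledge = list(knowledge_positions)
    random.shuffle(knowledge)
    chosen = knowledge[:n_mask]
    if len(chosen) < n_mask:
        others = [i for i in range(len(tokens)) if i not in knowledge_positions]
        random.shuffle(others)
        chosen += others[:n_mask - len(chosen)]
    masked = list(tokens)
    for i in chosen:
        r = random.random()
        if r < 0.8:                  # 80%: replace with [MASK]
            masked[i] = "[MASK]"
        elif r < 0.9:                # 10%: replace with a random vocabulary token
            masked[i] = random.choice(vocab)
        # remaining 10%: keep the original token unchanged
    return masked, chosen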
{ "heading": "Product Entity Aspect Boundary Detection (PEABD)", "text": "A product description usually contains multiple product entity aspects. Existing work (Li et al., 2020) proves that product aspects influence the quality of product summaries from the views of importance, non-redundancy, and readability, which are not directly taken into account in language modeling. In order to train a model that understands product aspects, we leverage the PEABD objective to detect boundaries between the product entity aspects. It is essentially a sequence labeling task based on the representations of K-PLUG’s top encoder layer.
Given a text sequence $\mathbf{x} = (x_1, \ldots, x_M)$, the encoder of K-PLUG outputs a sequence $\mathbf{h} = (h_1, \ldots, h_M)$, which is fed into a softmax layer that generates a probability sequence $\mathbf{y}$. The loss function is:
$\mathcal{L}_{\mathrm{PEABD}} = -\sum_{t} \hat{y}_t \log y_t$ (3)
where $\hat{y}_t \in \{0, 1\}$ are the ground-truth labels for the aspect boundary detection task." }, { "heading": "Product Entity Category Classification (PECC)", "text": "Product entity categories are the backbones for constructing taxonomies (Luo et al., 2020; Dong et al., 2020). Each product description document corresponds to one of the 40 categories included in our corpus, such as Clothing, Bags, Home Appliances, Shoes, Foods, etc. Identifying the product entity categories accurately is the prerequisite for creating an output that is consistent with the input.
Given a text sequence $\mathbf{x} = (x_1, \ldots, x_M)$, a softmax layer outputs the classification score $y$ based on the representation of the encoder classification token, [CLS]. The loss function maximizes the model’s probability of outputting the true product entity category as follows:
$\mathcal{L}_{\mathrm{PECC}} = -\hat{y} \log y$ (4)
where $\hat{y}$ is the ground-truth product category." }, { "heading": "Product Entity Aspect Summary Generation (PEASG)", "text": "Inspired by PEGASUS (Zhang et al., 2020), which proves that using a pre-training objective that more closely resembles the downstream task leads to better and faster fine-tuning performance, we propose a PEASG objective to generate a summary from the description of a product entity aspect. Unlike extracted gap-sentences generation in PEGASUS, our method constructs a more realistic summary generation task because the aspect summary naturally exists in our pre-training data.
Given an aspect description sequence $\mathbf{x} = (x_1, \ldots, x_M)$ and an aspect summary sequence $\mathbf{y} = (y_1, \ldots, y_T)$, PEASG aims to model the conditional likelihood $P(\mathbf{y} \mid \mathbf{x})$. The loss function is:
$\mathcal{L}_{\mathrm{PEASG}} = \sum_{t} \log P(y_t \mid \mathbf{x}, \mathbf{y}_{<t})$ (5)" },
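To make the overall pre-training objective concrete, the following sketch combines the five losses from Eqs. (1)-(5). The unweighted sum and the loss-method interface on the model object are assumptions for illustration, since the text does not specify how the objectives are balanced.

def kplug_pretraining_loss(model, batch):
    # Five self-supervised objectives sharing the same encoder-decoder.
    l_kmlm  = model.kmlm_loss(batch)    # masked token prediction, Eq. (1)
    l_kms2s = model.kms2s_loss(batch)   # masked span generation, Eq. (2)
    l_peabd = model.peabd_loss(batch)   # aspect boundary tagging, Eq. (3)
    l_pecc  = model.pecc_loss(batch)    # 40-way category classification, Eq. (4)
    l_peasg = model.peasg_loss(batch)   # aspect summary generation, Eq. (5)
    return l_kmlm + l_kms2s + l_peabd + l_pecc + l_peasg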
{ "heading": "4 EXPERIMENTS AND RESULTS", "text": "" }, { "heading": "4.1 PRE-TRAINED MODEL VARIANTS", "text": "To evaluate the effectiveness of pre-training with domain-specific data and with domain-specific knowledge separately, we implement pre-training experiments with two model variants: C-PLUG and E-PLUG, whose configurations are the same as that of K-PLUG.
• C-PLUG is a pre-trained language model with the original objectives of MLM and MS2S, trained with a general pre-training corpus, CLUE (Xu et al., 2020), which contains 30GB of raw text with around 8B Chinese words.
• E-PLUG is a pre-trained language model with the original objectives of MLM and MS2S, trained with our collected e-commerce domain-specific corpus." }, { "heading": "4.2 DOWNSTREAM TASKS", "text": "We fine-tune K-PLUG on three downstream tasks: e-commerce KB completion, abstractive product summarization, and multi-turn dialogue. The e-commerce KB completion task involves the prediction of product attributes and values given product information. The abstractive product summarization task requires the model to generate a product summary from a textual product description. The multi-turn dialogue task aims to output a response by utilizing a multi-turn dialogue context. The domain-specific knowledge we defined in this paper is essential for these tasks." }, { "heading": "4.2.1 E-COMMERCE KB COMPLETION", "text": "Task Definition. E-commerce KB provides abundant product information that is in the form of (product entity, product attribute, attribute value), such as (pid#133443, Material, Copper Aluminum). For the e-commerce KB completion task, the input is a textual product description for a given product, and the output is the product attribute values.
Dataset. We conduct experiments on the MEPAVE dataset (Zhu et al., 2020). This dataset is collected from a major Chinese e-commerce platform and consists of 87,194 instances annotated with the positions of attribute values mentioned in the product descriptions. There are 26 types of product attributes in total, such as Material, Collar Type, Color, etc. The training, validation, and testing sets contain 71,194/8,000/8,000 instances, respectively.
Model. We consider the e-commerce KB completion task as a sequence labeling task that tags the input word sequence $\mathbf{x} = (x_1, \ldots, x_N)$ with the label sequence $\mathbf{y} = (y_1, \ldots, y_N)$ in the BIO format. For example, for the input sentence “A bright yellow collar”, the corresponding labels for “bright” and “yellow” are Color-B and Color-I, respectively, and O for the other tokens. For an input sequence, K-PLUG outputs an encoding representation sequence, and a linear classification layer with softmax predicts the label for each input token based on the encoding representation. A sketch of the BIO labeling scheme is given at the end of this subsection.
Table 1: Results on the MEPAVE dataset (%).
Model      P      R      F1
LSTM       79.68  86.43  82.92
ScalingUp  65.48  93.78  77.12
BERT       78.27  88.62  83.12
JAVE       80.27  89.82  84.78
M-JAVE     83.49  90.94  87.17
C-PLUG     89.79  96.47  93.02
E-PLUG     89.91  96.75  93.20
K-PLUG     93.58  97.92  95.97" }, { "heading": "Baselines.", "text": "" }, { "heading": "Result.", "text": "Table 1 shows the experimental results. We observe that our K-PLUG performs better than the baselines. C-PLUG achieves significantly better performance than BERT, which indicates that MS2S can also benefit the NLU task. E-PLUG outperforms C-PLUG, showing that training with a domain-specific corpus is helpful. K-PLUG further exhibits a 2.51% improvement compared with E-PLUG. In short, we can conclude that the improvement is due to both the domain-specific pre-training data and the knowledge-injected pre-training objectives." },
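As an illustration of the BIO formulation in the Model paragraph above, here is a minimal sketch that converts annotated attribute-value spans into BIO tags; the span representation and function name are assumptions, and the example reproduces the “A bright yellow collar” case from the text.

def spans_to_bio(tokens, spans):
    # spans: list of (start, end, attribute) with end exclusive.
    tags = ["O"] * len(tokens)
    for start, end, attribute in spans:
        tags[start] = attribute + "-B"
        for i in range(start + 1, end):
            tags[i] = attribute + "-I"
    return tags

print(spans_to_bio(["A", "bright", "yellow", "collar"], [(1, 3, "Color")]))
# ['O', 'Color-B', 'Color-I', 'O']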
Abstractive product summarization task is an NLG task that takes the product description as the input and product summary as the output." }, { "heading": "Baselines.", "text": "• LexRank (Erkan & Radev, 2004) is a graph-based extractive summarization method. • Seq2seq (Bahdanau et al., 2015) is a standard seq2seq model with an attention mechanism. • Pointer-Generator (PG) (See et al., 2017) is a seq2seq model with a copying mechanism. • Aspect MMPG (Li et al., 2020) is the-state-of-the-art method for abstractive product sum-\nmarization, taking both textual and visual product information as the input.\nResult. Table 2 shows the experimental results, including ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L) F1 scores (Lin & Hovy, 2003). K-PLUG clearly performs better than other text-based methods. E-commerce knowledge plays a significant role in the abstractive product summarization task, and domain-specific pre-training data and knowledge-injected pre-training objectives both enhance the model. K-PLUG achieves comparable results with the multimodal model, Aspect MMPG. The work of Li et al. (2020) suggests that product images are essential for this task, and we will advance K-PLUG with multimodal information in the future." }, { "heading": "Model Home Applications Clothing Cases&BagsRG-1 RG-2 RG-L RG-1 RG-2 RG-L RG-1 RG-2 RG-L", "text": "" }, { "heading": "4.2.3 MULTI-TURN DIALOGUE", "text": "Task Definition. The multi-turn dialogue task aims to output a response based on the multi-turn dialogue context (Shum et al., 2018). The input for this task is the dialogue context consisting of previous question answering, and the output is the response to the last question.\nDataset. We conduct experiments on two datasets of JDDC (Chen et al., 2020) and ECD (Zhang et al., 2018). JDDC is collected from the conversations between users and customer service staffs from a popular e-commerce website in China and contains 289 different intents, which are the goals of a dialogue, such as updating addresses, inquiring prices, etc, from after-sales assistance. There are 1,024,196 multi-turn sessions and 20,451,337 utterances in total. The average number of turns for each session is 20, and the average tokens per utterance is about 7.4. After pre-processing, the training, validation, and testing sets include 1,522,859/5,000/5,000 (dialogue context, response) pairs, respectively. ECD is collected from another popular e-commerce website in China and covers over 5 types of conversations based on 20 commodities. Additionally, for each ground-truth response, negative responses are provided for discriminative learning. The training, validation, and testing sets include 1,000,000/10,000/10,000 (dialogue context, response) pairs, respectively.\nModel. We test with two types of K-PLUG: retrieval-based K-PLUG on the ECD dataset and generative-based K-PLUG on the JDDC dataset. For the retrieval-based approach, we concatenate the dialogue context and use [SEP] token to separate context and response. The [CLS] representation is fed into the output layer for classification. The generative-based approach is a sequence-to-\nsequence model, which is the same as the model adopted in the abstractive product summarization task.\nBaselines. The baselines also include both the retrieval-based (BM25, CNN, BiLSTM, and BERT) and generative-based approaches. 
Other baselines are as follows.\n• SMN (Wu et al., 2017) matches a response with each utterance in the context.\n• DUA (Zhang et al., 2018) is a deep utterance aggregation model based on the fine-grained context representations.\n• DAM (Zhou et al., 2018) matches a response with the context based using dependency information based on self-attention and cross-attention.\n• IoI (Tao et al., 2019) is a deep matching model by stacking multiple interactions blocks between utterance and response.\n• MSN (Yuan et al., 2019) selects relevant context and generates better context representations with the selected context.\nResult. Table 3 and 4 show the experimental results on the JDDC and ECD datasets, respectively. We report ROUGE-L (RG-L) F1, BLEU, and recall at position k in n candidates (Rn@k). We can observe that, both on the retrieval-based and generative-based tasks, K-PLUG achieves new state-ofthe-art results, and e-commerce knowledge presents consistent improvements. K-PLUG is evidently superior to BERT, possibly due to BERT’s lack of domain-specific knowledge for pre-training with the general MLM objective.\nWe further perform a human evaluation on the JDDC dataset. We randomly choose 100 samples from the test set, and three experienced annotators are involved to determine whether K-PLUG outperforms E-PLUG with respect to (1) relevance between the response and the contexts and (2) readability of the response. The results are shown in Table 5. We can see that the percentage of “Win”, which denotes that the results of K-PLUG is better than E-PLUG, is significantly larger than “Lose” (p-value < 0.01 for t-test). Kappa values (Fleiss, 1971) confirm the consistency for different annotators." }, { "heading": "4.3 ABLATION STUDIES", "text": "To better understand our model, we perform ablation experiments to study the effects of different pre-training objectives.\nResult. The ablation results are shown in Table 6. We can conclude that the lack of any pretraining objective hurts performance across all the tasks. KMS2S is the most effective objective for the abstractive product summarization and generative conversation tasks since this objective is highly close to the essence of NLG. Product-aspect-related objectives, i.e., PEABD and PEASG, contribute much to the abstractive product summarization task, which proves that this task requires comprehensively understanding the product description from the view of product aspects, going beyond individual tokens." }, { "heading": "5 CONCLUSION", "text": "We present a knowledge-injected pre-trained model (K-PLUG) that is a powerful domain-specific language model trained on a large-scale e-commerce corpus designed to capture e-commerce knowledge, including e-commerce KB, product aspects, product categories, and USPs. The pre-training framework combines masked language model and masked seq2seq with novel objectives formulated as product aspect boundary detection, product aspect summary generation, and product category classification tasks. Our proposed model demonstrates strong performances on both natural language understanding and generation downstream tasks, including e-commerce KB completion, abstractive product summarization, and multi-turn dialogue." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EXPERIMENTS SETTINGS FOR PRE-TRAINING", "text": "We adopt GELU activation (Hendrycks & Gimpel, 2016) as in GPT (Radford et al., 2018). 
We use Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5e-4, β1 = 0.9, β2= 0.98, L2 weight decay of 0.01, learning rate warm-up over the first 10,000 steps and linear decay of the learning rate. The dropout probability is 0.1. The maximum sequence length is set to 512 tokens. Pre-training was performed with 4 Telsa V100 GPUs. The pre-training is done within 10 epochs, which takes around 10 days, and the fine-tuning takes up to 1 day. We use the beam search with a beam size of 5 for inference for the NLG tasks. Other hyper-parameters can be found in our code." }, { "heading": "A.2 CASE STUDIES", "text": "We present some examples from the test set of each task, with comparisons of the ground-truth result and the outputs produced by the models of E-PLUG and K-PLUG.\nunique selling propositions of product entities." } ]
2,020
null
SP:970151fd51696294ccd5746783a07d4cfab90054
[ "This submission proposed a motion representation method based on spatio-temporal self-similarity (STSS), which represents each local region as similarities to its neighbors in both spatial and temporal dimension. There are previous works (e.g., Ref[1] , [2], [5] listed here) which utilize STSS for feature extractions, authors claim that this work is the first one to learn STSS representation based on modern CNN architecture. The proposed method is implemented as a neural block, i.e., SELFY, which can be applied into neural architectures and learned end-to-end without additional supervision. On 3 standard human action recognition data sets, Something-Something-V1 & V2, Diving-48, and FineGym, the proposed method achieves quite good empirical results. " ]
Spatio-temporal convolution often fails to learn motion dynamics in videos and thus an effective motion representation is required for video understanding in the wild. In this paper, we propose a rich and robust motion representation method based on spatio-temporal self-similarity (STSS). Given a sequence of frames, STSS represents each local region as similarities to its neighbors in space and time. By converting appearance features into relational values, it enables the learner to better recognize structural patterns in space and time. We leverage the whole volume of STSS and let our model learn to extract an effective motion representation from it. The proposed method is implemented as a neural block, dubbed SELFY, that can be easily inserted into neural architectures and learned end-to-end without additional supervision. With a sufficient volume of the neighborhood in space and time, it effectively captures long-term interaction and fast motion in the video, leading to robust action recognition. Our experimental analysis demonstrates its superiority over previous methods for motion modeling as well as its complementarity to spatio-temporal features from direct convolution. On the standard action recognition benchmarks, Something-Something-V1 & V2, Diving-48, and FineGym, the proposed method achieves the state-of-the-art results.
[]
[ { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Olivier Chapelle", "Mingrui Wu" ], "title": "Gradient descent optimization of smoothed information retrieval metrics", "venue": "Information retrieval,", "year": 2010 }, { "authors": [ "Jeffrey Donahue", "Lisa Anne Hendricks", "Sergio Guadarrama", "Marcus Rohrbach", "Subhashini Venugopalan", "Kate Saenko", "Trevor Darrell" ], "title": "Long-term recurrent convolutional networks for visual recognition and description", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Alexey Dosovitskiy", "Philipp Fischer", "Eddy Ilg", "Philip Hausser", "Caner Hazirbas", "Vladimir Golkov", "Patrick Van Der Smagt", "Daniel Cremers", "Thomas Brox" ], "title": "Flownet: Learning optical flow with convolutional networks", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Lijie Fan", "Wenbing Huang", "Chuang Gan", "Stefano Ermon", "Boqing Gong", "Junzhou Huang" ], "title": "Endto-end learning of motion representation for video understanding", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Linxi Fan", "Shyamal Buch", "Guanzhi Wang", "Ryan Cao", "Yuke Zhu", "Juan Carlos Niebles", "Li FeiFei" ], "title": "Rubiksnet: Learnable 3d-shift for efficient video action recognition", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Christoph Feichtenhofer" ], "title": "X3d: Expanding architectures for efficient video recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Raghav Goyal", "Samira Ebrahimi Kahou", "Vincent Michalski", "Joanna Materzynska", "Susanne Westphal", "Heuna Kim", "Valentin Haenel", "Ingo Fruend", "Peter Yianilos", "Moritz Mueller-Freitag" ], "title": "The” something something” video database for learning and evaluating visual common sense", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. 
IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261,", "year": 2019 }, { "authors": [ "Eva Hörster", "Rainer Lienhart" ], "title": "Deep networks for image retrieval on large-scale databases", "venue": "In Proceedings of the 16th ACM international conference on Multimedia,", "year": 2008 }, { "authors": [ "Han Hu", "Zheng Zhang", "Zhenda Xie", "Stephen Lin" ], "title": "Local relation networks for image recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Boyuan Jiang", "Mengmeng Wang", "Weihao Gan", "Wei Wu", "Junjie Yan" ], "title": "Stm: Spatiotemporal and motion encoding for action recognition", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Imran N Junejo", "Emilie Dexter", "Ivan Laptev", "Patrick PÚrez" ], "title": "Cross-view action recognition from temporal self-similarities", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2008 }, { "authors": [ "Imran N Junejo", "Emilie Dexter", "Ivan Laptev", "Patrick Perez" ], "title": "View-independent action recognition from temporal self-similarities", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2010 }, { "authors": [ "Gagan Kanojia", "Sudhakar Kumawat", "Shanmuganathan Raman" ], "title": "Attentive spatio-temporal representation learning for diving classification", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2019 }, { "authors": [ "Andrej Karpathy", "George Toderici", "Sanketh Shetty", "Thomas Leung", "Rahul Sukthankar", "Li FeiFei" ], "title": "Large-scale video classification with convolutional neural networks", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2014 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Seungryong Kim", "Dongbo Min", "Bumsub Ham", "Seungchul Ryu", "Minh N Do", "Kwanghoon Sohn" ], "title": "Dasc: Dense adaptive self-correlation descriptor for multi-modal and multi-spectral correspondence", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Seungryong Kim", "Dongbo Min", "Bumsub Ham", "Sangryul Jeon", "Stephen Lin", "Kwanghoon Sohn" ], "title": "Fcss: Fully convolutional self-similarity for dense semantic correspondence", "venue": "In Proc. 
IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Marco Körner", "Joachim Denzler" ], "title": "Temporal self-similarity for appearance-based action recognition in multi-view setups", "venue": "In International Conference on Computer Analysis of Images and Patterns,", "year": 2013 }, { "authors": [ "Heeseung Kwon", "Manjin Kim", "Suha Kwak", "Minsu Cho" ], "title": "Motionsqueeze: Neural motion feature learning for video understanding", "venue": "arXiv preprint arXiv:2007.09933,", "year": 2020 }, { "authors": [ "Myunggi Lee", "Seungeui Lee", "Sungjoon Son", "Gyutae Park", "Nojun Kwak" ], "title": "Motion feature network: Fixed motion filter for action recognition", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jun Li", "Xianglong Liu", "Mingyuan Zhang", "Deqing Wang" ], "title": "Spatio-temporal deformable 3d convnets with attention for action recognition", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Xianhang Li", "Yali Wang", "Zhipeng Zhou", "Yu Qiao" ], "title": "Smallbignet: Integrating core and contextual views for video classification", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yan Li", "Bin Ji", "Xintian Shi", "Jianguo Zhang", "Bin Kang", "Limin Wang" ], "title": "Tea: Temporal excitation and aggregation for action recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yingwei Li", "Yi Li", "Nuno Vasconcelos" ], "title": "Resound: Towards action recognition without representation bias", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Tsm: Temporal shift module for efficient video understanding", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Xingyu Liu", "Joon-Young Lee", "Hailin Jin" ], "title": "Learning video representations from correspondence proposals", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Chenxu Luo", "Alan L Yuille" ], "title": "Grouped spatial-temporal aggregation for efficient action recognition", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Joanna Materzynska", "Tete Xiao", "Roei Herzig", "Huijuan Xu", "Xiaolong Wang", "Trevor Darrell" ], "title": "Something-else: Compositional action recognition with spatial-temporal interaction networks", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "AJ Piergiovanni", "Michael S Ryoo" ], "title": "Representation flow for action recognition", "venue": "In Proc. 
IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Prajit Ramachandran", "Niki Parmar", "Ashish Vaswani", "Irwan Bello", "Anselm Levskaya", "Jonathon Shlens" ], "title": "Stand-alone self-attention in vision models", "venue": null, "year": 1906 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Dian Shao", "Yue Zhao", "Bo Dai", "Dahua Lin" ], "title": "Finegym: A hierarchical video dataset for finegrained action understanding", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Eli Shechtman", "Michal Irani" ], "title": "Matching local self-similarities across images and videos", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2007 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In Proc. Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Swathikiran Sudhakaran", "Sergio Escalera", "Oswald Lanz" ], "title": "Gate-shift networks for video action recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Deqing Sun", "Xiaodong Yang", "Ming-Yu Liu", "Jan Kautz" ], "title": "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Shuyang Sun", "Zhanghui Kuang", "Lu Sheng", "Wanli Ouyang", "Wei Zhang" ], "title": "Optical flow guided feature: A fast and robust motion representation for video action recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video classification with channelseparated convolutional networks", "venue": "In Proc. IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Heng Wang", "Du Tran", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video modeling with correlation networks", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks: Towards good practices for deep action recognition", "venue": "In Proc. 
European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Videos as space-time region graphs", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Ceyuan Yang", "Yinghao Xu", "Jianping Shi", "Bo Dai", "Bolei Zhou" ], "title": "Temporal pyramid network for action recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Gengshan Yang", "Deva Ramanan" ], "title": "Volumetric correspondence networks for optical flow", "venue": "In Proc. Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Christopher Zach", "Thomas Pock", "Horst Bischof" ], "title": "A duality based approach for realtime tv-l 1 optical flow", "venue": "Pattern Recognition,", "year": 2007 }, { "authors": [ "Hengshuang Zhao", "Jiaya Jia", "Vladlen Koltun" ], "title": "Exploring self-attention for image recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yue Zhao", "Yuanjun Xiong", "Dahua Lin" ], "title": "Trajectory convolution for action recognition", "venue": "In Proc. Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Bolei Zhou", "Alex Andonian", "Aude Oliva", "Antonio Torralba" ], "title": "Temporal relational reasoning in videos", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mohammadreza Zolfaghari", "Kamaljeet Singh", "Thomas Brox" ], "title": "Eco: Efficient convolutional network for online video understanding", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "ods (Wang et al", "Liu" ], "title": "2019), which capture the long-range dynamics of videos", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning spatio-temporal dynamics is the key to video understanding. To this end, extending convolutional neural networks (CNNs) with spatio-temporal convolution has been actively investigated in recent years (Tran et al., 2015; Carreira & Zisserman, 2017; Tran et al., 2018). The empirical results so far indicate that spatio-temporal convolution alone is not sufficient for grasping the whole picture; it often learns irrelevant context bias rather than motion information (Materzynska et al., 2020) and thus the additional use of optical flow turns out to boost the performance in most cases (Carreira & Zisserman, 2017; Lin et al., 2019). Motivated by this, recent action recognition methods learn to extract explicit motion, i.e., flow or correspondence, between feature maps of adjacent frames and they improve the performance indeed (Li et al., 2020c; Kwon et al., 2020). But, is it essential to extract such an explicit form of flows or correspondences? How can we learn a richer and more robust form of motion information for videos in the wild?\nIn this paper, we propose to learn spatio-temporal self-similarity (STSS) representation for video understanding. Self-similarity is a relational descriptor for an image that effectively captures intrastructures by representing each local region as similarities to its spatial neighbors (Shechtman & Irani, 2007). Given a sequence of frames, i.e., a video, it extends along the temporal dimension and thus represents each local region as similarities to its neighbors in space and time. By converting appearance features into relational values, STSS enables a learner to better recognize structural patterns in space and time. For neighbors at the same frame it computes a spatial self-similarity map, while for neighbors at a different frame it extracts a motion likelihood map. If we fix our attention to the similarity map to the very next frame within STSS and attempt to extract a single displacement vector to the most likely position at the frame, the problem reduces to optical flow, which is a particular type of motion information. In contrast, we leverage the whole volume of STSS and let our model learn to extract an effective motion representation from it in an end-toend manner. With a sufficient volume of neighborhood in space and time, it effectively captures long-term interaction and fast motion in the video, leading to robust action recognition.\nWe introduce a neural block for STSS representation, dubbed SELFY, that can be easily inserted into neural architectures and learned end-to-end without additional supervision. Our experimental analysis demonstrates its superiority over previous methods for motion modeling as well as its complementarity to spatio-temporal features from direct convolutions. On the standard benchmarks for action recognition, Something-Something V1&V2, Diving-48, and FineGym, the proposed method achieves the state-of-the-art results." }, { "heading": "2 RELATED WORK", "text": "Video action recognition. Video action recognition is a task to categorize videos into pre-defined action classes. One of the conventional topics in action recognition is to capture temporal dynamics in videos. In deep learning, many approaches attempt to learn temporal dynamics in different ways: Two-stream networks with external optical flows (Simonyan & Zisserman, 2014; Wang et al., 2016), recurrent networks (Donahue et al., 2015), and 3D CNNs (Tran et al., 2015; Carreira & Zisserman, 2017). 
Recent approaches have introduced advanced 3D CNNs (Tran et al., 2018; 2019; Feichtenhofer, 2020; Lin et al., 2019; Fan et al., 2020) and shown the effectiveness of capturing spatio-temporal features, so that 3D CNNs have now become a de facto approach to learning temporal dynamics in videos. However, spatio-temporal convolution is vulnerable unless relevant features are well aligned across frames within the fixed-sized kernel. To address this issue, a few methods adaptively translate the kernel offsets with deformable convolutions (Zhao et al., 2018; Li et al., 2020a), while several methods (Feichtenhofer et al., 2019; Li et al., 2020b) modulate other hyper-parameters, e.g., a higher frame rate or larger spatial receptive fields. Unlike these methods, we address the problem of spatio-temporal convolution with a sufficient volume of STSS, capturing far-sighted spatio-temporal relations.
Learning motion features. Since using external optical flow helps 3D CNNs improve action recognition accuracy (Carreira & Zisserman, 2017; Zolfaghari et al., 2018; Tran et al., 2018), several approaches try to learn frame-by-frame motion features from RGB sequences inside neural architectures. Fan et al. (2018); Piergiovanni & Ryoo (2019) internalize TV-L1 (Zach et al., 2007) optical flows into the CNN. Frame-wise feature differences (Sun et al., 2018b; Lee et al., 2018; Jiang et al., 2019; Li et al., 2020c) are also utilized as motion features. Recent correlation-based methods (Wang et al., 2020; Kwon et al., 2020) adopt a correlation operator (Dosovitskiy et al., 2015; Sun et al., 2018a; Yang & Ramanan, 2019) to learn motion features between adjacent frames. However, these methods compute frame-by-frame motion features between two adjacent frames and then rely on stacked spatio-temporal convolutions for capturing long-range motion dynamics. We propose to learn STSS features, as generalized motion features, that enable capturing both short-term and long-term interactions in the video.
Self-similarity. Self-similarity represents an internal geometric layout of images. It is widely used in many computer vision tasks, such as object detection (Shechtman & Irani, 2007), image retrieval (Hörster & Lienhart, 2008), and semantic correspondence matching (Kim et al., 2015; 2017). In the video domain, Shechtman & Irani (2007) first introduce the concept of STSS and transform the STSS into a hand-crafted local descriptor for action detection. Inspired by this work, early methods adopt self-similarities for capturing view-invariant temporal patterns (Junejo et al., 2008; 2010; Körner & Denzler, 2013), but they use temporal self-similarities only due to computational costs. Recently, several non-local approaches (Wang et al., 2018; Liu et al., 2019) utilize STSS for capturing long-range dynamics of videos. However, they use STSS for reweighting or aligning visual features, which is an indirect way of using STSS. Different from these methods, our method leverages full STSS directly as generalized motion information and learns an effective representation for action recognition within a video-processing architecture. To the best of our knowledge, our work is the first in learning STSS representation using modern CNNs.
The contribution of our paper can be summarized as follows. First, we revisit the notion of self-similarity and propose to learn a generalized, far-sighted motion representation from STSS. 
Second, we implement STSS representation learning as a neural block, dubbed SELFY, that can be integrated into existing neural architectures. Third, we provide comprehensive evaluations on SELFY, achieving the state of the art on benchmarks: Something-Something V1&V2, Diving-48, and FineGym." }, { "heading": "3 OUR APPROACH", "text": "In this section, we first revisit the notion of self-similarity and discuss its relation to motion. We then introduce our method for learning an effective spatio-temporal self-similarity representation, which can be easily integrated into video-processing architectures and learned end-to-end." }, { "heading": "3.1 SELF-SIMILARITY REVISITED", "text": "Self-similarity is a relational descriptor that suppresses variations in appearance and reveals structural patterns in images or videos (Shechtman & Irani, 2007).
Given an image feature map $I \in \mathbb{R}^{X \times Y \times C}$, the self-similarity transformation of $I$ results in a 4D tensor $S \in \mathbb{R}^{X \times Y \times U \times V}$, whose elements are defined as
$$S_{x,y,u,v} = \mathrm{sim}(I_{x,y}, I_{x+u,y+v}),$$
where $\mathrm{sim}(\cdot,\cdot)$ is a similarity function, e.g., cosine similarity. Here, $(x, y)$ is a query coordinate while $(u, v)$ is a spatial offset from it. To impose locality, the offset is restricted to a neighborhood: $(u, v) \in [-d_U, d_U] \times [-d_V, d_V]$, so that $U = 2d_U + 1$ and $V = 2d_V + 1$, respectively. By converting the $C$-dimensional appearance feature $I_{x,y}$ into the $UV$-dimensional relational feature $S_{x,y}$, it suppresses variations in appearance and reveals spatial structures in the image. Note that the self-similarity transformation closely relates to conventional cross-similarity (or correlation) across two different feature maps $(I, I' \in \mathbb{R}^{X \times Y \times C})$, which can be defined as
$$S_{x,y,u,v} = \mathrm{sim}(I_{x,y}, I'_{x+u,y+v}).$$
Given two images of a moving object, the cross-similarity transformation effectively captures motion information and thus is commonly used in optical flow and correspondence estimation (Dosovitskiy et al., 2015; Sun et al., 2018a; Yang & Ramanan, 2019).
For a sequence of frames, i.e., a video, one can naturally extend the spatial self-similarity along the temporal axis. Let $\mathbf{V} \in \mathbb{R}^{T \times X \times Y \times C}$ denote a feature map of the video with $T$ frames. The spatio-temporal self-similarity (STSS) transformation of $\mathbf{V}$ results in a 6D tensor $S \in \mathbb{R}^{T \times X \times Y \times L \times U \times V}$, whose elements are defined as
$$S_{t,x,y,l,u,v} = \mathrm{sim}(V_{t,x,y}, V_{t+l,x+u,y+v}), \quad (1)$$
where $(t, x, y)$ is the spatio-temporal coordinate and $(l, u, v)$ is a spatio-temporal offset from it. In addition to the locality of spatial offsets above, the temporal offset $l$ is also restricted to a temporal neighborhood: $l \in [-d_L, d_L]$, so that $L = 2d_L + 1$. What types of information does STSS describe? Interestingly, for each time $t$, the STSS tensor $S$ can be decomposed along the temporal offset $l$ into a single spatial self-similarity tensor (when $l = 0$) and $2d_L$ spatial cross-similarity tensors (when $l \neq 0$); the partial tensors with a small offset (e.g., $l = -1$ or $+1$) collect motion information from adjacent frames, and those with larger offsets capture it from further frames, both forward and backward in time. Unlike previous approaches to learning motion (Dosovitskiy et al., 2015; Wang et al., 2020; Kwon et al., 2020), which rely on cross-similarity between adjacent frames, STSS allows taking a generalized, far-sighted view on motion, i.e., both short-term and long-term, both forward and backward, as well as spatial self-motion."
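To make the transformation in Eq. (1) concrete, the following is a minimal PyTorch sketch of the STSS computation for a single clip. The function name, the choice of cosine similarity, the zero padding beyond the clip boundary, and the explicit offset loops are our own illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def stss_transform(v, dl=2, du=4, dv=4):
    """Sketch of Eq. (1): v is a video feature map of shape (T, X, Y, C).

    Returns S of shape (T, X, Y, L, U, V) with L = 2*dl+1, U = 2*du+1,
    V = 2*dv+1, using cosine similarity and zero padding (assumptions).
    """
    t, x, y, c = v.shape
    v = F.normalize(v, dim=-1)  # cosine similarity reduces to a dot product
    # pad the (T, X, Y) dimensions so every spatio-temporal offset is defined
    vp = F.pad(v.permute(3, 0, 1, 2), (dv, dv, du, du, dl, dl)).permute(1, 2, 3, 0)
    s = torch.empty(t, x, y, 2 * dl + 1, 2 * du + 1, 2 * dv + 1)
    for i, l in enumerate(range(-dl, dl + 1)):
        for j, u in enumerate(range(-du, du + 1)):
            for k, w in enumerate(range(-dv, dv + 1)):
                # neighbor features V_{t+l, x+u, y+w}, read from the padded tensor
                nbr = vp[dl + l:dl + l + t, du + u:du + u + x, dv + w:dv + w + y]
                s[:, :, :, i, j, k] = (v * nbr).sum(dim=-1)
    return s
```

In practice the loops would be vectorized (e.g., via unfold), but the sketch makes the decomposition of Eq. (1) into one self-similarity slice (l = 0) and 2d_L cross-similarity slices explicit.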
}, { "heading": "3.2 SPATIO-TEMPORAL SELF-SIMILARITY REPRESENTATION LEARNING", "text": "By leveraging the rich information in STSS, we propose to learn a generalized motion representation for video understanding. To achieve this goal without additional supervision, we design a neural block, dubbed SELFY, which can be inserted into a video-processing architectures and learned end-to-end. The overall structure is illustrated in Fig. 2. It consists of three steps: self-similarity transformation, feature extraction, and feature integration.\nGiven the input video feature tensor V, the self-similarity transformation step converts it to the STSS tensor S as in Eq.(1). In the following, we describe feature extraction and integration steps." }, { "heading": "3.2.1 FEATURE EXTRACTION", "text": "From the STSS tensor S ∈ RT×X×Y×L×U×V , we extract aCF -dimensional feature for each spatiotemporal position (t, x, y) and temporal offset l so that the resultant tensor is F ∈ RT×X×Y×L×CF , which is equivariant to translation in space, time, and temporal offset. The dimension of L is preserved to extract motion information across different temporal offsets in a consistent manner. While there exist many design choices, we introduce three methods for feature extraction in this work.\nSoft-argmax. The first method is to compute explicit displacement fields using S, which previous motion learning methods adopt using spatial cross-similarity (Dosovitskiy et al., 2015; Sun et al., 2018a; Yang & Ramanan, 2019). One may extract the displacement field by indexing the positions with the highest similarity value via argmax(u,v), but it is not differentiable. We instead use softargmax (Chapelle & Wu, 2010), which aggregates displacement vectors with softmax weighting (Fig. 3a). The soft-argmax feature extraction can be formulated as\nFt,x,y,l = ∑ u,v exp(St,x,y,l,u,v/τ)∑ u′,v′ exp(St,x,y,l,u′,v′/τ) [u; v], (2)\nwhich results in a feature tensor F ∈ RT×X×Y×L×2. The temperature factor τ adjusts the softmax distribution, and we set τ = 0.01 in our experiments.\nMulti-layer perceptron (MLP). The second method is to learn an MLP that converts self-similarity values into a feature. For this, we flatten the (U, V ) volume into UV -dimensional vectors, and apply an MLP to them (Fig. 3b). For the reshaped tensor S ∈ RT×X×Y×L×UV , a perceptron f(·) can be expressed as\nf(S) = ReLU(S×5 Wφ), (3)\nwhere ×n denotes the n-mode tensor product, Wφ ∈ RC ′×UV is the perceptron parameters, and the output is f(S) ∈ RT×X×Y×L×C′ . The MLP feature extraction can thus be formulated as F = (fn ◦ fn−1 ◦ · · · ◦ f1)(S), (4)\nwhich produces a feature tensor F ∈ RT×X×Y×L×CF . This method is more flexible and may also be more effective than the soft-argmax because not only can it encode displacement information but also it can directly access the similarity values, which may be helpful for learning motion distribution.\nConvolution. The third method is to learn convolution kernels over (L,U, V ) volume of S (Fig. 3c). When we regard S as a 7D tensor S ∈ RT×X×Y×L×U×V×C with C = 1, the convolution layer g can be expressed using S′ = g(S), whose elements are computed by\nS′t,x,y,l,u,v,c′ = ReLU ( ∑ lκ,uκ,vκ,c Klκ,uκ,vκ,c,c′St,x,y,l+l̂κ,u+ûκ,v+v̂κ,c ) . (5)\nwhere K ∈ RLκ×Uκ×Vκ×C×C′ is a multi-channel convolution kernel, (lκ, uκ, vκ) is the kernel parameter indices, and (c, c′) is the channel indices. The indices (l̂κ, ûκ, v̂κ) are centered as l̂κ = lκ − Lκ/2, ûκ = uκ − Uκ/2, v̂κ = vκ − Vκ/2. 
Starting from $\mathbb{R}^{T \times X \times Y \times L \times U \times V \times 1}$, we gradually downsample $(U, V)$ and expand channels via multiple convolutions with strides, finally resulting in $\mathbb{R}^{T \times X \times Y \times L \times 1 \times 1 \times C_F}$; we preserve the $L$ dimension, since maintaining fine temporal resolution is shown to be effective for capturing detailed motion information (Lin et al., 2019; Feichtenhofer et al., 2019). The convolutional feature extraction with $n$ layers can thus be formulated as
$$F = (g_n \circ g_{n-1} \circ \cdots \circ g_1)(S), \quad (6)$$
which results in a feature tensor $F \in \mathbb{R}^{T \times X \times Y \times L \times C_F}$. This method is effective in learning structural patterns with its convolution kernels, thus outperforming the former methods, as will be seen in our experiments." }, { "heading": "3.2.2 FEATURE INTEGRATION", "text": "In this step, we integrate the extracted STSS features $F \in \mathbb{R}^{T \times X \times Y \times L \times C_F}$ to feed them back to the original input stream with the $(T, X, Y, C)$ volume.
We first use $3 \times 3$ spatial convolution kernels along the $(x, y)$ dimensions of $F$. The spatial convolution layer $h$ can be expressed using $F' = h(F)$, whose elements are computed by
$$F'_{l,x,y,t,c'_F} = \mathrm{ReLU}\Big( \sum_{x_\kappa, y_\kappa, c_F} K_{x_\kappa, y_\kappa, c_F, c'_F} \, F_{l,x+\hat{x}_\kappa,y+\hat{y}_\kappa,t,c_F} \Big), \quad (7)$$
where $K \in \mathbb{R}^{X_\kappa \times Y_\kappa \times C_F \times C'_F}$ is the multi-channel convolution kernel, $(x_\kappa, y_\kappa)$ are the kernel parameter indices, and $(c_F, c'_F)$ are the channel indices. $(\hat{x}_\kappa, \hat{y}_\kappa)$ are centered as $\hat{x}_\kappa = x_\kappa - X_\kappa/2$, $\hat{y}_\kappa = y_\kappa - Y_\kappa/2$. These spatial convolutions integrate the original features by extending receptive fields along the $(x, y)$ dimensions. The resultant features $F^\star \in \mathbb{R}^{T \times X \times Y \times L \times C^\star_F}$ are defined as
$$F^\star = (h_n \circ h_{n-1} \circ \cdots \circ h_1)(F). \quad (8)$$
We then flatten the $(L, C^\star_F)$ volume into $LC^\star_F$-dimensional vectors to obtain $F^\star \in \mathbb{R}^{T \times X \times Y \times LC^\star_F}$, and apply a $1 \times 1 \times 1$ convolution layer to obtain the final output. This convolution layer integrates features from different temporal offsets and also adjusts the channel dimension to fit that of the original input $\mathbf{V}$. We adopt the identity mapping of the input for residual learning (He et al., 2016). The final output tensor $Z$ is expressed as
$$Z = \mathrm{ReLU}(F^\star \times_5 W_\theta) + \mathbf{V}, \quad (9)$$
where $\times_n$ is the $n$-mode tensor product and $W_\theta \in \mathbb{R}^{C \times LC^\star_F}$ holds the weights of the convolution layer." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS & IMPLEMENTATION DETAILS", "text": "For evaluation, we use benchmarks that contain fine-grained spatio-temporal dynamics in videos.
Something-Something V1 & V2 (SS-V1 & V2) (Goyal et al., 2017b), which are both large-scale action recognition datasets, contain ∼108k and ∼220k video clips, respectively. Both datasets share the same 174 labeled action classes, e.g., ‘pretending to put something next to something.’
Diving-48 (Li et al., 2018), which contains ∼18k videos with 48 different diving action classes, is an action recognition dataset that minimizes contextual biases, i.e., scenes or objects.
FineGym (Shao et al., 2020) is a fine-grained action dataset built on top of gymnastics videos. We adopt the Gym288 and Gym99 sets for experiments, which contain 288 and 99 classes, respectively.
Action recognition architecture. We employ TSN ResNets (Wang et al., 2016) as 2D CNN backbones and TSM ResNets (Lin et al., 2019) as 3D CNN backbones. TSM obtains the effect of spatio-temporal convolutions from spatial convolutions by shifting a part of the input channels along the temporal axis before the convolution operation. TSM is added into each residual block of the ResNet. We adopt ImageNet pre-trained weights for our backbones.
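Putting Sections 3.2.1 and 3.2.2 together with the backbone setup above, here is a minimal sketch of how a SELFY-style block could be wrapped as a module and inserted after a backbone stage. The channel sizes, the adaptive average pooling that stands in for the paper's strided downsampling convolutions, and the omission of the spatial 3×3 integration convolutions are all simplifying assumptions on our part, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfyBlock(nn.Module):
    """Illustrative sketch: STSS -> feature extraction (1x3x3 convs over
    (l, u, v), strided on (u, v)) -> feature integration with a residual
    connection as in Eq. (9). Layer sizes are assumptions."""

    def __init__(self, c, l=5, u=9, v=9, cf=8):
        super().__init__()
        # treat (U, V) as spatial dims and L as depth: kernel 1x3x3 keeps L
        self.extract = nn.Sequential(
            nn.Conv3d(1, cf, (1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(cf, cf, (1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((l, 1, 1)),   # collapse (u, v), preserve L
        )
        self.integrate = nn.Conv3d(l * cf, c, kernel_size=1)  # 1x1x1 over (t, x, y)

    def forward(self, video_feat, stss):
        # video_feat: (B, C, T, X, Y); stss: (B, T, X, Y, L, U, V) from Eq. (1)
        b, t, x, y, l, u, v = stss.shape
        f = self.extract(stss.reshape(b * t * x * y, 1, l, u, v))  # (BTXY, CF, L, 1, 1)
        f = f.reshape(b, t, x, y, -1).permute(0, 4, 1, 2, 3)       # (B, L*CF, T, X, Y)
        return torch.relu(self.integrate(f)) + video_feat          # residual, Eq. (9)
```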
To transform the backbones into the self-similarity network (SELFYNet), we insert a single SELFY block after the third stage of the backbone. For the SELFY block, we use convolution as the default feature extraction method with multi-channel 1×3×3 convolution kernels. For more details, please refer to Appendices A and B. Training & testing. For training, we sample a clip of 8 or 16 frames from each video by using segment-based sampling (Wang et al., 2016). The spatio-temporal matching region (L, U, V) of the SELFY block is set to (5, 9, 9) or (9, 9, 9) when using 8 or 16 frames, respectively. For testing, we sample one or two clips from a video, crop their centers, and evaluate the averaged prediction of the sampled clips. For more details, please refer to Appendix A." }, { "heading": "4.2 COMPARISON WITH THE STATE-OF-THE-ART METHODS", "text": "For a fair comparison, in the following experiments we compare our model with other models that are not pre-trained on additional large-scale video datasets, e.g., Kinetics (Kay et al., 2017) or Sports1M (Karpathy et al., 2014).
Table 1 summarizes the results on SS-V1&V2. The first and second compartments of the table show the results of other 2D CNN and (pseudo-) 3D CNN models, respectively. The last part of each compartment shows the results of SELFYNet. SELFYNet with TSN-ResNet (SELFYNet-TSN-R50) achieves 50.7% and 62.7% top-1 accuracy on the two datasets, respectively, outperforming other 2D models while using only 8 frames. When we adopt TSM ResNet (TSM-R50) as our backbone and use 16 frames, our method (SELFYNet-TSM-R50) achieves 54.3% and 65.7% top-1 accuracy, respectively, which is the best among the single models. Compared to TSM-R50, a single SELFY block obtains a significant gain of 7.0%p and 4.5%p in top-1 accuracy, respectively; our method is more accurate than TSM-R50 Two-stream on both datasets. Finally, our ensemble model (SELFYNet-TSM-R50-EN) with 2-clip evaluation sets a new state of the art on both datasets by achieving 56.6% and 67.7% top-1 accuracy, respectively.
Table 2 summarizes the results on Diving-48 & FineGym. For Diving-48, TSM-R50 using 16 frames shows 38.8% top-1 accuracy in our implementation. SELFYNet-TSM-R50 outperforms TSM-R50 by 2.8%p in accuracy, setting a new state-of-the-art top-1 accuracy of 41.6% on Diving-48. For FineGym, SELFYNet-TSM-R50 achieves 49.5% and 87.7% on the Gym288 and Gym99 sets (288 and 99 classes), respectively, surpassing all the other models reported in Shao et al. (2020)." }, { "heading": "4.3 ABLATION STUDIES", "text": "We conduct ablation experiments to demonstrate the effectiveness of the proposed method. All experiments are performed on SS-V1 using 8 frames. Unless otherwise specified, we set ImageNet pre-trained TSM ResNet-18 (TSM-R18) with a single SELFY block with (L, U, V) = (5, 9, 9) as our default SELFYNet.
Types of similarity. In Table 3a, we investigate the effect of different types of similarity by varying the set of temporal offsets l on both TSN-ResNet-18 (TSN-R18) and TSM-R18. Interestingly, learning spatial self-similarity ({0}) improves accuracy on both backbones, which implies that self-similarity features help capture structural patterns of visual features. Learning cross-similarity with a short temporal range ({1}) shows a noticeable gain in accuracy on both backbones, indicating the significance of motion features. Learning STSS outperforms the other types of similarity, and the accuracy of SELFYNet increases as the temporal range becomes longer. 
When STSS takes a far-sighted view on motion, STSS learns both short-term and long-term interactions in videos, as well as spatial self-similarity. Qualitative results in Appendix D show that SELFYNet with a long temporal range ({−3, · · · , 3}) effectively captures long-term interactions in videos.

Figure 4: Basic blocks and their combinations. (a) spatio-temporal convolution block (STCB), (b) SELFY-s block, and (c-f) their combinations.

Table 5: Spatio-temporal features vs. STSS features. Results of different combinations of the two blocks ((a)-(f) from Fig. 4) are shown.

model (TSN-R18)                  | top-1 | top-5
baseline                         | 16.2  | 40.8
(a) STCB                         | 42.4  | 71.7
(b) SELFY-s                      | 46.3  | 75.1
(c) STCB + STCB                  | 44.4  | 73.7
(d) SELFY-s + SELFY-s            | 46.8  | 75.9
(e) SELFY-s + STCB (parallel)    | 46.9  | 76.5
(f) SELFY-s + STCB (sequential)  | 47.6  | 76.6

Feature extraction and integration methods. In Table 3b, we compare the performance of different combinations of feature extraction and integration methods. From the 2nd to the 4th rows, different feature extraction methods are compared while fixing the feature integration method to a single fully-connected (FC) layer. Compared to the baseline, the use of soft-argmax, which extracts spatial displacement features, improves top-1 accuracy by 1.0%p. Replacing soft-argmax with MLP provides an additional gain of 1.9%p in top-1 accuracy, showing the effectiveness of directly using similarity values. When using the convolution method for feature extraction, we achieve 46.7% top-1 accuracy; the multi-channel convolution kernel is more effective than MLP in capturing structural patterns along the (u, v) dimensions. From the 4th to the 6th rows, different feature integration methods are compared while fixing the feature extraction method to convolution. Replacing the single FC layer with MLP improves top-1 accuracy by 0.6%p. Replacing MLP with convolutional layers further improves accuracy and achieves 48.4% top-1 accuracy. These results demonstrate that our design choice of using convolutions along the (u, v) and (h, w) dimensions is the most effective in learning the geometry-aware STSS representation. For more ablation experiments, please refer to Appendix B." }, { "heading": "4.4 COMPLEMENTARITY BETWEEN SPATIO-TEMPORAL FEATURES AND STSS FEATURES", "text": "We conduct experiments to analyze the different roles of spatio-temporal features and STSS features. We organize two basic blocks representing the two kinds of features: a spatio-temporal convolution block (STCB) that consists of several spatio-temporal convolutions (Fig. 4a), and the SELFY-s block, a light-weight version of the SELFY block obtained by removing the spatial convolution layers (Fig. 4b). Both blocks have the same receptive fields and a similar number of parameters for a fair comparison. Different combinations of the basic blocks are inserted after the third stage of TSN-ResNet-18. Table 5 summarizes the results on SS-V1. STSS features (Fig. 4b, 4d) are more effective than spatio-temporal features (Fig. 4a, 4c) in top-1 and top-5 accuracy when the same number of blocks are inserted. Interestingly, the combination of the two different features (Fig. 4e, 4f) shows better results in top-1 and top-5 accuracy compared to the single-feature cases (Fig. 4c, 4d), which demonstrates that both features complement each other. 
We conjecture that this complementarity comes from the different characteristics of the two features; while spatio-temporal features are obtained by directly encoding appearance features, STSS features are obtained by suppressing variations in appearance and focusing on the relational features in space and time." }, { "heading": "4.5 IMPROVING ROBUSTNESS WITH STSS", "text": "In this section, we demonstrate that STSS representation helps video-processing models to be more robust to video corruptions. We test two corruptions that are likely to happen in real-world videos: occlusion and motion blur. To induce the corruptions, we either cut out a rectangular patch of a particular frame or generate a motion blur (Hendrycks & Dietterich, 2019). We corrupt a single center frame for every clip of SS-V1 at the testing phase and gradually increase the severity of the corruptions. We compare the results of TSM-R18 and the SELFYNet variants of Table 3a. Fig. 5a and 5b summarize the results of the two corruptions, respectively. The top-1 accuracy of TSM-R18 and SELFYNets with a short temporal range ({0}, {1}, and {−1, 0, 1}) drops significantly as the severity of the corruptions increases. We conjecture that the features of the corrupted frame propagate through the stacked TSMs, confusing the entire network. However, the SELFYNets with a long temporal range ({−2, · · · , 2} and {−3, · · · , 3}) show more robust performance than the other models. As shown in Fig. 5a and 5b, the accuracy gap between the SELFYNets with the long temporal range and the others increases with the severity of the corruptions, indicating that a larger volume of STSS features can improve the robustness of action recognition. We also present some qualitative results (Fig. 5c) where two SELFYNets with different temporal ranges, {1} and {−3, · · · , 3}, both answer correctly without corruption, while the SELFYNet with {1} fails on the corrupted input." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have proposed to learn a generalized, far-sighted motion representation from STSS for video understanding. The comprehensive analyses on STSS demonstrate that STSS features effectively capture both short-term and long-term interactions, complement spatio-temporal features, and improve the robustness of video-processing models. Our method outperforms other state-of-the-art methods on the three benchmarks for video action recognition." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "We conduct additional ablation experiments to identify the behaviors of the proposed method. All experiments are performed on SS-V1 using 8 frames. Unless otherwise specified, we set ImageNet pre-trained TSM ResNet-18 (TSM-R18) with a single SELFY block with (L, U, V) = (5, 9, 9) as our default SELFYNet.
Comparison with non-local methods. We compare our method with popular non-local methods (Wang et al., 2018; Liu et al., 2019), which capture the long-range dynamics of videos. While computing the same self-similarity values as ours, both methods use them as attention weights for feature aggregation, either by multiplying them with the visual features (Wang et al., 2018) or by aligning the top-K corresponding features (Liu et al., 2019); neither uses STSS itself as a relational representation. In contrast, our method does exactly that and learns a more powerful relational feature from STSS. The differences between our method and the non-local methods are illustrated in Fig. 6.
We have conducted experiments for performance comparison, and the results are shown in Table 7a. We re-implement the non-local block and the CP module in PyTorch based on their official codes²,³. For a fair comparison, we insert a single block or module at the same position (after res3 of ResNet-18). Note that our method downsamples the spatial resolution of V to 14×14 before STSS transformation, whereas the others do not. Compared to the non-local block and the CP module, the SELFY block improves top-1 accuracy by 4.4%p and 1.5%p, respectively, while computing fewer floating-point operations (7.5 GFLOPs and 8.3 GFLOPs, respectively). This demonstrates that the direct integration of STSS features is more effective for action recognition than the indirect ways of using STSS, e.g., re-weighting visual-semantic features or learning correspondences.
¹https://github.com/hendrycks/robustness ²https://github.com/xingyul/cpnet ³https://github.com/facebookresearch/video-nonlocal-net
Comparison with correlation-based methods. We also compare our method with correlation-based methods (Kwon et al., 2020; Wang et al., 2020). While correlation-based methods extract motion features between two adjacent frames only and are thus limited to short-term motion, our method effectively captures bi-directional and long-term motion information via learning with a sufficient volume of STSS. Our method can also exploit richer information from the self-similarity values than the other methods. The MS module (Kwon et al., 2020) only focuses on the maximal similarity value over the (u, v) dimensions to extract flow information, and the Correlation block (Wang et al., 2020) uses a 1×1 convolution layer for extracting motion features from the similarity values. In contrast to the two methods, we introduce a generalized motion learning framework using the self-similarity tensor in Sec. 3.2 of our main paper. The differences between our method and the correlation-based methods are illustrated in Fig. 6.
We have also conducted experiments to compare our method with MSNet (Kwon et al., 2020), one of the correlation-based methods. For an apples-to-apples comparison, we apply the kernel soft-argmax and max pooling operation (KS + CM in Kwon et al. (2020)) to our feature extraction method by following their official codes⁴. Please note that, when we restrict the temporal offset l to {1}, the SELFY block using KS + CM is equivalent to the MS module whose feature transformation layers are standard 2D convolutional layers. Table 7b summarizes the results. The KS+CM method achieves 46.1% top-1 accuracy. As we enlarge the temporal window L to 5, we obtain an additional gain of 1.3%p. The learnable convolution layers improve top-1 accuracy by 1.0%p in both cases. The results demonstrate the effectiveness of learning geometric patterns within a sufficient volume of STSS tensors for learning abundant motion features.
Multi-channel 3×3×3 kernel for feature extraction. We investigate the effect of the convolution method for STSS feature extraction when we use multi-channel 3×3×3 kernels. For the experiment, we stack four 3×3×3 convolution layers followed by the feature integration step, which is the same as in Section 3.2.2. Table 7c summarizes the results. Note that we do not report models whose temporal window L = 1, e.g., {0} and {1}. As shown in the table, a longer temporal range indeed gives higher accuracy. However, the effect of the 3×3×3 kernel is comparable to that of the 1×3×3 kernel in Table 3a in terms of accuracy. 
Considering the accuracy-computation trade-off, we choose to fix the kernel size, L_κ×U_κ×V_κ, as 1×3×3 for the STSS feature extraction. Spatial matching region. In Table 7d, we compare a single SELFY block with different spatial matching regions, (U, V). As expected, a larger spatial matching region leads to better accuracy. Considering the accuracy-computation trade-off, we set our spatial matching region, (U, V), to (9, 9) by default.
Fusing STSS with visual features. We evaluate SELFYNet purely based on STSS features to see how much the ordinary visual feature V contributes to the final prediction. That is, we pass the STSS features, $\mathrm{ReLU}(F^\star \times_5 W_\theta)$, into the downstream layers without the visual features V (Eq. 9 in our main paper). For simplicity of description, we denote the relational feature $\mathrm{ReLU}(F^\star \times_5 W_\theta)$ by R. Table 7e compares the results of using different cases of the output tensor Z (Z = V, Z = R, and Z = R + V) on SS-V1. Interestingly, SELFYNet using only R achieves 45.5% top-1 accuracy, which is 2.5%p higher than the baseline. As we add V to R, we obtain an additional gain of 2.9%p. This indicates that the STSS features and the visual features are complementary to each other.
Block position. From the 2nd to the 6th row of Table 7f, we identify the effect of different positions of the SELFY block in the backbone. We resize the spatial resolution of the video tensor, (X, Y), to 14×14, and fix the matching region, (L, U, V), as (5, 9, 9) for all the cases, maintaining a similar computational cost. SELFY after res3 shows the best trade-off, achieving the highest accuracy among the cases. The last row in Table 7f shows that multiple SELFY blocks improve accuracy compared to a single block.
⁴https://github.com/arunos728/MotionSqueeze" }, { "heading": "C THE RELATIONSHIP WITH THE LOCAL SELF-ATTENTION MECHANISMS", "text": "Local self-attention (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020) and our method have a common denominator of using the self-similarity tensor, but they use it in very different ways and for different purposes. The local self-attention mechanism aims to aggregate the local context features using the self-similarity tensor, and it thus uses the self-similarity values as attention weights for feature aggregation. In contrast, our method aims to learn a generalized motion representation from the local STSS, so the final STSS representation is directly fed into the neural network instead of being multiplied with local context features.
For an empirical comparison, we conduct an ablation experiment as follows. We extend the local self-attention layer (Ramachandran et al., 2019) to the temporal dimension, and then add the spatio-temporal local self-attention layer, followed by feature integration layers, after res3. All experimental details are the same as those in Appendix A, except that we reduce the channel dimension C′ of the appearance feature V to 32. Table 8 summarizes the results on SS-V1. The spatio-temporal local self-attention layer achieves 43.8% top-1 accuracy, and both SELFY blocks, using the embedded Gaussian and the cosine similarity, outperform the local self-attention by achieving top-1 accuracies of 47.6% and 47.8%, respectively. These results align with prior work (Liu et al., 2019), which reveals that the self-attention mechanism hardly captures motion features in video." }, { "heading": "D VISUALIZATIONS", "text": "
In Fig. 7, we visualize some qualitative results of two different SELFYNet-TSM-R18 models ({1} and {−3, · · · , 3}) on SS-V1. We show the different predictions of the two models given 8 input frames. We also overlay Grad-CAMs (Selvaraju et al., 2017) on the input frames to see whether a larger volume of STSS helps capture long-term interactions in videos. We take Grad-CAMs of the features right before the global average pooling layer. As shown in the figure, STSS with a sufficient volume helps learn a richer context of temporal dynamics in the video; in Fig. 7a, for example, SELFYNet with the range {−3, · · · , 3} focuses not only on the regions where the action occurs but also on the white stain after the action, to verify whether the stain is wiped off or not." } ]
2020
null
SP:2409111bd2e2211c6e3c11c4c4eaf494d14e3f44
[ "This paper provides an extensive empirical evaluation of zero-cost proxies which can be combined with existing NAS methods to speed up search time. The proposed method utilizes ‘pruning-at-initialization’ works which computes gradient-computation at initialization as a proxy for performance of the given neural architectures. Through extensive experiments, this paper compares between conventional proxies and ablation study on five NAS benchmarks and shows the validation of the proposed proxy. " ]
Neural Architecture Search (NAS) is quickly becoming the standard methodology to design neural network models. However, NAS is typically compute-intensive because multiple models need to be evaluated before choosing the best one. To reduce the computational power and time needed, a proxy task is often used for evaluating each model instead of full training. In this paper, we evaluate conventional reduced-training proxies and quantify how well they preserve ranking between neural network models during search when compared with the rankings produced by final trained accuracy. We propose a series of zero-cost proxies, based on recent pruning literature, that use just a single minibatch of training data to compute a model’s score. Our zero-cost proxies use 3 orders of magnitude less computation but can match and even outperform conventional proxies. For example, Spearman’s rank correlation coefficient between final validation accuracy and our best zero-cost proxy on NAS-Bench-201 is 0.82, compared to 0.61 for EcoNAS (a recently proposed reduced-training proxy). Finally, we use these zero-cost proxies to enhance existing NAS search algorithms such as random search, reinforcement learning, evolutionary search and predictor-based search. For all search methodologies and across three different NAS datasets, we are able to significantly improve sample efficiency, and thereby decrease computation, by using our zero-cost proxies. For example, on NAS-Bench-101, we achieved the same accuracy 4× quicker than the best previous result. Our code is made public at: https://github.com/mohsaied/zero-cost-nas.
[ { "affiliations": [], "name": "LIGHTWEIGHT NAS" }, { "affiliations": [], "name": "Mohamed S. Abdelfattah" }, { "affiliations": [], "name": "Abhinav Mehrotra" }, { "affiliations": [], "name": "Łukasz Dudziak" }, { "affiliations": [], "name": "Nicholas D. Lane" } ]
[ { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Łukasz Dudziak", "Thomas Chau", "Mohamed S. Abdelfattah", "Royson Lee", "Hyeji Kim", "Nicholas D. Lane" ], "title": "BRP-NAS: Prediction-based NAS using GCNs", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Mikhail Figurnov", "Aizhan Ibraimova", "Dmitry P Vetrov", "Pushmeet Kohli" ], "title": "Perforatedcnns: Acceleration through elimination of redundant convolutions", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "John S. Garofolo", "Lori F. Lamel", "William M. Fisher", "Jonathan G. Fiscus", "David S. Pallett", "Nancy L. Dahlgren", "Victor Zue" ], "title": "Timit acoustic phonetic continuous speech corpus", "venue": "Linguistic Data Consortium,", "year": 1993 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2015 }, { "authors": [ "Babak Hassibi", "David G. Stork" ], "title": "Second order derivatives for network pruning: Optimal brain surgeon", "venue": "In Advances in Neural Information Processing Systems", "year": 1993 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Andrew G. Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. 
Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Nikita Klyuchnikov", "Ilya Trofimov", "Ekaterina Artemova", "Mikhail Salnikov", "Maxim Fedorov", "Evgeny Burnaev" ], "title": "Nas-bench-nlp: Neural architecture search benchmark for natural language processing", "venue": "arXiv preprint arXiv:2006.07116,", "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2009 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Abhinav Mehrotra", "Alberto Gil Ramos", "Sourav Bhattacharya", "Łukasz Dudziak", "Ravichander Vipperla", "Thomas Chau", "Mohamed S. Abdelfattah", "Samin Ishtiaq", "Nicholas D. Lane" ], "title": "NASBench-ASR: Reproducible Neural Architecture Search for Speech Recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2021 }, { "authors": [ "Jieru Mei", "Yingwei Li", "Xiaochen Lian", "Xiaojie Jin", "Linjie Yang", "Alan Yuille", "Jianchao Yang" ], "title": "AtomNAS: Fine-grained end-to-end neural architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Joseph Mellor", "Jack Turner", "Amos Storkey", "Elliot J. Crowley" ], "title": "Neural architecture search without training", "venue": "arXiv preprint arXiv:2006.04647,", "year": 2020 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning convolutional neural networks for resource efficient inference", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading Digits in Natural Images with Unsupervised Feature Learning", "venue": "In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Barak A. Pearlmutter" ], "title": "Fast Exact Multiplication by the Hessian", "venue": "Neural Computation,", "year": 1993 }, { "authors": [ "Hieu Pham", "Melody Y. Guan", "Barret Zoph", "Quoc V. Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing, 2018", "venue": null, "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V. Le" ], "title": "Regularized Evolution for Image Classifier Architecture Search", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Oleg Sémery" ], "title": "PyTorchCV Convolutional neural networks for computer vision, August 2020", "venue": "URL https://github.com/osmr/imgclsmob", "year": 2020 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "EfficientNet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V. 
Le" ], "title": "Mixconv: Mixed depthwise convolutional kernels, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Hidenori Tanaka", "Daniel Kunin", "Daniel L.K. Yamins", "Surya Ganguli" ], "title": "Pruning neural networks without any data by iteratively conserving synaptic flow", "venue": "arXiv preprint arXiv:2006.05467,", "year": 2020 }, { "authors": [ "Lucas Theis", "Iryna Korshunova", "Alykhan Tejani", "Ferenc Huszár" ], "title": "Faster gaze prediction with dense networks and fisher pruning", "venue": null, "year": 2018 }, { "authors": [ "Jack Turner", "Elliot J. Crowley", "Michael O’Boyle", "Amos Storkey", "Gavin Gray" ], "title": "Blockswap: Fisher-guided block substitution for network compression on a budget", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chen Wei", "Chuang Niu", "Yiping Tang", "Jimin Liang" ], "title": "NPENAS: Neural predictor guided evolution for neural architecture search", "venue": "arXiv 2003.12857,", "year": 2003 }, { "authors": [ "Wei Wen", "Hanxiao Liu", "Hai Li", "Yiran Chen", "Gabriel Bender", "Pieter-Jan Kindermans" ], "title": "Neural predictor for neural architecture", "venue": null, "year": 1912 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "NASbench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Dongzhan Zhou", "Xinchi Zhou", "Wenwei Zhang", "Chen Change Loy", "Shuai Yi", "Xuesen Zhang", "Wanli Ouyang" ], "title": "Econas: Finding proxies for economical neural architecture search", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Zhou" ], "title": "We acknowledge that slightly better correlations could have been achieved for econas and econas+ proxies in Figure 1 if the learning rate was annealed to zero over fewer epochs (20 and 15 epochs respectively). However, we do not anticipate the results to change significantly", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Instead of manually designing neural networks, neural architecture search (NAS) algorithms are used to automatically discover the best ones (Tan & Le, 2019a; Liu et al., 2019; Bender et al., 2018). Early work by Zoph & Le (2017) proposed using a reinforcement learning (RL) controller that constructs candidate architectures, these are evaluated and then feedback is provided to the controller based on the performance of the candidate. One major problem with this basic NAS methodology is that each evaluation is very costly – typically on the order of hours or days to train a single neural network fully. We focus on this evaluation phase – we propose using proxies that require a single minibatch of data and a single forward/backward propagation pass to score a neural network. This is inspired by recent pruning-at-initialization work by Lee et al. (2019), Wang et al. (2020) and Tanaka et al. (2020) wherein a per-parameter saliency metric is computed before training to inform parameter pruning. Can we use such saliency metrics to score an entire neural network? Furthermore, can we use these “single minibatch” metrics to rank and compare multiple neural networks for use within NAS? If so, how do we best integrate these metrics within existing NAS algorithms such as RL or evolutionary search? These are the questions that we hope to (empirically) tackle in this work with the goal of making NAS less compute-hungry. Our contributions are:\n• Zero-cost proxies We adapt pruning-at-initialization metrics for use with NAS. This requires these metrics to operate at the granularity of an entire network rather than individual parameters – we devise and validate approaches that aggregate parameter-level metrics in a manner suitable for ranking candidates during NAS search.\n• Comparison to conventional proxies We perform a detailed comparison between zerocost and conventional NAS proxies that use a form of reduced-computation training. First, we quantify the rank consistency of conventional proxies on large-scale datasets: 15k models vs. 50 models used in (Zhou et al., 2020). Second, we show that zero-cost proxies can match or exceed the rank consistency of conventional proxies.\n• Ablations on NAS benchmarks We perform ablations of our zero-cost proxies on five different NAS benchmarks (NAS-Bench-101/201/NLP/ASR and PyTorchCV) to both test the zero-cost metrics under different settings, and expose properties of successful metrics.\n• Integration with NAS Finally, we propose two ways to use zero-cost metrics effectively within NAS algorithms: random search, reinforcement learning, aging evolution and predictor-based search. For all algorithms and three NAS datasets we show significant speedups, up to 4× for NAS-Bench-101 compared to current state-of-the-art." }, { "heading": "2 RELATED WORK", "text": "NAS Efficiency To decrease NAS search time, various techniques were used in the literature. Pham et al. (2018) and Cai et al. (2018) use weight sharing between candidate models to decrease the training time during evaluation. Liu et al. (2019) and others use smaller datasets (CIFAR-10) as a proxy to the full task (ImageNet1k). In EcoNAS, Zhou et al. (2020) extensively investigated reduced-training proxies wherein input size, model size, number of training samples and number of epochs were reduced in the NAS evaluation phase. 
We compare to EcoNAS in this work to elucidate how well our zero-cost proxies perform compared to familiar and widely-used conventional proxies.
Pruning The goal is to reduce the number of parameters in a neural network; one way to do this is by identifying a saliency (importance) metric for each parameter, after which the less-important parameters are removed. For example, Han et al. (2015), Frankle & Carbin (2019) and others use parameter magnitudes as the criterion, while LeCun et al. (1990), Hassibi & Stork (1993) and Molchanov et al. (2017) use gradients. However, the aforementioned works require training before computing the saliency criterion. A new class of pruning-at-initialization algorithms that require no training was introduced by Lee et al. (2019) and extended by Wang et al. (2020) and Tanaka et al. (2020). A single forward/backward propagation pass is used to compute a saliency criterion which is successfully used to heavily prune neural networks before training. We extend these pruning-at-initialization criteria towards scoring entire neural networks, and we investigate their use with NAS algorithms.
Intersection between pruning and NAS Concepts from pruning have been used within NAS multiple times. For example, Mei et al. (2020) use channel pruning in their AtomNAS work to arrive at customized multi-kernel-size convolutions (mixconvs as introduced by Tan & Le (2019b)). In their Blockswap work, Turner et al. (2020) use Fisher information at initialization to score different lightweight primitives that are substituted into a neural network to decrease computation. This is the earliest work we could find that attempts to perform a type of NAS by scoring neural networks without training using a pruning criterion. More recently, Mellor et al. (2020) introduced a new metric for scoring neural networks at initialization based on the correlation of Jacobians with different inputs. They perform “NAS without training” by performing random search with their zero-cost metric (jacob cov) to rank neural networks instead of using accuracy. We include jacob cov in our analysis and we introduce five more zero-cost metrics in this work." }, { "heading": "3 PROXIES FOR NEURAL NETWORK ACCURACY", "text": "" }, { "heading": "3.1 CONVENTIONAL NAS PROXIES (ECONAS)", "text": "In conventional sample-based NAS, a proxy training regime is often used to predict a model’s accuracy instead of full training. Zhou et al. (2020) investigate conventional proxies in depth by computing the Spearman rank correlation coefficient (Spearman ρ) of a proxy task to final test accuracy. The proxy used is reduced-computation training, wherein one of the following four variables is reduced: (1) number of epochs, (2) number of training samples, (3) input resolution, (4) model size (controlled through the number of channels after the first convolution). Even though such proxies were used in many prior works, EcoNAS is the first systematic study of conventional proxy tasks that we found. One main finding by Zhou et al. (2020) is that using approximately 1/4 of the model size and input resolution, all training samples, and 1/10 the number of epochs was a reasonable proxy which yielded the best results for their experiment (Zhou et al., 2020)." }, { "heading": "3.2 ZERO-COST NAS PROXIES", "text": "We present alternative proxies for network accuracy that can be used to speed up NAS. A simple proxy that we use is grad norm, in which we sum the Euclidean norm of the gradients after a single minibatch of training data.
Other metrics listed below were previously introduced in the context of parameter pruning at the granularity of a single parameter – a saliency is computed to rank parameters and remove the least important ones. We adapt these metrics to score and rank entire neural network models for NAS." }, { "heading": "3.2.1 SNIP, GRASP AND SYNAPTIC FLOW", "text": "In their snip work, Lee et al. (2019) proposed performing parameter pruning based on a saliency metric computed at initialization using a single minibatch of data. This saliency criteria approximates the change in loss when a specific parameter is removed. Wang et al. (2020) attempted to improve on the snip metric by approximating the change in gradient norm (instead of loss) when a parameter is pruned in their grasp objective. Finally, Tanaka et al. (2020) generalized these so-called synaptic saliency scores and proposed a modified version (synflow) which avoids layer collapse when performing parameter pruning. Instead of using a minibatch of training data and cross-entropy loss (as in snip or grasp), with synflow we compute a loss which is simply the product of all parameters in the network; therefore, no data is needed to compute this loss or the synflow metric itself. These are the three metrics:\nsnip : Sp(θ) = ∣∣∣∣∂L∂θ θ ∣∣∣∣, grasp : Sp(θ) = −(H∂L∂θ ) θ, synflow : Sp(θ) = ∂L∂θ θ (1)\nwhere L is the loss function of a neural network with parameters θ, H is the Hessian1, Sp is the per-parameter saliency and is the Hadamard product. We extend these saliency metrics to score an entire neural network by summing over all parameters N in the model: Sn = ∑N i Sp(θ)i." }, { "heading": "3.2.2 FISHER", "text": "Theis et al. (2018) perform channel pruning by removing activation channels (and their corresponding parameters) that are estimated to have the least effect on the loss. They build on the work of Molchanov et al. (2017) and Figurnov et al. (2016). More recently, Turner et al. (2020) aggregated this fisher metric for all channels in a convolution primitive to quantify the importance of that primitive when it is replaced by a more efficient alternative. We further aggregate the fisher metric for all layers in a neural network to score an entire network as shown in the following equations:\nfisher : Sz(z) = ( ∂L ∂z z )2 , Sn = M∑ i=1 Sz(zi) (2)\nwhere Sz is the saliency per activation z, and M is the length of the vectorized feature map." }, { "heading": "3.2.3 JACOBIAN COVARIANCE", "text": "This metric was purpose-designed to score neural networks in the context of NAS – we refer the reader to the original paper for detailed reasoning and derivation of the metric which we call jacob cov (Mellor et al., 2020). In brief, this metric captures the correlation of activations within a network when subject to different inputs within a minibatch of data – the lower the correlation, the better the network is expected to perform as it can differentiate between different inputs well." }, { "heading": "4 EMPIRICAL EVALUATION OF PROXY TASKS", "text": "Generally, most of the proxies presented in the previous section try to capture how trainable a neural network is by inspecting the gradients at the beginning of training. In this work, we refrain from attempting to explain precisely why each metric works (or does not work) and instead focus on the empirical evaluation of those metrics in different scenarios. 
We use the Spearman rank correlation coefficient (Spearman ρ) to quantify how well a proxy ranks models compared to the ground-truth ranking produced by final test accuracy (Daniel, 1990).
¹The full Hessian does not need to be explicitly constructed, as explained by Pearlmutter (1993)." }, { "heading": "4.1 NAS-BENCH-201", "text": "NAS-Bench-201 is a purpose-built benchmark for prototyping NAS algorithms (Dong & Yang, 2020). It contains 15,625 CNN models from a cell-based search space and corresponding training statistics. We first use NAS-Bench-201 to evaluate conventional proxies from EcoNAS, then we evaluate our zero-cost proxies and compare the two approaches." }, { "heading": "4.1.1 ECONAS PROXY ON NAS-BENCH-201", "text": "Even though Zhou et al. (2020) thoroughly investigated reduced-training proxies, they only evaluated a small model zoo consisting of 50 models. To study EcoNAS more extensively, we evaluate it on all 15,625 models in the NAS-Bench-201 search space (training details in A.1). The full-configuration training of NAS-Bench-201 on CIFAR-10 uses input resolution r=32, number of channels in the stem convolution c=16 and number of epochs e=200 – we summarize this as: r32c16e200. According to the EcoNAS study, the most effective configuration divides both the input resolution and stem channels by ~4 and the number of epochs by 10, that is, r8c4e20 for NAS-Bench-201 models. Keeping that in mind, we investigate r8c4 in Fig. 1 (labeled econas); however, this proxy training seems to suffer from overfitting, as correlation to final accuracy started to drop after 20 epochs. Additionally, the Spearman ρ was a modest 0.61 when evaluated on all 15,625 models in NAS-Bench-201 – a far cry from the 0.87 achieved on the 50 models in the EcoNAS paper (Zhou et al., 2020). We additionally explore r8c8, r16c4 and r16c8 and find a very good proxy with r16c8e15, labeled in Fig. 1 as econas+. From the plots in Fig. 1, we would like to highlight that:
1. A reduced-training proxy that works well on one search space may not work well on another, as highlighted by the difference in Spearman ρ between econas and econas+. This occurs even though both tasks in this case were CIFAR-10 image classification.
2. Even though EcoNAS-style proxies reduce computation load by a large factor (as seen in the middle plot in Fig. 1), this does not translate fully into actual runtime improvement when run on a nominal desktop GPU². We therefore plot actual GPU speedup in the third subplot in Fig. 1. For example, notice that the point labeled econas (r8c4e20) has the same FLOPS as ~1/10 of a full training epoch, but when measured on a GPU, takes time equivalent to 5 full training epochs – a 50× gap between theoretical and actual speedup." }, { "heading": "4.1.2 ZERO-COST PROXIES ON NAS-BENCH-201", "text": "We now shift our focus towards our zero-cost NAS proxies, which rely on gradient computations using a single minibatch of data at initialization. A clear advantage of zero-cost proxies is that they take very little time to compute – the forward/backward pass using a single minibatch of data. We ran the zero-cost proxies on all 15,625 models in NAS-Bench-201 for three image classification datasets, and we summarize the results in Table 1. The synflow metric performed the best on all three datasets with a Spearman ρ consistently above 0.73; jacob cov was second best but was also very well-correlated to final accuracy. Next came grad norm and snip with a Spearman ρ close to 0.6. 
We add another metric, which we simply label vote, that takes a majority vote between the three metrics synflow, jacob cov and snip when ranking two models (sketched in code below). This performed better than any single metric, with a Spearman ρ consistently above 0.8. At the cost of just 3 minibatches instead of ~1000, this is already performing slightly better than econas+, and much better than econas, as shown in Fig. 2a. In Fig. 2 we also plot the rank correlation of validation accuracy (without any reduced training) over the first 10 epochs of training for the three datasets available in NAS-Bench-201. Having set a comparison point with EcoNAS and reduced-training proxies, we have shown that zero-cost proxies can match and outperform these conventional methods in a large-scale empirical analysis. However, different NAS search spaces may behave differently, so in the remainder of this section, we test the zero-cost proxies on different search spaces.

2We used an Nvidia Geforce GTX 1080 Ti and ran a random sample of 10 models for 10 epochs to get an average time-per-epoch for each proxy at different batch sizes. We discuss this further in Section A.2." }, { "heading": "4.2 MODELS IN THE WILD (PYTORCHCV)", "text": "To study zero-cost proxies in a different setting, we scored the models in the PyTorchCV database (Sémery, 2020). PyTorchCV contains common state-of-the-art neural networks such as ResNets (He et al., 2016), DenseNets (Huang et al., 2017), MobileNets (Howard et al., 2017) and EfficientNets (Tan & Le, 2019a) – a representative assortment of top-performing models. We evaluated ~50 models for CIFAR-10, CIFAR-100 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011), and ~200 models for ImageNet1k (Deng et al., 2009). Fig. 3 shows the resulting correlation for the zero-cost metrics. synflow, snip, fisher and grad norm all perform similarly well on all datasets, with the exception of SVHN, where synflow outperforms the other metrics by a large margin. However, grasp failed completely in this setting, as shown by the low mean Spearman ρ and high variance in Fig. 3. Curiously, jacob cov also failed in this setting even though it performed well on NAS-Bench-201. This suggests that this metric is better at scoring models from within a search space (similar topology and size), but becomes worse when scoring unrelated models." }, { "heading": "4.3 OTHER SEARCH SPACES", "text": "We investigate our zero-cost metrics with other NAS benchmarks. Our goal is to empirically find a good metric to speed up NAS algorithms reliably on different tasks and datasets.

• NAS-Bench-101: This is the first and largest NAS benchmark available, with over 423k CNN models and training statistics on CIFAR-10 (Ying et al., 2019).

• NAS-Bench-NLP: Klyuchnikov et al. (2020) investigate the architectures of 14k different recurrent cells in natural language processing (NLP) tasks such as next-word prediction.

• NAS-Bench-ASR: This is our in-house dataset for convolution-based automatic speech recognition models evaluated on the TIMIT dataset (Garofolo et al., 1993). The search space includes linear, convolution, zeroize and skip-connections, forming 8242 models (Mehrotra et al., 2021).

Compared to NAS-Bench-201, these datasets are either much larger (NAS-Bench-101) or based on a different task (NAS-Bench-NLP/ASR). From Table 2 we would like to highlight that the synflow metric (highlighted in bold) is the only consistent one across all analyzed benchmarks.
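As referenced in Section 4.1.2, the vote metric reduces to a pairwise majority vote; a minimal sketch, where the per-model score dictionaries and the win-counting aggregation for a full ranking are our own assumptions:

```python
def vote_prefers_a(scores_a, scores_b,
                   metrics=("synflow", "jacob_cov", "snip")):
    # Each proxy casts one vote for the model it scores higher.
    votes_for_a = sum(scores_a[m] > scores_b[m] for m in metrics)
    return votes_for_a >= 2  # majority of the three metrics

def vote_ranking(all_scores):
    # Order models by how many pairwise majority votes they win.
    wins = [sum(vote_prefers_a(a, b) for b in all_scores if b is not a)
            for a in all_scores]
    return sorted(range(len(all_scores)), key=wins.__getitem__, reverse=True)
```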
Additionally, even for the synflow metric, rank correlation is quite a bit lower than that for NAS-Bench-201 (~0.3 vs. ~0.8). Other than global rank correlation, we posit that the ranking of top models from a search space is also critically important for NAS algorithms – this is because we ultimately care about finding those top models. In Section A.4 we perform an analysis of how top models are ranked by zero-cost proxies. Additionally, local rank correlation of top models could be important for NAS algorithms when two good models are compared using their proxy metric value. Tables 9 and 10 show that the only metric that maintains correct ranking among top models consistently across all NAS benchmarks is synflow. In Section 5 we deliberately evaluate three benchmarks that exhibit different levels of rank correlation – NAS-Bench-201/101/ASR – to see if we can integrate synflow within NAS and achieve consistent gains for all three search spaces." }, { "heading": "5 ZERO-COST NAS", "text": "Mellor et al. (2020) proposed using jacob cov to score a set of randomly-sampled models and to greedily choose the model with the highest score. This “NAS without training” methodology is very attractive thanks to its simplicity and low computational cost. In this section, we evaluate our metrics in this setting, which we simply call “random search” (RAND). We extend this methodology slightly: instead of just training the top model, we keep training models (from best to worst as ranked by the zero-cost metric) until the desired accuracy is achieved. However, this approach can only produce results that are as good as the metric being used – and we have no guarantees (just empirical evidence) that these metrics will perform well on all datasets. Therefore, we also investigate how to integrate zero-cost metrics within existing NAS algorithms such as reinforcement learning (RL) (Zoph & Le, 2017), aging evolution (AE) search (Real et al., 2019) and predictor-based search (Dudziak et al., 2020). More specifically, we investigate enhancing these search algorithms through either (a) a zero-cost warmup phase or (b) zero-cost move proposal." }, { "heading": "5.1 ZERO-COST WARMUP", "text": "Generally speaking, by warmup we mean using the zero-cost proxies at the beginning of the search process to initialize the search algorithm without training any models or using accuracy. The main parameter in zero-cost warmup is the number of models for which we compute and use the zero-cost metric (N), and the potential gain comes from the fact that this number can usually be much larger than the number of models we can afford to train (T ≪ N).

Aging Evolution We score N random models with our proxy metric and choose the ones ranked highest as the initial population (pool) in the aging evolution (AE) algorithm (Real et al., 2019).

Reinforcement Learning In the REINFORCE algorithm (Zoph & Le, 2017), we sample N random models and use their zero-cost scores to reward the controller, thus biasing it towards selecting architectures which are likely to have higher values of the chosen metrics. During warmup, the reward for the controller is calculated by linearly normalizing the values returned by the proxy functions to the range [−1, 1] (with online adjustment of min and max).

Binary Predictor We warm up a binary graph convolutional network (GCN) predictor from Dudziak et al. (2020) by training it to predict the relative performance of two models by considering their zero-cost scores instead of accuracy.
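All three warmup variants share the same skeleton: score many cheap candidates with a zero-cost proxy, then bias the search towards the best of them. A minimal sketch for seeding the AE pool – the sampling and proxy interfaces are our own assumptions:

```python
def zero_cost_warmup(sample_fn, proxy_fn, n_warmup, pool_size):
    # Score n_warmup random architectures with a zero-cost proxy (no
    # training) and keep the top-scoring ones as the initial AE pool.
    candidates = [sample_fn() for _ in range(n_warmup)]
    candidates.sort(key=proxy_fn, reverse=True)
    return candidates[:pool_size]
```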
For the binary predictor, given N warmup points, we use the relative rankings (according to the zero-cost metric) of all pairs of models (0.5N(N−1) pairs) when performing warmup training for the predictor. As in Dudziak et al. (2020), models are ranked by the predictor after each training round (including the warmup phase) and the top models are evaluated." }, { "heading": "5.2 ZERO-COST MOVE PROPOSAL", "text": "Whereas warmup tries to leverage the global correlation of the proxy metrics to the accuracy of models, move proposal focuses on a local neighborhood at each step. A common parameter for move proposal algorithms is denoted R, the sample ratio, i.e., how many models can be checked using zero-cost metrics each time we select a model to train.

Aging Evolution The algorithm is enhanced by performing “guided” mutations (sketched in code below). More specifically, each time a model is being mutated (in the baseline algorithm this is done randomly) we consider all possible mutations with edit distance 1 from the current model, score them using the zero-cost proxies and select the best one to add to the pool.

Reinforcement Learning In the case of REINFORCE, move proposal is similar to warmup – instead of rewarding the controller N times before the search begins, we interleave R zero-cost rewards for each accuracy reward (R ≪ N)." }, { "heading": "5.3 RESULTS", "text": "For all NAS experiments, we repeat each search 32 times and plot the median, shading between the lower/upper quartiles. Our baselines are already heavily tuned and achieve the same or better results than those reported in the original NAS-Bench-101/201 papers. When adding zero-cost warmup or move proposal with synflow, we leave all search hyper-parameters unchanged.

NAS-Bench-201 The global/top-10% rank correlations of synflow for this dataset are (0.76/0.42), so we expect this proxy to perform quite well. Indeed, as Figure 4 and Table 7 show, we improve search speed on all four types of searches using zero-cost warmup and move proposal. RAND and RL are both significantly improved, both in terms of sample efficiency and final achieved accuracy. But even more powerful algorithms like AE and BP exhibit 5.6× and 2.3× speedups respectively to arrive at 73.5% accuracy. Generally, the more zero-cost warmup, the better the results. This holds true for all algorithms except RL, which degrades at 15k warmup points, suggesting that the controller is overfitting to the synflow metric instead of learning to optimize for accuracy.

NAS-Bench-101 This dataset is an order of magnitude larger than NAS-Bench-201 and has lower global/top-10% rank correlations of (0.37/0.14). In many ways, this provides a true test as to whether these lower correlations are still useful with zero-cost warmup and move proposal. Table 3 shows a summary of the results and Figure 7 (in Section A.6) shows the full plots. As the table shows, even with modest correlations, there is a major boost to all searching algorithms, thus outperforming the best previously published result by a large margin and setting a new state-of-the-art result on this dataset. However, it is worth noting that the binary predictor exhibits no improvement (but also no degradation). Perhaps this is because it was already very sample-efficient and synflow warmup couldn’t help further due to its relatively poor correlation on this dataset.

NAS-Bench-ASR We repeat our evaluation on NAS-Bench-ASR with global/top-10% correlations of (0.40/0.40).
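Before turning to the NAS-Bench-ASR numbers, here is the guided mutation referenced in Section 5.2 as a minimal sketch; the neighbourhood function is an assumed search-space interface:

```python
def guided_mutation(arch, neighbours_fn, proxy_fn):
    # Instead of a random mutation, enumerate all edit-distance-1 mutations
    # of `arch` and keep the one the zero-cost proxy scores highest.
    return max(neighbours_fn(arch), key=proxy_fn)
```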
Even though this is a different task (speech recognition), synflow warmup and move proposal both yield large improvements in search speeds compared to all baselines in Figure 5 and Table 8. For example, to achieve a phoneme error rate (PER) of 21.3%, baseline RAND and RL required >1000 trained models, and AE required 138 trained models; however, this is reduced to 68, 173 and 87 trained models with 2000 models of zero-cost warmup." }, { "heading": "6 DISCUSSION", "text": "In this section we investigate why zero-cost NAS is effective in improving the sample efficiency of NAS algorithms by looking more closely at how top models are selected by the synflow proxy.

Warmup Table 4 shows the number of top-5% most-accurate models ranked within the top 64 models by the synflow metric. If we compare random warmup versus zero-cost warmup with synflow, random warmup will only return 5%, or ~3 models out of 64, that are within the top 5% of models, whereas synflow warmup returns a higher number of top-5% models, as listed in Table 4. This is key to the improvements observed when adding zero-cost warmup to algorithms like random search or AE. For example, with AE, the numbers in Table 4 are indicative of the models that may end up in the initial AE pool. By initializing the AE pool with many good models, it becomes more likely that a random mutation will lead to an even better model, thus allowing the search to find a top model more quickly. Note that synflow is able to rank many good models in its top 64 models even when global/local correlation is low (as is the case for NAS-Bench-ASR).

Move Proposal For a search algorithm like AE, search moves consist of random mutations (with edit distance 1 for our experiments) of a model from the AE pool. Zero-cost move proposal enhances this by trying out all possible mutations and selecting the best one according to synflow. To investigate how this improves search efficiency, we took 1000 random points and explored their local neighbourhood cluster of possible mutations. Table 5 shows the probability that the synflow proxy correctly identifies the top model. Indeed, synflow improves the chance of selecting the best mutation from ~4% to >30% for NAS-Bench-201 and to 12% for NAS-Bench-101. Even for NAS-Bench-ASR, a random mutation has a 7.7% chance (= 1/13) to select the best mutation, but this increases to 10% with the synflow proxy, thus speeding up convergence to top models." }, { "heading": "7 CONCLUSION", "text": "In this paper, we introduced six zero-cost proxies, mainly based on recent pruning-at-initialization work, that are used to rank neural network models in NAS. First, we compared to conventional proxies (EcoNAS) that perform reduced-computation training and found that zero-cost proxies such as synflow can outperform EcoNAS in maintaining rank consistency. Next, we verified our zero-cost metrics on four additional datasets of varying sizes and tasks and found that, out of the six initially-considered zero-cost metrics, only synflow was robust across all datasets for both global and top-10% rank correlation. Finally, we proposed two ways to integrate synflow within NAS algorithms: zero-cost warmup and zero-cost move proposal. Both methods demonstrated significant speedups across four search algorithms and three NAS benchmarks, setting new state-of-the-art results for both the NAS-Bench-101 and NAS-Bench-201 datasets.
Our strong and consistent empirical results suggest that the synflow metric, when combined with warmup and move proposal, can be an effective and reliable methodology for speeding up different NAS algorithms. We hope that our work lays a foundation for further zero-cost techniques that expose favourable model properties with little computation, thus making NAS more readily accessible without exorbitant computing resources. The most immediate open question for future investigation is why the synflow proxy works well – analytical insights will enable further research in zero-cost NAS proxies." }, { "heading": "A APPENDIX", "text": "Because this paper is empirically driven, there are many more results than what we presented in the main text of the paper. In the appendix we list many important results that support our main arguments and hypotheses in the main text of this paper.

A.1 EXPERIMENTAL DETAILS

In Table 6 we list the hyper-parameters used in training the EcoNAS proxies to produce Figure 1. The only difference to the standard NAS-Bench-201 training pipeline (Dong & Yang, 2020) is our use of fewer epochs for the learning rate annealing schedule – we anneal the learning rate to zero over 40 epochs instead of 200. This is a common technique used in speeding up convergence for training proxies (Zhou et al., 2020). We acknowledge that slightly better correlations could have been achieved for the econas and econas+ proxies in Figure 1 if the learning rate was annealed to zero over fewer epochs (20 and 15 epochs respectively). However, we do not anticipate the results to change significantly.

One additional comment regarding Figure 1 in the main paper: while we run the training ourselves for all EcoNAS variants in the plot, we take the data for the line labeled baseline directly from the NAS-Bench-201 dataset. We are not sure why the line is not smooth like the lines for the EcoNAS variants that we trained, but assume that this is an artifact of averaging over multiple seeds in the NAS-Bench-201 dataset. In any case, we do not anticipate that this would change any conclusions or observations that we draw from this plot. Finally, we would like to note some details about our NAS experiments in Section 5. NAS datasets provide multiple seeds of results for each model, so whenever we “train” a model, we query a random seed from the database to mimic a real NAS pipeline without caching. We refer the reader to Dudziak et al. (2020), specifically Section S3.2, for more details on this.

A.2 GPU RUNTIME FOR ECONAS

Figure 6 shows the speedup of different EcoNAS proxies compared to baseline training. Even though r8c4 has 64× less computation compared to r32c16, it achieves a maximum of 4× real speedup even when the batch size is increased.

A.3 TABULATED RESULTS

This subsection contains tabulated results from Figures 4 and 5 to facilitate comparisons with future work. Tables 7 and 8 highlight important data points about the NAS searches we conducted with NAS-Bench-201 and NAS-Bench-ASR respectively. We highlight results in two ways: First, we show the accuracy of the best model found after 50 trained models. Second, we indicate the number of trained models needed for each search method to reach a specific accuracy (73.5% CIFAR-10 classification accuracy for NAS-Bench-201 and 21.3% TIMIT PER).
We colour the best results (red) and the second-best results (blue) in each table.

A.4 ANALYSIS OF THE TOP 10% OF MODELS

In the main text we pointed to the fact that only synflow achieves consistent rank correlation for the top-10% of models across different datasets. Here, in Table 9, we provide the full results. Additionally, we hypothesized that a successful metric will rank many of the most-accurate models in its top models. In Table 10 we enumerate the percentage of top-10% most-accurate models ranked as top-10% by each proxy metric. Again, synflow is the only consistent metric for all datasets, and performs best on average.

A.5 ANALYSIS OF WARMUP AND MOVE PROPOSAL

This section provides more results relevant to our discussion in Section 6. Table 11 shows the number of top-5% models ranked in the top 64 models by each metric. This is an extension of Table 4 in the main text, which only shows the results for synflow. As shown in the table, synflow is the most powerful metric that we tried. Table 12 shows the rank correlation coefficient of models within 1000 randomly-sampled local clusters of models. This result highlights that both grad norm and jacob cov work well in distinguishing between very similar models. However, synflow is still consistently the best metric in this analysis. Furthermore, we measure the percentage of times that a metric correctly predicts the top model within a local cluster of models in Table 13. This is an extension of Table 5 in the main text. The results are averaged over 1000 randomly-sampled local clusters. Again, synflow has the highest probability of selecting the top model compared to other zero-cost metrics.

A.6 NAS-BENCH-101 SEARCH PLOTS

Figure 7 shows the NAS search curves for all considered algorithms on the NAS-Bench-101 dataset. Important points from this plot are summarized in Table 3 in the main text.

A.7 SENSITIVITY ANALYSIS

We performed a sensitivity analysis to investigate how the zero-cost metrics perform on all points within NAS-Bench-201 with different initialization seeds, initialization methods and minibatch sizes. We comment on each table in its caption; however, to summarize, all metrics seem to be relatively unaffected when initialization and minibatch size are varied. The one exception can be seen in Table 15, where fisher benefits when biases are initialized with zeroes.

A.8 RESULTS FOR ALL ZERO-COST METRICS

Here we provide some NAS search results using all considered metrics for both RAND and AE searches on the NAS-Bench-101/201 datasets. Our experiments point to synflow as the only effective zero-cost metric across different datasets; however, we provide the plots below for the reader to inspect how poorer metrics perform in NAS." } ]
2021
null
SP:ee844974cf8fa5c95205cf27dfc9b80a277aa469
[ "This work proposes to explain graph neural networks using hard masking techniques. Specifically, it tries to find the node mask $V_s$ and feature mask $F_s$ which can identify the most important information of the input such that the masked information can yield a high fidelity score. This work proposes a greedy method, ZORRO, to explore these hard masks, which can be used as the explanations of the prediction. Experimental results are interesting and promising. " ]
Graph Neural Networks (GNNs) are a flexible and powerful family of models that build nodes’ representations on irregular graph-structured data. This paper focuses on explaining or interpreting the rationale underlying a given prediction of already trained graph neural networks for the node classification task. Existing approaches for interpreting GNNs try to find subsets of important features and nodes by learning a continuous mask. Our objective is to find discrete masks that are arguably more interpretable while minimizing the expected deviation from the underlying model’s prediction. We empirically show that our explanations are both more predictive and sparse. Additionally, we find that multiple diverse explanations are possible, which sufficiently explain a prediction. Finally, we analyze the explanations to find the effect of network homophily on the decision-making process of GNNs.
[]
[ { "authors": [ "Alexander Binder", "Grégoire Montavon", "Sebastian Lapuschkin", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "Layer-wise relevance propagation for neural networks with local renormalization layers", "venue": "In International Conference on Artificial Neural Networks", "year": 2016 }, { "authors": [ "Brandon Carter", "Jonas Mueller", "Siddhartha Jain", "David Gifford" ], "title": "What made you do this? understanding black-box decisions with sufficient input subsets", "venue": "arXiv preprint arXiv:1810.03805,", "year": 2018 }, { "authors": [ "Jianbo Chen", "Le Song", "Martin J Wainwright", "Michael I Jordan" ], "title": "Learning to explain: An information-theoretic perspective on model interpretation", "venue": "arXiv preprint arXiv:1802.07814,", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Thorben Funke", "Tian Guo", "Alen Lancic", "Nino Antulov-Fantulin" ], "title": "Low-dimensional statistical manifold embedding of directed graphs", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Maximilian Idahl", "Megha Khosla", "Avishek Anand" ], "title": "Finding interpretable concept spaces in node embeddings using knowledge bases. In Machine Learning and Knowledge Discovery in Databases - International Workshops of ECML PKDD", "venue": null, "year": 2019 }, { "authors": [ "Bo Kang", "Jefrey Lijffijt", "Tijl De Bie" ], "title": "Explaine: An approach for explaining network embeddingbased link predictions", "venue": "arXiv preprint arXiv:1904.12694,", "year": 2019 }, { "authors": [ "M. Khosla", "V. Setty", "A. Anand" ], "title": "A comparative study for unsupervised network representation learning", "venue": null, "year": 2019 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Phillip E Pope", "Soheil Kolouri", "Mohammad Rostami" ], "title": "Explainability methods for graph convolutional neural networks", "venue": "In Proc. of the Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": " why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Anchors: High-precision model-agnostic explanations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Chris R Sims" ], "title": "Rate–distortion theory and human", "venue": "perception. 
Cognition,", "year": 2016 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhilin Yang", "William W. Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings, 2016", "venue": null, "year": 2016 }, { "authors": [ "Rex Ying", "Dylan Bourgeois", "Jiaxuan You", "Marinka Zitnik", "Jure Leskovec" ], "title": "Gnn explainer: A tool for post-hoc explanation of graph neural networks", "venue": null, "year": 1903 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Invase: Instance-wise variable selection using neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Hao Yuan", "Jiliang Tang", "Xia Hu", "Shuiwang Ji" ], "title": "Xgnn: Towards model-level explanations of graph neural networks", "venue": "In KDD", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Neural Networks (GNNs) are a flexible and powerful family of models that build representations of nodes or edges on irregular graph-structured data and have experienced significant attention in recent years. These methods are based on the so-called “neighborhood aggregation” scheme in which a node representation is learned by aggregation of features from their neighbors and have achieved state-of-the-art performance on node and graph classification tasks. Despite their popularity, approaches investigating their interpretability have received limited attention. This paper focuses on explaining or interpreting the rationale underlying a given prediction of already trained graph neural networks.\nThere have been numerous approaches proposed in the literature for the general interpretability of machine learning models. The most popular approaches are feature attribution methods that intend to attribute importance to input features given an input prediction either agnostic to the model parameter (Ribeiro et al., 2018; 2016) or using model-specific attribution approaches (Xu et al., 2015; Binder et al., 2016; Sundararajan et al., 2017). However, models learned over graph-structured data have some unique challenges. Specifically, predictions on graphs are induced by a complex combination of nodes and paths of edges between them in addition to the node features. Thus explanations for a prediction should ideally be a small subgraph of the input graph and a small subset of node features that are most influential for the prediction (Ying et al., 2019).\nThe only existing approach for GNN explainability proposes to learn a real-valued graph mask that selects the important subgraph of the GNNs computation graph to maximize the mutual information with the GNNs prediction (Ying et al., 2019). We identify two crucial limitations of such an approach. Firstly, although mathematically tractable, a continuous mask does not ensure sparsity compared to a discrete mask – a desirable property for interpretability. Secondly, suitable notions of what constitutes an explanation in a GNN model and its evaluation are missing.\nThis paper proposes an alternate notion of interpretability for GNNs grounded in ideas from data compression in information theory. Specifically, we consider an explanation as a compressed form of the original feature matrix. The goodness of the explanation is measured by the expected deviation from the prediction of the underlying model. We formalize this idea of interpreting GNN decisions as an explicit optimization problem in a rate-distortion framework. A subgraph of the node’s computational graph and its set of features are relevant for a classification decision if the expected classifier score remains nearly the same when randomizing the remaining features. This\nformulation is arguably both a crisp, robust, and understandable notion of interpretability that is easy to evaluate. We propose a simple combinatorial procedure ZORRO that aims to find a sparse subset of features and nodes in the computational graph while adhering to a user-specified level of fidelity. Our method aims to find multiple disjoint explanations (whenever possible) that guarantee an acceptable lower bound on fidelity to the model’s decision.\nAnother key problem in post-hoc interpretability of GNNs is that of evaluating explanation methods. Current evaluation methods, such as those used by GNNEXPLAINER, are primarily anecdotal and lack principled metrics. 
Secondly, especially for real-world datasets, there is no ground truth for the explanation, making comparison difficult. We, on the other hand, posit that an explanation is faithful to the underlying model if it retains enough predictive power – a crisp and measurable quantity. To this end, our optimization metric, fidelity, encodes an information-theoretic interpretation of explanation – if the explanation is highly predictive in expectation, then it is a high-quality explanation.

We conducted extensive experimentation on three datasets and four diverse GNN approaches – Graph Convolution Networks (Kipf & Welling, 2017), Graph Attention Networks (Veličković et al., 2018), GIN (Xu et al., 2019), and APPNP (Klicpera et al., 2019). Our key findings are as follows.

1. We show that not one but multiple diverse explanations are possible that sufficiently explain a prediction. This multiplicity of explanations indicates the possible configurations that could be utilized by the model to arrive at a decision.

2. Unlike earlier mutual-information-preserving interpretability approaches, i.e. GNNEXPLAINER (Ying et al., 2019), we show that our explanations are both more predictive and sparse. We show that even with sparser explanations, our approach contains far more predictive capacity than GNNEXPLAINER.

3. We then analyze the explanations across multiple GNN models to showcase differences in their learning behavior. We specifically show that GNN models rely heavily on homophily and that prediction errors are due to the inability to capture homophilic signals from their neighborhoods." }, { "heading": "2 RELATED WORK", "text": "Representation learning approaches on graphs encode graph structure, with or without node features, into low-dimensional vector representations using deep learning and nonlinear dimensionality reduction techniques. These representations are trained in an unsupervised (Perozzi et al., 2014; Khosla et al., 2019; Funke et al., 2020) or semi-supervised manner by using neighborhood aggregation strategies and task-based objectives (Kipf & Welling, 2017; Veličković et al., 2018).

This work focuses on the post-hoc interpretability of decisions made by semi-supervised models based on graph convolution networks for node classification tasks. Inspired by the success of convolutional neural networks, the graph convolution network (GCN) (Kipf & Welling, 2017) generalizes the convolution operation to irregular graph data. GCN and several of its variants follow a neighborhood aggregation strategy where they compute a node’s representation by recursive aggregation and transformation of the feature representations of its neighbors. For the node classification task, the final node representations are then used to predict the classes of unlabelled nodes.

Interpretability in Machine Learning. Post-hoc approaches to model interpretability are popularized by feature attribution methods that aim to assign importance to input features given a prediction, either agnostic to the model parameters (Ribeiro et al., 2018; 2016) or using model-specific attribution approaches (Xu et al., 2015; Binder et al., 2016; Sundararajan et al., 2017). Instance-wise feature selection (IFS) approaches (Chen et al., 2018; Carter et al., 2018; Yoon et al., 2018), on the other hand, focus on finding a sufficient feature subset or explanation that leads to little or no degradation of the prediction accuracy when other features are masked.
The advantage of this formulation is that the output explanation has a precise meaning in terms of the predictive power of the chosen subset. Applying these works directly to graph models is infeasible due to the complex form of the explanation, which should consider the complex association among nodes in addition to the input features.

Interpretability in GNNs. Model-agnostic approaches like ours to interpretability in GNNs include GNNEXPLAINER (Ying et al., 2019) and XGNN (Yuan et al., 2020). GNNEXPLAINER learns a real-valued graph mask and feature mask such that the mutual information with the GNN’s predictions is maximized. XGNN proposed a reinforcement-learning-based graph generation approach to generate explanations for the predicted class of a graph. We instead focus on explaining node-level decisions. As a model-introspective approach, Pope et al. (2019) extended the gradient-based saliency map methods to GCNs, which rely on propagating gradients/relevance from the output to the original model’s input features. Other works (Kang et al., 2019; Idahl et al., 2019) focus on explaining unsupervised network representations, which is out of scope for the current work." }, { "heading": "3 PROBLEM DEFINITION AND APPROACH", "text": "" }, { "heading": "3.1 BACKGROUND ON GNNS", "text": "Let G = (V, E) be a graph where each node is associated with a d-dimensional input feature vector. Graph neural networks compute node representations by recursive aggregation and transformation of the feature representations of their neighbors, which are finally used for label prediction. Formally, for an L-layer GNN, let $x_n^{(\ell)}$ denote the feature representation of node $n \in V$ at layer $\ell \in \{1, \ldots, L\}$ and let $\mathcal{N}_n$ denote the set of its 1-hop neighbors. $x_n^{(0)}$ corresponds to the input feature vector of n. The $\ell$-th layer of a GNN can then be described as an aggregation of node features from the previous layer followed by a transformation operation:

$$z_n^{(\ell)} = \text{AGGREGATION}^{(\ell)}\Big(\big\{ x_n^{(\ell-1)}, \{ x_j^{(\ell-1)} \mid j \in \mathcal{N}_n \} \big\}\Big) \quad (1)$$

$$x_n^{(\ell)} = \text{TRANSFORMATION}^{(\ell)}\big( z_n^{(\ell)} \big) \quad (2)$$

Each GNN defines its own aggregation function, which is differentiable and usually a permutation-invariant function. The transformation operation is usually a non-linear transformation employing the ReLU activation. The final node embedding $z_n^{(L)}$ is then used to make the prediction

$$\Phi(n) \leftarrow \arg\max \sigma\big(z_n^{(L)} W\big), \quad (3)$$

where σ is a sigmoid or softmax function, depending on whether the node belongs to multiple classes or a single class, and W is a learnable weight matrix. The i-th element of $z_n^{(L)} W$ corresponds to the (predicted) probability that node n is assigned to some class i." }, { "heading": "3.2 PROBLEM FORMULATION", "text": "We are interested in explaining the prediction of a GNN Φ(n) for any node n. We note that for a particular node n, the subgraph taking part in the computation of the neighborhood aggregation operation, see Eq. (1), fully determines the information used by the GNN to predict its class. In particular, for an L-layer GNN, this subgraph would be the graph induced on the nodes in the L-hop neighborhood of n. We will call this subgraph the computational graph of the query node. We would like to point out that the term computational graph should not be confused with the computational graph of the neural network. Let G(n) ⊆ G denote the computational graph of the node n.
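For concreteness, one layer in the spirit of Eqs. (1)-(2) can be sketched as follows; the mean aggregator, the dense adjacency representation, and all names are our illustrative choices:

```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    # One GNN layer: mean AGGREGATION over neighbours (Eq. 1) followed by a
    # learned TRANSFORMATION with a ReLU non-linearity (Eq. 2).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n, in_dim) node features; adj: dense (n, n) adjacency matrix
        # with self-loops so that a node also aggregates its own features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        z = adj @ x / deg
        return torch.relu(self.linear(z))
```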
Let X(n), or briefly X, denote the feature matrix restricted to the nodes of G(n), where each row corresponds to the d-dimensional feature vector of the corresponding node in the computational graph.

We formulate the task of explaining the model prediction for a node n as finding a partition of the components of its computational graph into a subset S of relevant nodes and features, and its complement Sc of non-relevant components. In particular, the subset S should be such that fixing its value to the true values already determines the model output for almost all possible assignments to the non-relevant subset Sc. The subset S is then returned as an explanation. To quantify relevance, we compute the expected value of the fidelity of the model’s prediction under noisy assignments to the non-relevant components.

Let us denote by YS the new perturbed feature matrix obtained by fixing the components of S to their actual values and otherwise using noisy entries. The values of the components in Sc are drawn from some noisy distribution N. Let S = {Vs, Fs} be the explanation with selected nodes Vs and selected features Fs. Let $\mathcal{S}$ be the mask matrix such that each element $\mathcal{S}_{i,j} = 1$ if and only if the i-th node (in G(n)) and the j-th feature are included in the sets Vs and Fs respectively, and 0 otherwise:

$$Y_S = X \odot \mathcal{S} + Z \odot (\mathbf{1} - \mathcal{S}), \quad Z \sim \mathcal{N}, \quad (4)$$

where $\odot$ denotes element-wise multiplication and $\mathbf{1}$ a matrix of ones of the corresponding size. Figure 1 shows how the fixed elements are selected by Fs and Vs.

Definition. The fidelity of an explanation S with respect to the graph neural network Φ and the noise distribution N is given by

$$\mathcal{F}(S) = \mathbb{E}_{Y_S \mid Z \sim \mathcal{N}}\big[ \mathbb{1}_{\Phi(X) = \Phi(Y_S)} \big]. \quad (5)$$

By fixing the fidelity to a certain user-defined threshold, say τ, we are then interested in all possible disjoint sets of explanations that have a fidelity of at least τ. More precisely, our resulting set of explanations R is given as

$$\mathcal{R} = \big\{ S_1, S_2, \ldots \;\big|\; \forall i\; \mathcal{F}(S_i) \geq \tau \text{ and } \textstyle\bigcap_i S_i = \emptyset \big\} \quad (6)$$

CONNECTION TO THE RATE-DISTORTION THEORY

Our problem formulation is inspired by rate-distortion theory (Sims, 2016), which addresses the problem of determining the minimal information of a source signal that should be communicated over a leaky channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D. In our problem, we are interested in finding a small subset S such that having knowledge only about the signal on S and filling in the rest of the information randomly will almost surely preserve the class prediction, provided our chosen subset contains the information that is relevant for the model’s decision. Rather than measuring distortion or disagreement in the model’s decisions, we instead measure fidelity or agreement among the model’s decisions with the original and the distorted signal, respectively. Distortion can be computed from fidelity as D = 1 − F. A schematic representation of our problem in terms of the rate-distortion framework is shown in Figure 2." }, { "heading": "3.3 OUR APPROACH: ZORRO", "text": "We propose a simple but effective greedy combinatorial approach, which we call ZORRO, to find the set of disjoint explanations with a desired level of fidelity. The pseudocode is provided in Algorithm 1. Let, for any node n, Vn denote the vertices in its computational graph G(n) and F denote the complete set of features.
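Eqs. (4)-(6) translate almost directly into code. The following is a minimal sketch – the noise sampler realizes our choice of N by drawing every unselected entry from the global empirical distribution of its feature column, and all names and tensor conventions are our own:

```python
import torch

def make_noise_sampler(x_full):
    # Noisy distribution N: each replacement value is drawn from the
    # empirical distribution of its own feature column over the dataset.
    n_rows = x_full.shape[0]
    def sample(shape):
        rows = torch.randint(0, n_rows, shape)
        cols = torch.arange(shape[1]).expand(shape)
        return x_full[rows, cols]
    return sample

def perturb(x, node_mask, feature_mask, noise_sampler):
    # Eq. (4): keep the selected (node, feature) entries of X and replace
    # everything else with noise; s is the binary mask matrix S.
    s = node_mask.unsqueeze(1) * feature_mask.unsqueeze(0)
    return x * s + noise_sampler(x.shape) * (1 - s)

def fidelity(model, x, edge_index, node, node_mask, feature_mask,
             noise_sampler, n_samples=100):
    # Monte-Carlo estimate of Eq. (5): the fraction of noisy completions
    # Y_S on which the prediction for the query node is unchanged.
    base = model(x, edge_index)[node].argmax()
    hits = sum(int(model(perturb(x, node_mask, feature_mask, noise_sampler),
                         edge_index)[node].argmax() == base)
               for _ in range(n_samples))
    return hits / n_samples
```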
We start with zero-sized explanations and select as first element

$$\arg\max_{f \in F} \mathcal{F}(V_n, \{f\}) \quad \text{or} \quad \arg\max_{v \in V_n} \mathcal{F}(\{v\}, F), \quad (7)$$

whichever yields the highest fidelity value. We iteratively add new features or nodes to the explanation such that the fidelity is maximized over all evaluated choices. Let Vp and Fp respectively denote the set of possible candidate nodes and features that can be included in an explanation at any iteration. We save for each possible node v ∈ Vp and feature f ∈ Fp the orderings RVp and RFp given by the fidelity values F({v}, Fp) and F(Vp, {f}) respectively. To reduce the computational cost, we only evaluate in each iteration the top K remaining nodes and features determined by RVp and RFp.

Once we have found an explanation with the desired fidelity, we discard the chosen elements from the feature matrix X, i.e., we never consider them again as possible choices in computing the next explanation. We repeat the process by finding relevant selections completely disjoint from the ones already found. To ensure that disjoint elements of the feature matrix X are selected, we recursively call Algorithm 3 with either the remaining (not yet selected in any explanation) set of nodes or features. Finally, we return the set of explanations such that the fidelity of τ cannot be reached by using all the remaining components that are not in any explanation. For a detailed explanation of the algorithm details and the reasoning behind various design choices, we refer to Appendix A.

Algorithm 1 ZORRO(n, τ, K)
1: Vn ← set of vertices in G(n)
2: F ← set of node features
3: return GetExplanations(τ, K, Vn, F)

Algorithm 2 F(Vs, Fs)
1: for i = 0, . . . , samples do
2:   Set Y{Vs,Fs}, i.e. fix the selected values and otherwise retrieve random values from the respective columns of X
3:   if Φ(Y{Vs,Fs}) matches the original prediction of the model then
4:     correct += 1
5: return correct / samples

Algorithm 3 GetExplanations(τ, K, Vp, Fp)
1: S = ∅, Vr = Vp, Fr = Fp, Vs = ∅, Fs = ∅
2: RVp ← list of v ∈ Vp sorted by F({v}, Fp)
3: RFp ← list of f ∈ Fp sorted by F(Vp, {f})
4: Add maximal element to Vs or Fs as in (7)
5: while F(Vs, Fs) < τ do
6:   Ṽs = Vs ∪ argmax over v ∈ topK(Vr, RVp) of F({v} ∪ Vs, Fs)
7:   F̃s = Fs ∪ argmax over f ∈ topK(Fr, RFp) of F(Vs, {f} ∪ Fs)
8:   if F(Ṽs, Fs) ≤ F(Vs, F̃s) then
9:     Fr = Fr \ {f}, Fs = F̃s
10:  else
11:    Vr = Vr \ {v}, Vs = Ṽs
12: S = S ∪ {Vs, Fs}
13: S = S ∪ GetExplanations(τ, K, Vp, Fr)
14: S = S ∪ GetExplanations(τ, K, Vr, Fp)
15: return S

The pseudocode to compute fidelity is provided in Algorithm 2. Specifically, we generate the obfuscated instance Y{Vs,Fs} for a given explanation S = {Vs, Fs} by setting the feature values for the selected node set Vs corresponding to the selected features in Fs to their true values. Figure 1 visualizes how the fixed elements of the feature matrix are determined by the choice of node mask Vs and feature mask Fs. To set the irrelevant values, we randomly choose a value from the set of all possible values for that particular feature in the dataset X. To approximate the expected value in Eq. (5), we generate a finite number of samples of YS. We then compute the fidelity as the fraction of samples for which the model’s decision matches its original decision. Our implementation of the proposed algorithm will be made public after publication.

CHOICE OF NOISY DISTRIBUTION N

One might argue that the irrelevant components can be set to 0 rather than any specific noisy value.
However, this might lead to several side effects, e.g., for datasets in which a feature value of 0 is not allowed or has some specific semantics, or for models with a specific pooling strategy, for example minpool. More specifically, the idea of an irrelevant component is not that it is missing, but that its value does not matter. Therefore, to account for the irrelevancy of certain components given our explanation, we need to check multiple noisy instantiations of the unselected components. Our choice of using the global distribution of features as the noisy distribution ensures that only plausible feature values are used. Moreover, our choice does not increase the bias towards specific values, which we would have by taking fixed values such as 0 or averages." }, { "heading": "4 EXPERIMENTS", "text": "In our experiments, we tried to answer three primary research questions: RQ 1. How often do multiple explanations exist for a given prediction? What are the sparsity-fidelity trade-offs for our approach? RQ 2. How effective is ZORRO compared to existing approaches in terms of fidelity and sparsity? RQ 3. Can we discover differences in model behavior by post-hoc analysis of explanations?

To answer these research questions, we use the datasets Cora, CiteSeer and PubMed from Yang et al. (2016). We evaluate our approach on four different two-layer graph neural networks: the graph convolutional network (GCN) (Kipf & Welling, 2017), the graph attention network (GAT) (Veličković et al., 2018), the approximation of personalized propagation of neural predictions (APPNP) (Klicpera et al., 2019), and the graph isomorphism network (GIN) (Xu et al., 2019). Table 5 shows the statistics of the datasets and models. For more details of our experimental setup, we refer to Appendix C." }, { "heading": "4.1 MULTIPLICITY AND SIZE OF EXPLANATIONS", "text": "Our first result is that multiple (disjoint) explanations are indeed possible and are frequent. Figure 3 shows the number of nodes having multiple explanations. We observe that, without exception, all GNN models yield multiple disjoint explanations, with ≈ 50% of the 300 nodes under study having 2 to 10 explanations. The disjoint explanations produced by our algorithm can be understood as disjoint pieces of evidence that would lead the model to the same decision. We expect a much larger number of overlapping explanations if the restrictive condition on disjointness is relaxed. However, the objective here is to show that a decision can be reached in multiple ways, and each explanation is a practical realization of a possible combination of nodes and features that constitutes a decision. We are the first to establish the multiplicity of explanations for model predictions, unlike Ying et al. (2019), which outputs only one explanation as a soft mask over features and edges.

In general, shorter or sparser explanations are more human-interpretable and hence more desirable. We conducted experiments using two fidelity thresholds (τ = 0.85 and τ = 0.98) and compared the sizes of the first found explanation (see Figure 4). A key observation in this result is that ZORRO is still able to find sparse explanations for high fidelity requirements, i.e., at τ = 0.98, suggesting that only a small number of nodes and features are required for most of the predictions. Although there are differences between datasets, the choice of τ has a limited influence on the explanations’ size.
Interestingly, we find that mispredictions tend to require a larger explanation size, where explanation size is the fraction of nodes/features selected by ZORRO relative to the entire computation graph of the node." }, { "heading": "4.2 COMPARISON WITH GNNEXPLAINER", "text": "We first note that, unlike ZORRO, an explanation from GNNEXPLAINER is a soft mask of importance scores ∈ [0, 1] for node features and edges of the computational graph. In principle, one can choose the top-k features of GNNEXPLAINER with the highest importance values for comparing the fidelity of both approaches. However, we choose the best setting of GNNEXPLAINER, i.e., the soft mask, to compare the fidelity distributions of both approaches (see Figure 5).

As expected, the fidelity of explanations by ZORRO for all nodes is high [0.85, 0.95] or very high (0.95, 1]. However, GNNEXPLAINER exhibits low fidelity, i.e., ≤ 0.70, for a fairly large ≈ 40% of the nodes. The results show that our explanations are much likelier to preserve the model predictions than the soft explanations of GNNEXPLAINER.

But do the explanations from ZORRO have high fidelity because they are less sparse than GNNEXPLAINER? To systematically measure this, we computed the entropy of the normalized probability distributions over feature masks output by both approaches as a measure of sparsity, see Table 1. Note that the entropy is upper bounded by the log of the number of features (see Proposition 1). The high entropy for GNNEXPLAINER corresponds to a mask distribution closer to a uniform distribution, i.e., all features would have equal importance. In the case of ZORRO, the entropy is precisely equal to the log of the number of selected elements. The much lower entropy (as compared to GNNEXPLAINER) achieved by ZORRO shows that the hard masks are sparse.

Proposition 1. Let p be the normalized distribution of explanation (feature) masks. Then H(p) ≤ log(|F|), where F corresponds to the complete set of features. In particular, for ZORRO we have H(p) = log(selection-size). [Proof in Appendix B]

We also visualize the output of the soft explanation of a node that achieves a fidelity of 1, see Figures 10 and 11. We observe that the mask values are distributed around a given value, explaining the low entropy. In such cases, when all masks for all features take low values, a small-sized explanation cannot be obtained, as all components have similar importance. Effectively, all features are kept in the input, and it is no wonder that the highest value of fidelity is achieved. From the above two experiments, we conclude that ZORRO produces both sparse and high-fidelity explanations in comparison to GNNEXPLAINER." }, { "heading": "4.3 HOMOPHILY OF THE EXPLANATIONS", "text": "One of the motivations of post-hoc interpretability is to use explanations to derive insights into a model’s inner workings. Towards this, we investigate the homophily of the selected nodes from our explanations, since GNNs are known to exploit the homophily in the neighborhood to learn powerful function approximators. We define the homophily of a node as the fraction of its neighbors which share the same label as the node itself. Intuitively, it should be easier to label a node with the correct label if a larger fraction of nodes in its computational graph shares its label.

In what follows, we use homophily to refer to the homophily of a node with respect to the selected nodes in its first found explanation.
True/Predicted homophily refers to the case when true/predicted node labels are used. We investigate the joint distribution of true and predicted homophily exhibited by the studied node sample. The results are shown in Figure 6. We make the following observations.

Observation 1 – The orange regions on the extreme left side of the plots correspond to nodes that exhibited low true homophily but high predicted homophily. The class labels for such nodes are correctly predicted. However, the corresponding nodes in the explanation were assigned the wrong labels (if they had been assigned the same label as the particular node in question, its predicted homophily would have been higher).

Observation 2 – Several vertices corresponding to the blue regions spread over the bottom of the plots have low predicted homophily. These nodes are incorrectly predicted, and their label differs from those predicted for the nodes in their explanation set. The surprising fact is that even though some of them have high true homophily, close to 1, their predicted homophily is low. This also points to the usefulness of the found explanation, from which we conclude that the nodes influencing the current node do not share its label.

Observation 3 – We also note that for GIN and APPNP, we have some nodes with true homophily and predicted homophily close to 1 that are nevertheless incorrectly predicted. This implies the node itself and the most influential nodes from its computational graph have been assigned the same label. We can conclude that the model based its decision on the right set of nodes but assigned the wrong class to the whole group." }, { "heading": "4.4 RETRAINING BASED ON LOCAL MASKS", "text": "To evaluate ZORRO independently of our proposed fidelity measure, we use our local method as a global feature selection method and study the induced performance drop. We retrieved the explanations for all training nodes of the GNN on Cora and selected the top k features which were most often in the first explanation. Similarly, we retrieved all explanations with GNNExplainer and selected the top k features with the highest summed feature mask values. As Table 2 shows, ZORRO outperforms GNNExplainer in all cases. Selecting the 100 most important features (with respect to ZORRO) has only a minor effect (∆ 0.01) on the test accuracy compared to training on all 1433 features." }, { "heading": "4.5 EXPERIMENT ON SYNTHETIC DATASET", "text": "In GNNExplainer (Ying et al., 2019), the authors proposed to use synthetic datasets built from attaching motifs, such as ’house’, grid, or rings, to random Barabási–Albert (BA) graphs or regular trees. We evaluated our approach ZORRO on the synthetic dataset which has node features. The details of the experiment on the synthetic dataset are stated in Appendix E and include some node explanations as well as the explanation of the random closest neighborhood baseline (in which we sample nodes randomly from the closest neighborhood). The task is to explain all the nodes from the house motif, and the ground truth consists of the five nodes from the corresponding house.

Table 3 shows the performance of ZORRO with τ = .85 and τ = .98, GNNEXPLAINER and the random closest neighborhood heuristic. The heuristic outperforms all methods with respect to recall because it selects nodes only from the closest neighborhood. However, ZORRO achieves the highest precision and accuracy and outperforms GNNEXPLAINER."
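Both diagnostics used in this section are straightforward to reproduce. A minimal sketch of the entropy from Proposition 1 and the homophily statistic of Section 4.3; the interfaces are our own:

```python
import torch

def mask_entropy(mask, eps=1e-12):
    # Entropy of the normalized feature-mask distribution (Proposition 1):
    # log(|F|) for a uniform soft mask, log(|F_s|) for ZORRO's hard mask.
    p = mask / mask.sum()
    return float(-(p * torch.log(p + eps)).sum())

def homophily(node, selected_nodes, labels):
    # Fraction of the explanation's selected nodes that share `node`'s label;
    # pass true labels for true homophily, predicted labels for predicted.
    if len(selected_nodes) == 0:
        return 0.0
    same = sum(int(labels[v] == labels[node]) for v in selected_nodes)
    return same / len(selected_nodes)
```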
}, { "heading": "5 CONCLUSION", "text": "We propose ZORRO as a post-hoc explanation method for the decisions made by GNN models. Inspired by rate-distortion theory, we frame the problem of explaining GNN models as a feature and node selection problem so as to minimize the expected deviation from the original decision. We proposed a simple combinatorial procedure ZORRO, which retrieves disjoint explanations consisting of binary masks for the features and relevant nodes while trying to optimize for fidelity. With our extensive experiments, we show multiple explanations are possible for a given decision, unlike earlier approaches that provide a soft mask. Furthermore, our explanations are sparser and achieve higher fidelity than existing approaches. Finally, our analysis of the homophily in the explanations highlighted differences in the models’ behavior between correctly and wrongly predicted nodes." }, { "heading": "A ADDITIONAL DETAILS TO OUR ALGORITHM", "text": "In the design of our ZORRO algorithm, we have made several choices, which we explicitly want to explain in detail here. In general, we have to make the following design choices: initialization of first element, iterative adding further elements, recursive design. Table 4 contains the description (which we repeat for completeness) of all variables used within the Algorithms 1, 2, and 3.\nInitialization of first element. A single explanation {Vs, Fs} consist of selected nodes Vs and selected features Fs. The challenge to select the first node and feature is the following: Selecting\nonly a node or only a feature yields a non-informative value, i.e., F({v}, ∅) = c and F(∅, {f}) = c for all v ∈ Vn and f ∈ F and some constant c ∈ [0, 1]. The search for the optimal first pair would require |Vp||Fp| evaluations of the fidelity, which is in most cases too expensive. Therefore, we propose to use a different strategy, which also contains information for the following iterations. Instead of evaluating, which pair of feature and node yields the highest increase, we assess the nodes and features in a maximal setting of the other. To be more precise, we assume that, if we search for the best node, all (possible) features Fp were unmasked:\nargmax v∈Vp\nF({v}, Fp) (8)\nSimilarly for the features, we assume that all (possible) nodes are unmasked:\nargmax f∈Fp\nF(Vp, {f}) (9)\nWhichever of the nodes or features yields the highest value is the first element of our explanation. Consequently, the next selected element is of a different type than the first element, e.g., if we first choose a node, the next element is always that feature, which yields the highest fidelity based on that single node. We perform this initialization again for each explanation since for each explanation, the maximal sets of possible elements Vp and Fp are different.\nIterative search. The next part of our algorithm, which is the main contributor to the computational complexity, is the iterative search for additional nodes and features after the first element. A full search of all remaining nodes and features would require |Vr| + |Fr| fidelity computations. To significantly reduce this amount, we limited ourselves to a fixed number K nodes and features, see Algorithm 3. To systematically select the K elements, we use the information retrieved in the initialization by Eq. (8) and (9). We order the remaining nodes Vr and Fp by their values retrieved for Eq. (8) and (9) and only evaluate the topK. 
In Algorithm 3, we have denoted these orderings byRVp and RFp and the retrieval of the top K remaining elements by topK(Vr, RVp) and topK(Fr, RFp). We also experimented with evaluating all remaining elements but observed no performance gain or inferior performance to the above heuristic. As a reason, we could identify that in some cases, the addition of a single element (feature or node) could not increase the achieved fidelity. Using the ordering retrieved from the ”maximal setting”, we enforce that those elements are still selected, which contain valuable information with a higher likelihood. In addition, we experimented with refreshing the orderings RVp and RFp after some iterations but observed similar issues as in the unrestricted search.\nRecursive design. We explicitly designed our algorithm in a way such that we can retrieve multiple explanations, see line 13 and line 15 of Algorithm 3. We recursively call the Algorithm 3 twice, once with a disjoint node-set, the call in line 15 (only elements from the remaining set of nodes Vr can be selected), and similarly in line 13 with a disjoint feature set. Hence, the resulting explanation selects disjoint elements from the feature matrix since either the rows or columns are different from before. As greedy and fast stop criteria, we used each further iteration, the maximal reachable fidelity of F(Vp, Fp).\nComplexity analysis. The computational complexity of Algorithm 3 to retrieve an explanation {Vs, Fs} with possible nodes Vp and possible features Fp is\nO(#samples× (|Vp|+ |Fp|+K(|Vs|+ |Fs|))O(Φ)), where O(Φ) is the computational complexity for the forward pass of the GNN on the computational graph G(n). For the first explanation, we have\nO(#samples× (|Vn|+ |F |+K(|Vs|+ |Fs|))O(Φ))." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "Proof. We first compute the normalized feature mask distribution, p(f) for f ∈ F (F is the complete set of features). In particular, denoting the mask value of f by mask(f), we have\np(f) = mask(f)∑\nf ′∈F mask(f)\nThen H(p) = −∑f∈F p(f) log p(f) which achieves its maximum value for the uniform distribution, i.e., p(f) = 1|F | . For ZORRO, let Fs be the set of selected features. For each f ∈ Fs, we then have p(f) = 1|Fs| and 0 otherwise. The computed entropy is then equal to log(|Fs|). We want to point out that the proposition also follows for the case of node masks." }, { "heading": "C EXPERIMENTAL SETUP", "text": "We focus on explaining the decisions of GNN models with respect to the task of node classification. We fix the number of layers to two for all models and keep the rest of the model architectures and parameters as in the original paper. We train the models 200 epochs with ADAM optimizer and a learning rate of 0.01 and a weight decay of 0.0005. We use the model and GNNExplainer implementations of PyTorch Geometric Library (Fey & Lenssen, 2019). For GNNExplainer, we use the default values of 100 epochs and a learning rate of 0.01. For each dataset, we randomly selected 300 nodes and retrieved the explanations of GNNExplainer, and our method for the fidelity values τ ∈ {0.85, 0.98}. We will publish the list of selected nodes together with the implementation. We used 100 samples to calculate the fidelity with Algorithm 2 and set K = 10 in our experiments.\nOur implementation is based on PyTorch Geometric 1.6 and Python 3.7. All methods were executed on a server with 128 GB RAM and Nvidia GTX 1080Ti.\nDatasets. Three well-known citation network datasets Cora, CiteSeer and PubMed from Yang et al. 
(2016) where nodes represent documents and edges represent citation links. The class label is described by a similar word vector or an index of category. Statistics for these datasets can be found in Table 1. We used the datasets, including their training and test split from the PyTorch Geometric Library, which corresponds to the data published by Yang et al. (2016).\nModels. As models we selected the well-known graph convolutional network (GCN) (Kipf & Welling, 2017) and graph attention network (GAT) (Veličković et al., 2018) as well as the approximation of personalized propagation of neural predictions (APPNP) (Klicpera et al., 2019), and graph isomorphism network (GIN) (Xu et al., 2019). APPNP utilizes a connection between PageRank and GCN, especially those with many layers, and extends GCNs based on the personalized PageRank. Xu et al. (2019) proposed GIN to match the representational power of the Weisfeiler-Lehman graph isomorphism test by extending the expressiveness of the feature aggregation." }, { "heading": "D ADDITIONAL EXPERIMENT RESULTS", "text": "This section contains additional visualizations:\nFigure 7 shows the explanation size of Cora, CiteSeer, and PubMed for the models GCN, GIN, GAT, and APPNP.\nFigure 8 and Figure 9 visualize the joint homophily distributions for the dataset Cora and CiteSeer.\nFigure 10 shows the feature mask distribution of GNNEXPLAINER for single nodes with fidelity 1. Figure 11 shows the feature mask distribution of GNNEXPLAINER for all explained nodes.\nTo be transparent about the time of executing ZORRO, Figure 12 visualizes the runtime recorded during our experiments to retrieve the first explanation for ZORRO. For the runtime experiments, we include the gradient approach used as baseline in GNNEXPLAINER: GRAD is a gradient-based\nmethod in which we compute gradient of the GNN’s loss function with respect to the adjacency matrix and the associated node features (Ying et al., 2019) As stated in the computational complexity above, we note that the runtimes do not directly follow the graph size. To be precise, our approach is strictly local, i.e., it is independent of the input graph, however large it might be. The fastest average runtime we observed on PubMed, which has the highest number of nodes. Secondly, we indeed have a tunable relationship between fidelity threshold and runtime.\nCurrently, our implementation follows the presented Algorithms 1-2, i.e., is designed for explained a single node. If multiple explanations V̄ ⊂ V are requested, this initialization step can be performed for all requested nodes at the same time (for the first explanation). Then Vn has to be replaced by ∪{Vn : n ∈ V̄ } and in each step of the fidelity, the prediction agreement with respect V̄ has to be checked and saved separately. Hence, the orderings RVp and RFp are computed for all nodes\nsimultaneously. Limiting the maximum number of features or neighbors would additionally reduce the runtime because the outliers would be avoided." }, { "heading": "E EXPERIMENTS ON SYNTHETIC DATASET", "text": "The synthetic dataset is generated by generating two communities consisting of house motifs attached to BA graphs. Each node has eight feature values drawn from N (0, 1) and two features drawn from N (−1, .5) for nodes of the first community or N (1, .5) otherwise. 
In addition, to follow the published implementation of GNNExplainer, the feature values are normalized within each community, and within each community, 0.01 % of the edges are randomly perturbed.\nThe eight labels are given by the following: for each community, the nodes of the BA graph form a class, the ’basis’ of the house forms a class, the ’upper’ nodes form a class, and the rooftop is a class. The used model is a three-layer GCN, which stacks each layer’s latent representation and uses a linear layer to make the final prediction. The training set includes 80% of the nodes.\nSince GNNEXPLAINER only returns soft edge mask, we sorted them and added both nodes from the highest-ranked edges until at least five nodes were selected. In this way, we retrieved hard node masks, which are necessary to compare with the ground truth.\nTo highlight this task’s insufficient design, we added as a very simple baseline, a random closest neighborhood heuristic, which randomly selects nodes from the nearest neighborhood. For example, if a node has two direct neighbors and 15 nodes in the second neighborhood, we select the two immediate neighbors and sample another three randomly from the second neighborhood.\nFor the synthetic dataset, we know how many features are not only randomly distributed but contain information about the community. Therefore, we selected the two features correlated with nodes’ community membership as ground truth and evaluated the methods’ performance with respect to their feature selection. For GNNEXPLAINER, we selected from the calculated soft feature mask those two features with the highest value. In contrast, ZORRO directly select this number during inference. Table 6 shows that GNNEXPLAINER fails to select the informative features and that ZORRO consistently selects the informative features. The configuration with a lower threshold (τ = .85) shows that we only need one of the two informative features in most cases. However, to reach a higher threshold of τ = .98, more features are required.\nTable 7 shows how connected the retrieved explanation is. In other words, how close is the explanation to a connected subgraph? We measure this by counting the number of connected components in the explanation with and without the explained node. As we can see, even though in GNNEXPLAINER the number of connected components is limited to four or three in the case with respectively without the explained nodes. This is the case since we select the nodes based on edges, i.e., each connected component consists of at least two nodes. The explanation tends to be way more disconnected than those of ZORRO.\nTo exemplify the performance of our method, the Figure 14 and Figure 15 show some examples of found explanations for correctly respectively wrongly predicted nodes. We compare the explanation of ZORRO against the baseline GNNEXPLAINER. The first node in Figure 14 shows that ZORRO finds multiple explanations, which correspond to the ground truth motif. In contrast, GNNEXPLAINER ranks four nodes from the BA graph among the highest, which should not affect the prediction of the GCN. The illustrations in c) and d) show a similar pattern. In addition, we see in c) the reason for a low recall of ZORRO. Often only 2-3 nodes of the ground truth are sufficient to reach a high fidelity.\nFigure 15 shows some explanations of wrongly predicted nodes. The first node 307 is not in the training set, and from ZORRO’s explanation, we get some reasoning about the prediction. 
The GCN only needs the value from the node itself and two neighbors, which are (wrongly) predicted as the ”basis” of the house. Hence, the node is predicted as the top wall, which would follow the following pattern: If from its first neighborhood and its second neighborhood each a node is the basis, then the node is a top wall node. However, both neighbors, which are important for the prediction, were predicted false, and hence the resulting prediction is also wrong. This observation agrees with the observed ”homophily of wrong predictions” of the real datasets. GNNEXPLAINER’s explanation of the same node is way larger and includes nodes from the house graph of the second community.\nThe second example in Figure 15 shows an example, where the explanations of ZORRO and GNNEXPLAINER are more similar. Both methods retrieve (mostly) members of the ground truth as an explanation for the false prediction." } ]
2020
null
SP:0f29e5886a7840aacdbce931b6c795d43b545172
[ "This paper proposed three simple algorithms for sparse principal component analysis (SPCA): a) randomized matrix multiplication; b) deterministic thresholding scheme; and c) semidefinite programming relaxation. All of the proposed algorithms look like native combinations of existing techniques and simple sparsification steps. However, it is somewhat interesting to have novel theoretical guarantees for these simple strategies whose error bounds depend on the properties of the input matrix and the target sparsity." ]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and multivariate statistics. To improve the interpretability of PCA, various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis (SPCA). In this paper, we present three provably accurate, polynomial time, approximation algorithms for the SPCA problem, without imposing any restrictive assumptions on the input covariance matrix. The first algorithm is based on randomized matrix multiplication; the second algorithm is based on a novel deterministic thresholding scheme; and the third algorithm is based on a semidefinite programming relaxation of SPCA. All algorithms come with provable guarantees and run in low-degree polynomial time. Our empirical evaluations confirm our theoretical findings.
[ { "affiliations": [], "name": "SPARSE PRINCI" } ]
[ { "authors": [ "Farid Alizadeh" ], "title": "Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization", "venue": "SIAM Journal on Optimization,", "year": 1995 }, { "authors": [ "Arash A. Amini", "Martin J. Wainwright" ], "title": "High-dimensional Analysis of Semidefinite Relaxations for Sparse Principal Components", "venue": "Annals of Statistics,", "year": 2009 }, { "authors": [ "Ei Ando", "Toshio Nakata", "Masafumi Yamashita" ], "title": "Approximating the Longest Path Length of a Stochastic DAG by a Normal Distribution in Linear Time", "venue": "Journal of Discrete Algorithms,", "year": 2009 }, { "authors": [ "Megasthenis Asteris", "Dimitris Papailiopoulos", "George N Karystinos" ], "title": "Sparse Principal Component of a Rank-deficient Matrix", "venue": "In 2011 IEEE International Symposium on Information Theory Proceedings,", "year": 2011 }, { "authors": [ "Megasthenis Asteris", "Dimitris Papailiopoulos", "Anastasios Kyrillidis", "Alexandros G Dimakis" ], "title": "Sparse PCA via Bipartite Matchings", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Haim Avron", "Sivan Toledo" ], "title": "Randomized Algorithms for Estimating the Trace of an Implicit Symmetric Positive Semi-definite Matrix", "venue": "Journal of the ACM,", "year": 2011 }, { "authors": [ "Amir Beck", "Yakov Vaisbourd" ], "title": "The Sparse Principal Component Analysis Problem: Optimality Conditions and Algorithms", "venue": "Journal of Optimization Theory and Applications,", "year": 2016 }, { "authors": [ "Jorge Cadima", "Ian T. Jolliffe" ], "title": "Loading and Correlations in the Interpretation of Principal Components", "venue": "Journal of Applied Statistics,", "year": 1995 }, { "authors": [ "Siu On Chan", "Dimitris Papailliopoulos", "Aviad Rubinstein" ], "title": "On the Approximability of Sparse PCA", "venue": "In Proceedings of the 29th Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Alexandre d’Aspremont", "Laurent El Ghaoui", "Michael I. Jordan", "Gert R.G. Lanckriet" ], "title": "A Direct Formulation for Sparse PCA using Semidefinite Programming", "venue": "SIAM Review,", "year": 2007 }, { "authors": [ "Petros Drineas", "Ravi Kannan" ], "title": "Fast Monte-Carlo Algorithms for Approximate Matrix Multiplication", "venue": "In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science,", "year": 2001 }, { "authors": [ "Petros Drineas", "Michael W. Mahoney" ], "title": "RandNLA: Randomized Numerical Linear Algebra", "venue": "Communications of the ACM,", "year": 2016 }, { "authors": [ "Petros Drineas", "Michael W. Mahoney" ], "title": "Lectures on Randomized Numerical Linear Algebra, volume 25 of The Mathematics of Data, IAS/Park City Mathematics Series", "venue": "American Mathematical Society,", "year": 2018 }, { "authors": [ "Petros Drineas", "Ravi Kannan", "Michael W. 
Mahoney" ], "title": "Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix", "venue": "SIAM Journal on Computing,", "year": 2006 }, { "authors": [ "Alexandre d’Aspremont", "Francis Bach", "Laurent El Ghaoui" ], "title": "Optimal Solutions for Sparse Principal Component Analysis", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Alexandre d’Aspremont", "Francis Bach", "Laurent El Ghaoui" ], "title": "Approximation Bounds for Sparse Principal Component Analysis", "venue": "Mathematical Programming,", "year": 2014 }, { "authors": [ "Kimon Fountoulakis", "Abhisek Kundu", "Eugenia-Maria Kontopoulou", "Petros Drineas" ], "title": "A Randomized Rounding Algorithm for Sparse PCA", "venue": "ACM Transactions on Knowledge Discovery from Data,", "year": 2017 }, { "authors": [ "John NR Jeffers" ], "title": "Two case studies in the application of principal component analysis", "venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics),", "year": 1967 }, { "authors": [ "Ian T. Jolliffe" ], "title": "Rotation of principal components: Choice of Normalization Constraints", "venue": "Journal of Applied Statistics,", "year": 1995 }, { "authors": [ "Ian T. Jolliffe", "Nickolay T. Trendafilov", "Mudassir Uddin" ], "title": "A Modified Principal Component Technique Based on the LASSO", "venue": "Journal of Computational and Graphical Statistics,", "year": 2003 }, { "authors": [ "Michel Journée", "Yurii Nesterov", "Peter Richtárik", "Rodolphe Sepulchre" ], "title": "Generalized Power Method for Sparse Principal Component Analysis", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Volodymyr Kuleshov" ], "title": "Fast Algorithms for Sparse Principal Component Analysis Based on Rayleigh Quotient Iteration", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Maria Teresa Landi", "Tatiana Dracheva", "Melissa Rotunno", "Jonine D. Figueroa", "Huaitian Liu", "Abhijit Dasgupta", "Felecia E. Mann", "Junya Fukuoka", "Megan Hames", "Andrew W. Bergen" ], "title": "Gene Expression Signature of Cigarette Smoking and Its Role in Lung Adenocarcinoma Development and Survival", "venue": "PloS one,", "year": 2008 }, { "authors": [ "Jun Z. Li", "Devin M. Absher", "Hua Tang", "Audrey M. Southwick", "Amanda M. Casto", "Sohini Ramachandran", "Howard M. Cann", "Gregory S. Barsh", "Marcus Feldman", "Luigi L. Cavalli-Sforza" ], "title": "Worldwide Human Relationships Inferred from Genome-Wide", "venue": "Patterns of Variation. Science,", "year": 2008 }, { "authors": [ "Malik Magdon-Ismail" ], "title": "NP-Hardness and Inapproximability of Sparse PCA", "venue": "Information Processing Letters,", "year": 2017 }, { "authors": [ "Michael W. Mahoney", "P. Drineas" ], "title": "CUR Matrix Decompositions for Improved Data Analysis", "venue": "In Proceedings of the National Academy of Sciences,", "year": 2009 }, { "authors": [ "Baback Moghaddam", "Yair Weiss", "Shai Avidan" ], "title": "Generalized Spectral Bounds for Sparse LDA", "venue": "In Proceedings of the 23rd International Conference on Machine learning, pp. 
641–648,", "year": 2006 }, { "authors": [ "Baback Moghaddam", "Yair Weiss", "Shai Avidan" ], "title": "Spectral Bounds for Sparse PCA: Exact and Greedy Algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2006 }, { "authors": [ "Cameron Musco", "Christopher Musco" ], "title": "Randomized block krylov methods for stronger and faster approximate singular value decomposition", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Dimitris Papailiopoulos", "Alexandros Dimakis", "Stavros Korokythakis" ], "title": "Sparse PCA through Low-rank Approximations", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Haipeng Shen", "Jianhua Z. Huang" ], "title": "Sparse Principal Component Analysis via Regularized Low Rank Matrix Approximation", "venue": "Journal of Multivariate Analysis,", "year": 2008 }, { "authors": [ "Bharath K. Sriperumbudur", "David A. Torres", "Gert R.G. Lanckriet" ], "title": "Sparse Eigen Methods by D.C. Programming", "venue": "In Proceedings of the 24th International Conference on Machine Learning,", "year": 2007 }, { "authors": [ "Martin J. Wainwright" ], "title": "https://www.stat.berkeley.edu/ ̃mjwain/stat210b/Chap2_TailBounds_Jan22_2015.pdf", "venue": "Foundations and Trends in Theoretical Computer Science,", "year": 2015 }, { "authors": [ "Ganzhao Yuan", "Li Shen", "Wei-Shi Zheng" ], "title": "A Decomposition Algorithm for the Sparse Generalized Eigenvalue Problem", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Xiao-Tong Yuan", "Tong Zhang" ], "title": "Truncated Power Method for Sparse Eigenvalue Problems", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Dimitrios Zeimpekis", "Efstratios Gallopoulos" ], "title": "TMG: A MATLAB Toolbox for Generating TermDocument Matrices from Text Collections", "venue": "In Grouping Multidimensional Data,", "year": 2006 }, { "authors": [ "Hui Zou", "Trevor Hastie" ], "title": "Regularization and Variable Selection via the Elastic Net", "venue": "Journal of the Royal Statistical Society: Series B,", "year": 2005 }, { "authors": [ "Hui Zou", "Trevor Hastie", "Robert Tibshirani" ], "title": "Sparse Principal Component Analysis", "venue": "Journal of Computational and Graphical Statistics,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Principal Component Analysis (PCA) and the related Singular Value Decomposition (SVD) are fundamental data analysis and dimension reduction tools in a wide range of areas including machine learning, multivariate statistics and many others. They return a set of orthogonal vectors of decreasing importance that are often interpreted as fundamental latent factors that underlie the observed data. Even though the vectors returned by PCA and SVD have strong optimality properties, they are notoriously difficult to interpret in terms of the underlying processes generating the data (Mahoney & Drineas, 2009), since they are linear combinations of all available data points or all available features. The concept of Sparse Principal Components Analysis (SPCA) was introduced in the seminal work of (d’Aspremont et al., 2007), where sparsity constraints were enforced on the singular vectors in order to improve interpretability. A prominent example where sparsity improves interpretability is document analysis, where sparse principal components can be mapped to specific topics by inspecting the (few) keywords in their support (d’Aspremont et al., 2007; Mahoney & Drineas, 2009; Papailiopoulos et al., 2013).\nFormally, given a positive semidefinite (PSD) matrix A ∈ Rn×n, SPCA can be defined as follows:1\nZ∗ = maxx∈Rn, ‖x‖2≤1 x>Ax, subject to ‖x‖0 ≤ k. (1) In the above formulation, A is a covariance matrix representing, for example, all pairwise feature or object similarities for an underlying data matrix. Therefore, SPCA can be applied for either the object or feature space of the data matrix, while the parameter k controls the sparsity of the resulting vector and is part of the input. Let x∗ denote a vector that achieves the optimal value Z∗ in the above formulation. Then intuitively, the optimization problem of eqn. (1) seeks a sparse, unit norm vector x∗ that maximizes the data variance.\nIt is well-known that solving the above optimization problem is NP-hard (Moghaddam et al., 2006a) and that its hardness is due to the sparsity constraint. Indeed, if the sparsity constraint was removed, then the resulting optimization problem can be easily solved by computing the top left or right singular vector of A and its maximal value Z∗ is equal to the top singular value of A.\nNotation. We use bold letters to denote matrices and vectors. For a matrix A ∈ Rn×n, we denote its (i, j)-th entry by Ai,j ; its i-th row by Ai∗ and its j-th column by A∗j ; its 2-norm by\n1Recall that the p-th power of the `p norm of a vector x ∈ Rn is defined as ‖x‖pp = ∑n i=1 |xi| p for\n0 < p <∞. For p = 0, ‖x‖0 is a semi-norm denoting the number of non-zero entries of x.\n‖A‖2 = maxx∈Rn, ‖x‖2=1 ‖Ax‖2; and its (squared) Frobenius norm by ‖A‖ 2 F = ∑ i,j A 2 i,j . We use the notation A 0 to denote that the matrix A is symmetric positive semidefinite (PSD) and Tr(A) = ∑ iAi,i to denote its trace, which is also equal to the sum of its singular values. Given a PSD matrix A ∈ Rn×n, its Singular Value Decomposition is given by A = UΣUT , where U is the matrix of left/right singular vectors and Σ is the diagonal matrix of singular values." }, { "heading": "1.1 OUR CONTRIBUTIONS", "text": "We present three algorithms for SPCA and associated quality-of-approximation results (Theorems 2.2, 3.1, and 4.1). All three algorithms are simple, intuitive, and run in O ( n3.5 ) or less time. 
They return a vector that is provably sparse and, when applied to the input covariance matrix A, provably captures a fraction of the optimal solution Z∗. We note that in all three algorithms, the output vector has a sparsity that depends on k (the target sparsity of the original SPCA problem of eqn. (1)) and (an accuracy parameter between zero and one).\nThe first algorithm is based on randomized, approximate matrix multiplication: it randomly (but non-uniformly) selects a subset of O ( k/ 2 ) columns of A1/2 (the square root of the PSD matrix A) and computes its top right singular vector. The output of this algorithm is precisely this singular vector, padded with zeros to become a vector in Rn. It turns out that this simple algorithm, which, surprisingly has not been analyzed in prior work, returns an O ( k/ 2 ) sparse vector y ∈ Rn that satisfies (with constant probability that can be amplified as desired, see Section 2 for details):\ny>Ay ≥ 1 2 Z∗ −\n√ Z∗ ·\n√ Tr(A)\nk .\nNotice that the above bound depends on both Z∗ and it square root and therefore is not a relative error bound. The second term scales as a function of the trace of A divided by k, which depends on the properties of the matrix A and the target sparsity.\nThe second algorithm is a deterministic thresholding scheme. It computes a small number of the top singular vectors of the matrix A and then applies a deterministic thresholding scheme on those singular vectors to (eventually) construct a sparse vector z ∈ Rn that satisfies z>Az ≥ (1/2)Z∗−(3/2) Tr(A). Our analysis provides unconditional guarantees for the accuracy of the solution of this simple thresholding scheme. To the best of our knowledge, no such analyses have appeared in prior work (see Section 1.2 for details). The error bound of the second algorithm is weaker than the one provided by the first algorithm, but the second algorithm is deterministic and does not need to compute the square root (i.e., all singular vectors and singular values) of the matrix A.\nOur third algorithm provides novel bounds for the following standard convex relaxation of the problem of eqn. (1).\nmax Z∈Rn×n, Z 0\nTr(AZ) s.t. Tr(Z) ≤ 1 and ∑ |Zi,j | ≤ k. (2)\nIt is well-known that the optimal solution to eqn. (2) is at least the optimal solution to eqn. (1). We present a novel, two-step rounding scheme that converts the optimal solution matrix Z ∈ Rn×n to a vector z ∈ Rn that has expected sparsity2 Õ ( k2/ 2 ) and satisfies z>Az ≥ γZ(1− ) · Z∗ − . Here, γZ is a constant that precisely depends on the top singular value of Z, the condition number of Z, and the extent to which the SDP relaxation of eqn. (2) is able to capture the original problem (see Theorem 4.1 and the following discussion for details). To the best of our knowledge, this is the first analysis of a rounding scheme for the convex relaxation of eqn. (2) that does not assume a specific model for the covariance matrix A.\nApplications to Sparse Kernel PCA. Our algorithms have immediate applications to sparse kernel PCA (SKPCA), where the input matrix A ∈ Rn×n is instead implicitly given as a kernel matrix whose entry (i, j) is the value k(i, j) := 〈φ(Xi∗), φ(Xj∗)〉 for some kernel function φ that implicitly maps an observation vector into some high-dimensional feature space. 
Although A is not explicit,\n2For simplicity of presentation and following the lines of (Fountoulakis et al., 2017), we assume that the rows and columns of the matrix A have unit norm; this assumption was not necessary for the previous two algorithms and can be removed as in (Fountoulakis et al., 2017). We are also hiding a poly-logarithmic factor for simplicity, hence the Õ(·) notation. See Theorem 4.1 for a detailed statement.\nwe can query all O ( n2 ) entries of A using O ( n2 )\ntime assuming an oracle that computes the kernel function k. We can then subsequently apply our SPCA algorithms and achieve polynomial time runtime with the same approximation guarantees." }, { "heading": "1.2 PRIOR WORK", "text": "SPCA was formally introduced by (d’Aspremont et al., 2007); however, previously studied PCA approaches based on rotating (Jolliffe, 1995) or thresholding (Cadima & Jolliffe, 1995) the top singular vector of the input matrix seemed to work well, at least in practice, given sparsity constraints. Following (d’Aspremont et al., 2007), there has been an abundance of interest in SPCA. (Jolliffe et al., 2003) considered LASSO (SCoTLASS) on an `1 relaxation of the problem, while (Zou & Hastie, 2005) considered a non-convex regression-type approximation, penalized similar to LASSO. Additional heuristics based on LASSO (Ando et al., 2009) and non-convex `1 regularizations (Zou & Hastie, 2005; Zou et al., 2006; Sriperumbudur et al., 2007; Shen & Huang, 2008) have also been explored. Random sampling approaches based on non-convex `1 relaxations (Fountoulakis et al., 2017) have also been studied; we highlight that unlike our approach, (Fountoulakis et al., 2017) solved a non-convex relaxation of the SPCA problem and thus perhaps relied on locally optimal solutions. Additionally, (Moghaddam et al., 2006b) considered a branch-and-bound heuristic motivated by greedy spectral ideas. (Journée et al., 2010; Papailiopoulos et al., 2013; Kuleshov, 2013; Yuan & Zhang, 2013) further explored other spectral approaches based on iterative methods similar to the power method. (Yuan & Zhang, 2013) specifically designed a sparse PCA algorithm with early stopping for the power method, based on the target sparsity. Another line of work focused on using semidefinite programming (SDP) relaxations (d’Aspremont et al., 2007; d’Aspremont et al., 2008; Amini & Wainwright, 2009). Notably, (Amini & Wainwright, 2009) achieved provable theoretical guarantees regarding the SDP and thresholding approach of (d’Aspremont et al., 2007) in a specific, high-dimensional spiked covariance model, in which a base matrix is perturbed by adding a sparse maximal eigenvector. In other words, the input matrix is the identity matrix plus a “spike”, i.e., a sparse rank-one matrix. Despite the variety of heuristic-based sparse PCA approaches, very few theoretical guarantees have been provided for SPCA; this is partially explained by a line of hardness-of-approximation results. The sparse PCA problem is well-known to be NP-Hard (Moghaddam et al., 2006a). (Magdon-Ismail, 2017) shows that if the input matrix is not PSD, then even the sign of the optimal value cannot be determined in polynomial time unless P = NP, ruling out any multiplicative approximation algorithm. In the case where the input matrix is PSD, (Chan et al., 2016) shows that it is NP-hard to approximate the optimal value up to multiplicative (1 + ) error, ruling out any polynomialtime approximation scheme (PTAS). 
Moreover, they show Small-Set Expansion hardness for any polynomial-time constant factor approximation algorithm and also that the standard SDP relaxation might have an exponential gap. We conclude by summarizing prior work that offers provable guarantees (beyond the work of (Amini & Wainwright, 2009)), typically given some assumptions about the input matrix. (d’Aspremont et al., 2014) showed that the SDP relaxation can be used to find provable bounds when the covariance input matrix is formed by a number of data points sampled from Gaussian models with a single sparse singular vector. (Papailiopoulos et al., 2013) presented a combinatorial algorithm that analyzed a specific set of vectors in a low-dimensional eigenspace of the input matrix and presented relative error guarantees for the optimal objective, given the assumption that the input covariance matrix has a decaying spectrum. (Asteris et al., 2011) gave a polynomial-time algorithm that solves sparse PCA exactly for input matrices of constant rank. (Chan et al., 2016) showed that sparse PCA can be approximated in polynomial time within a factor of n−1/3 and also highlighted an additive PTAS of (Asteris et al., 2015) based on the idea of finding multiple disjoint components and solving bipartite maximum weight matching problems. This PTAS needs time npoly(1/ ), whereas all of our algorithms have running times that are a low-degree polynomial in n." }, { "heading": "2 SPARSE PCA VIA RANDOMIZED MATRIX MULTIPLICATION", "text": "Our first algorithm for SPCA leverages primitives and ideas from randomized matrix multiplication (Drineas & Kannan, 2001; Drineas et al., 2006; Drineas & Mahoney, 2016; 2018; Woodruff, 2014). Let P ∈ Rm×n and Q ∈ Rn×p and recall that their product PQ equals PQ = ∑n i=1 P∗iQi∗.\nRecall that P∗i denotes the i-th column of P and Qi∗ the i-th row of Q. A well-known approach to approximate the product PQ is to sample a subset of columns of P (we will do this without replacement) and the corresponding rows of Q (Drineas & Mahoney, 2018). Formally, let the random variables Zi\niid∼ Bernoulli(pi), i = 1 . . . n, denote whether the i-th column of P and the i-th row of Q are sampled. Define the diagonal sampling-and-rescaling matrix S ∈ Rn×n by S , diag {Z1/√p1, . . . , Zn/√pn}. The sampling probabilities {pi}ni=1 do not have to sum up to one. The number of sampled column/row pairs, denoted by s, satisfies E [s] = ∑n i=1 pi. (See Algorithm 1 for details.) The next lemma (see Appendix A.1 for its proof) presents accuracy bounds\nAlgorithm 1 Construct sampling-and-rescaling matrix S Input: Probabilities p̃i, i = 1 . . . n and integer s n. Output: Diagonal sampling-and-rescaling matrix S ∈ Rn×n.\n1: for i = 1 to n do 2: Sii ← {\n1/√pi, with probability pi = min{sp̃i, 1} 0, otherwise\nwhen Algorithm 1 is used to approximate matrix multiplication.\nLemma 2.1 Given matrices P ∈ Rm×n and Q ∈ Rn×p, let S ∈ Rn×n be constructed using Algorithm 1 with p̃i = ‖P∗i‖ 2 2/‖P‖2F , for i = 1 . . . n. Then,\nE [ ‖PS2Q−PQ‖2F ] ≤ 1 s ‖P‖2F ‖Q‖ 2 F . (3)\nOur SPCA algorithm uses the above primitive to approximate the product of the (square root) of the input matrix A and its top right singular vector v. Thus, the proposed SPCA algorithm sparsifies the top right singular vector of v of A without losing too much of the variance that is captured by v. Interestingly, this conceptually simple algorithm has not been formally analyzed in prior work. 
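As an aside, the sampling primitive of Algorithm 1 and the bound of Lemma 2.1 are easy to exercise numerically. The following numpy sketch uses our own naming and random test matrices; it is an illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_matrix(P, s):
    """Algorithm 1 (sketch): diagonal sampling-and-rescaling matrix S.

    Column i survives with probability p_i = min(s * ||P_{*i}||^2 / ||P||_F^2, 1)
    and is rescaled by 1/sqrt(p_i), so that E[P S^2 Q] = P Q for conforming Q.
    """
    col_norms = np.sum(P ** 2, axis=0)
    probs = np.minimum(s * col_norms / col_norms.sum(), 1.0)
    keep = rng.random(P.shape[1]) < probs
    scale = np.divide(1.0, np.sqrt(probs), out=np.zeros_like(probs), where=probs > 0)
    return np.diag(np.where(keep, scale, 0.0))

# Empirical check of Lemma 2.1: the mean squared error should fall below the bound.
P = rng.standard_normal((40, 300))
Q = rng.standard_normal((300, 20))
s = 100
errs = [np.linalg.norm(P @ (S := sampling_matrix(P, s)) @ S @ Q - P @ Q, "fro") ** 2
        for _ in range(200)]
bound = np.linalg.norm(P, "fro") ** 2 * np.linalg.norm(Q, "fro") ** 2 / s
print(np.mean(errs) <= bound)  # Lemma 2.1 bounds the expectation, so this should hold
```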
Algorithm 2 details our approach.\nAlgorithm 2 SPCA via randomized matrix multiplication\nInput: A ∈ Rn×n, sparsity parameter k, accuracy parameter ∈ (0, 1). Output: y ∈ Rn satisfying E [‖y‖2] ≤ 1 and E [‖y‖0] ≤ k/ 2.\n1. X← A1/2; 2. Use Algorithm 1 to construct S ∈ Rn×n with p̃i = ‖X∗i‖22/‖X‖2F and s = 4k/ 2; 3. Let v ∈ Rn be the top right singular vector of XS; 4. y← Sv;\nTheorem 2.2 Let k be the sparsity parameter and ∈ (0, 1] be the accuracy parameter. Let S ∈ Rn×n be the sampling matrix of Lemma 2.1 with s = 4k/ 2. Then, Algorithm 2 returns a vector y with expected sparsity at most s (i.e., E [‖y‖0] ≤ s) and expected two norm at most one (i.e., E [ ‖y‖22 ] ≤ 1) such that with probability at least 1/4, we have\ny>Ay ≥ 1/2Z∗ − √ Z∗ · √ Tr(A)/k. (4)\nSee Appendix A.1 for a proof of the above theorem. We note that the success probability of Algorithm 2 can be trivially amplified by repeating the algorithm t times and keeping the vector y that maximizes y>Ay. Then, the failure probability of the overall approach diminishes exponentially fast as a function of t to at most (3/4)t. Finally, the running time of Algorithm 2 is dominated by the computation of a square root of the matrix A in the first step, which takes O ( n3 )\ntime via the computation of the SVD of A." }, { "heading": "3 SPCA VIA THRESHOLDING", "text": "Our second algorithm is based on a thresholding scheme using the top ` right singular vectors of the PSD matrix A. Given A and an accuracy parameter , our approach first computes Σ` ∈ R`×`\n(the diagonal matrix of the top ` singular values of A) and U` ∈ Rn×` (the matrix of the top ` right singular vectors of A), for ` = 1/ . Then, it deterministically selects a subset of O ( k/ 2 ) columns\nof Σ1/2` U > ` using a simple thresholding scheme based on the norms of the columns of Σ 1/2 ` U > ` . (Recall that k is the sparsity parameter of the SPCA problem.) In the last step, it returns the top right singular vector of the matrix consisting of the chosen columns of Σ1/2` U > ` . Notice that this right\nsingular vector is an O ( k/ 2 ) -dimensional vector, which is finally expanded to a vector in Rn by appropriate padding with zeros. This sparse vector is our approximate solution to the SPCA problem of eqn. (1). This simple algorithm is somewhat reminiscent of prior thresholding approaches for SPCA. However, to the best of our knowledge, no provable a priori bounds were known for such algorithms without strong assumptions on the input matrix. This might be due to the fact that prior approaches focused on thresholding only the top right singular vector of A, whereas our approach thresholds the top ` = 1/ right singular vectors of A. This slight relaxation allows us to present provable bounds for the proposed algorithm. In more detail, let the SVD of A be A = UΣUT . Let Σ` ∈ R`×` be the diagonal matrix of the top ` singular values and let U` ∈ Rn×` be the matrix of the top ` right (or left) singular vectors. Let R = {i1, . . . , i|R|} be the set of indices of rows of U` that have squared norms at least /k and let R̄ be its complement. Here |R| denotes the cardinality of the set R and R ∪ R̄ = {1, . . . , n}. Let R ∈ Rn×|R| be a sampling matrix that selects3 the columns of U` whose indices are in the set R. Given this notation, we are now ready to state Algorithm 3.\nAlgorithm 3 SPCA via thresholding\nInput: A ∈ Rn×n, sparsity k, error parameter > 0. 
Output: y ∈ Rn such that ‖y‖2 = 1 and ‖y‖0 = k/ 2.\n1: `← 1/ ; 2: Compute U` ∈ Rn×` (top ` left singular vectors of A) and Σ` ∈ R`×` (square roots of the top `\nsingular values of A); 3: Let R = {i1, . . . , i|R|} be the set of rows of U` with squared norms at least /k and let\nR ∈ Rn×|R| be the associated sampling matrix (see text for details); 4: y ∈ R|R| ← argmax‖x‖2=1 ∥∥Σ`U>` Rx∥∥22; 5: return z = Ry ∈ Rn;\nNotice that Ry satisfies ‖Ry‖2 = ‖y‖2 = 1 (since R has orthogonal columns) and ‖Ry‖0 = |R|. Since R is the set of rows of U` with squared norms at least /k and ‖U`‖2F = ` = 1/ , it follows that |R| ≤ k/ 2. Thus, the vector returned by Algorithm 3 has k/ 2 sparsity and unit norm. Theorem 3.1 Let k be the sparsity parameter and ∈ (0, 1] be the accuracy parameter. Then, the vector z ∈ Rn (the output of Algorithm 3) has sparsity k/ 2, unit norm, and satisfies z>Az ≥ (1/2)Z∗ − (3/2) Tr(A). We defer the proof of Theorem 3.1 to Appendix A.2. The running time of Algorithm 3 is dominated by the computation of the top ` singular vectors and singular values of the matrix A. In practice, any iterative method, such as subspace iteration using a random initial subspace or the Krylov subspace of the matrix, can be used towards this end. However, our current analysis does not account for the inevitable approximation error incurred by such methods, which run inO (nnz(A)`) time. One could always use the SVD of the full matrix A (O ( n3 )\ntime) to compute the top ` singular vectors and singular values of A. Finally, we highlight that, as an intermediate step in the proof of Theorem 3.1, we need to prove the following lemma (see Appendix A.2 for its proof):\nLemma 3.2 Let A ∈ Rn×n be a PSD matrix and Σ ∈ Rn×n (respectively, Σ` ∈ R`×`) be the diagonal matrix of all (respectively, top `) singular values and let U ∈ Rn×n (respectively, U` ∈ Rn×`) be the matrix of all (respectively, top `) singular vectors. Then, for all unit vectors x ∈ Rn, ∥∥∥Σ1/2` U>` x∥∥∥2 2 ≥ ∥∥∥Σ1/2U>x∥∥∥2 2 − Tr(A).\n3Each column of R has a single non-zero entry (set to one), corresponding to one of the |R| selected columns. Formally, Rit,t = 1 for t = 1, . . . , |R|; all other entries of R are set to zero.\nThe above simple lemma is very much at the heart of our proof of Theorem 3.1 and, unlike prior work, allows us to provide provably accurate bounds for the thresholding Algorithm 3.\nUsing an approximate SVD solution. The guarantees of Theorem 3.1 in Algorithm 3 uses an exact SVD computation, which could take time O ( n3 ) . We can further improve the running time by using an approximate SVD algorithm such as the randomized block Krylov method of Musco & Musco (2015), which runs in nearly input sparsity runtime. Our analysis uses the relationships∥∥∥Σ1/2`,⊥∥∥∥2\n2 ≤ Tr(A)` and σ1(Σ`) ≤ Tr(A). The randomized block Krylov method of Musco & Musco (2015) recovers these guarantees up to a multiplicative (1 + ), using O ( logn 2 · nnz(A) ) runtime.\nThus by rescaling , we recover the same guarantees of Theorem 3.1 by using an approximate SVD with nearly input sparsity runtime." }, { "heading": "4 SPCA VIA A SEMIDEFINITE PROGRAMMING RELAXATION", "text": "Our third algorithm is based on the SDP relaxation of eqn. (2). Recall that solving eqn. (2) returns a PSD matrix Z∗ ∈ Rn×n that, by the definition of the semidefinite programming relaxation, satisfies Tr(AZ∗) ≥ Z∗, where Z∗ is the true optimal solution of SPCA in eqn. (2). 
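For concreteness, the whole of Algorithm 3 fits in a few lines of numpy. The sketch below is our illustration rather than the authors' reference code; it writes the accuracy parameter as `eps`, uses `eigh` since A is PSD, and assumes at least one row of U_ℓ passes the ε/k threshold.

```python
import numpy as np

def spca_thresholding(A, k, eps):
    """Algorithm 3 (sketch): SPCA via thresholding the top-l singular vectors of PSD A."""
    n = A.shape[0]
    l = max(1, round(1.0 / eps))
    vals, vecs = np.linalg.eigh(A)                    # ascending eigenpairs of PSD A
    sigma = np.maximum(vals[::-1][:l], 0.0)           # top-l singular values
    U_l = vecs[:, ::-1][:, :l]                        # top-l singular vectors
    R = np.flatnonzero(np.sum(U_l ** 2, axis=1) >= eps / k)  # heavy rows of U_l
    B = np.sqrt(sigma)[:, None] * U_l[R].T            # Sigma_l^{1/2} U_l^T restricted to R
    y = np.linalg.svd(B, full_matrices=False)[2][0]   # top right singular vector of B
    z = np.zeros(n)
    z[R] = y                                          # pad back to an n-vector
    return z

G = np.random.default_rng(0).standard_normal((100, 100))
A = G @ G.T                                           # a random PSD test matrix
z = spca_thresholding(A, k=10, eps=0.5)
print(np.count_nonzero(z), z @ A @ z)                 # sparsity and captured variance
```

Note that the square root A^{1/2} is never formed here, which is why Algorithm 3 avoids the full-SVD cost that dominates Algorithm 2; the sketch calls a full `eigh` purely for brevity, and an iterative top-ℓ solver suffices, as noted above.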
We now need to convert the matrix Z∗ ∈ Rn×n into a sparse vector that will be the output of our approximate SPCA algorithm and will satisfy certain accuracy guarantees. Towards that end, we employ a novel two-step rounding procedure. First, a critical observation is that generating a random Gaussian vector g ∈ Rn and computing the vector Z∗g ∈ Rn results in an unbiased estimator for the trace of (Z∗)>AZ∗ in the following sense: E [ g>(Z∗)>AZ∗g ] = Tr((Z∗)>AZ∗).\nUsing von Neumann’s trace inequality, we can prove that E [ g>(Z∗)>AZ∗g ] = Tr((Z∗)>AZ∗) ≥ γZ∗ · Tr(AZ∗) ≥ γZ · Z∗. (5)\nHere γZ∗ is a constant that precisely depends on the top singular value of Z∗, the condition number of Z∗, and the extent to which the SDP relaxation of eqn. (2) is able to capture the original problem (see Theorem 4.1 for the exact expression of γZ∗). The above inequality implies that, at least in expectation, we could use the vector Z∗g as a “rounding” of the output of the semidefinite programming relaxation. However, there is absolutely no guarantee that the vector Z∗g is sparse. Thus, in order to sparsify Z∗g, we employ a separate sparsification procedure, where each entry of Z∗g is kept (and rescaled) with probability proportional to its magnitude. This procedure is similar to the one proposed in (Fountoulakis et al., 2017) and guarantees that larger entries of Z∗g are more likely to be kept, while smaller entries of Z∗g are more likely to be set to zero, without too much loss in accuracy. We also note that to ensure a sufficiently high probability of success for the overall approach, we generate multiple Gaussian vectors and keep the one that maximizes the quantity g>(Z∗)>AZ∗g. See Algorithm 4 and Algorithm 5 for a detailed presentation of our approach.\nAlgorithm 4 SPARSIFY Input: y ∈ Rn and sparsity parameter s. Output: z ∈ Rn with E [‖z‖0] ≤ s.\n1: for i = 1 . . . n do\n2: zi :=\n{ 1 pi yi, with probability pi = min { 1, s|yi|‖y‖1 } ,\n0 otherwise.\nThe running time of the algorithm is dominated by the time needed to solve the semidefinite programming relaxation of eqn. (2), which, in our setting, is O ( n3.5 ) (Alizadeh, 1995). We do note that SDP solvers such as the one in (Alizadeh, 1995) return an additive error approximation to the optimal solution. However, the running time dependence of SDP solvers on the additive error γ is logarithimic in 1/γ and thus highly accurate approximations can be derived without a significant increase in the number of iterations of the solver. Thus, for the sake of clarity, we initially omit this additive error from the analysis and address the approximate solution at the end.\nOur main quality-of-approximation result for Algorithm 5 is Theorem 4.1. For simplicity of presentation, and following the lines of (Fountoulakis et al., 2017), we assume that all rows and columns\nAlgorithm 5 Rounding-based SPCA\nInput: PSD matrix A ∈ Rn×n, error tolerance > 0, and sparsity parameter k. Output: x ∈ Rn with E [‖x‖0] = s.\n1: Let Z∗ be the optimal solution to the relaxed SPCA problem of eqn. (2); 2: M ← 80/ 2 and s← O\n( k2 log5/2(1/ )\n2\n) ; .See Theorem A.13 for the exact value of s.\n3: Generate M random Gaussian vectors g1, . . . , gM in Rn; 4: y← Z∗gj , where j ← argmaxi=1...M g>i (Z∗)>AZ∗gi; 5: z← SPARSIFY(y, s);\nof A have been normalized to have unit norms. This assumption can be relaxed as in (Fountoulakis et al., 2017). 
In the statement of the theorem, we will use the notation Z1 to denote the best rank-one approximation to the matrix Z.\nTheorem 4.1 Given a PSD matrix A ∈ Rn×n, a sparsity parameter k, and an error tolerance > 0, let Z be an optimal solution to the relaxed SPCA problem of eqn. (2). Assume that\nTr(AZ) ≤ αTr(AZ1) (6)\nfor some constant α ≥ 1. Then, Algorithm 5 outputs a vector z ∈ Rn that, with probability at least 5/8, satisfies E [‖z‖0] = O ( k2 log5/2(1/ )/ 2 ) , ‖z‖2 = O (√ log 1/ ) , and\nz>Az ≥ γZ(1− ) · Z∗ − . Here γZ = ( 1− ( 1− 1κ(Z) )( 1− 1α ))\nσ1(Z) with σ1(Z) and κ(Z) being the top singular value and condition number of Z respectively. Similar to Theorem 2.2, the probability of success can be boosted to 1− δ by repeating the algorithm O ( 1 δ ) times in parallel. Moreover by using Markov’s inequality, we can also guarantee a vector z\nwith sparsity O ( k2 log5/2(1/ )/ 2 ) with probability 1− δ, rather than just in expectation.\nWe now discuss the condition of eqn. (6) and the constant γZ . Our assumption simply says that much of the trace of the matrix AZ should be captured by the trace of AZ1, as quantified by the constant α. For example, if Z were a rank-one matrix, then the assumption would hold with α = 1. As the trace of AZ1 fails to approximate the trace of AZ (which intuitively implies that the SDP relaxation of eqn. (2) did not sufficiently capture the original problem) the constant α increases and the quality of the approximation decreases. More precisely, first notice that the constant γZ is upper bounded by one, because σ1(Z) ≤ 1 by the SDP relaxation. Second, the quality of the approximation increases as γZ approaches one. This happens if either the condition number of Z is close to one or if the constant α is close to one; at the same time, σ1(Z) also needs to be close to one. Clearly, these conditions are satisfied if Z is well approximated by Z1. In our experiments, we indeed observed that α is close to one and that the top singular value of Z is close to one, which imply that γZ is also close to one (Appendix, Table 6). The proof of Theorem 4.1 is delegated to Appendix A.3 (as Theorem A.13), but we outline here a summary of statements that lead to the final bound.\nLemma 4.2 Let y and z be defined as in Algorithm 5. If ‖y‖1 ≤ α and ‖y‖2 ≤ β, then\n|y>Ay − z>Az| ≤ 2|y>A(y − z)|+ |(y − z)>A(y − z)|.\nMoreover, with probability at least 7/8, we have |y>A(y− z)| ≤ 4αβ/√s and |(y− z)>A(y− z)| ≤√ 64α4\ns2 + 96αβ3 s .\nLemma 4.3 Let M = 80/ 2, α = k(1 + 2 √ logM), and β = 2 √\nlogM . If the sparsity parameter s is set to s = 450α2β3/ 2, then with probability at least 3/4, we have y>Ay ≤ z>Az + .\nLetting y = Z∗g, we now conclude the proof by combining eqn. (5) with the above lemma to bound (at least in expectation) the accuracy of Algorithm 5. To get a high probability bound, we leverage a result by (Avron & Toledo, 2011) on estimating the trace of PSD matrices. This approach allows us\nto properly analyze step 4 of Algorithm 5, which uses multiple random Gaussian vectors to achieve measure concentration (see Appendix A.3 for details).\nFinally, we can bound the `2 norm of the vector z of Algorithm 5 by proving that, with probability at least 3/4, ‖z‖2 = O (√ log 1/ ) . Notice that this slightly relaxes the requirement that z has unit\nnorm; however, even for accuracy close to machine precision, √ log 1/ is a small constant.\nUsing approximate SDP solution. The guarantees of Theorem 4.1 in Algorithm 5 uses an optimal solution Z∗ to the SDP relaxation in eqn. (2). 
In practice, we will only obtain an approximate solution Z̃ to eqn. (2) using any standard SDP solver, e.g. (Alizadeh, 1995), such that Tr(AZ̃) ≥ Tr(AZ∗)− afterO ( log 1 ) iterations. Since our analysis only uses the relationship (x∗)>Ax∗ ≤ Tr(AZ∗), then the additive guarantee can be absorbed into the other factors in the guarantees of Theorem 4.1. Thus, we recover the same guarantees of Theorem 4.1 by using an approximate solution to the SDP relaxation in eqn. (2)." }, { "heading": "5 EXPERIMENTS", "text": "We compare the outputs our algorithms with that of the state-of-the-art sparse PCA solvers such as the coordinate-wise optimization algorithm of Beck & Vaisbourd (2016) (cwpca) and the block decomposition algorithm of Yuan et al. (2019) (dec), along with other standard methods such as Papailiopoulos et al. (2013) (spca-lowrank), d’Aspremont et al. (2007) (dspca), and Zou et al. (2006) (spca). For the implementation of dec, we use coordinate descent method with workset: (6, 6) and for cwpca, we use greedy coordinate wise (GCW) method.\nFirst, in order to explore the sparsity patterns of the outputs of our algorithms and how close they are as compared to standard methods, we first apply our methods on the pit props data which was introduced in (Jeffers, 1967) and is a benchmark example used to test sparse PCA. It is a 13× 13 correlation matrix, originally calculated from 180 observations with 13 explanatory variables. While the existing algorithms aimed to extract top 6 principal components, we restrict only to the top principal component. In particular, we are interested to apply our algorithms on the Pit Props matrix in a view to extract the top pc having a sparsity pattern similar to that of Beck & Vaisbourd (2016), Yuan et al. (2019), (Zou et al., 2006), Papailiopoulos et al. (2013), and (d’Aspremont et al., 2007). It is known that the decomposition method of Yuan et al. (2019) can find the global optimal solution. We take k = 7 and Table 1 shows that while spca-d (Algorithm 3) and spca-r (Algorithm 2) perform very similar to spca or that of dspca with % of variace explained (PVE) uniformly better than spca, our SDP-based method spca-sdp (Algorithm 5) exactly recovers the optimal solution and the output matches with both dec and cwpca and very close to spca-lowrank. We also\napply our algorithms on another benchmark artificial example from (Zou et al., 2006), please see Appendix B.3 for a detailed discussion.\nTightness of our bounds. Now, we also verify the tightness of the theoretical lower bounds of our results with the guarantee of (Papailiopoulos et al., 2013) on the pit props data. We take = 0.1 and found that the lower bound of our spca-sdp (Algorithm 5) (dashed red line on the left) is indeed very close to that of Papailiopoulos et al. (2013) with d=3 (dashed grey line on the left) . Nevertheless, the accuracy parameter of Papailiopoulos et al. (2013) typically relies on the spectrum of A i.e., for a highly accurate output, can be much smaller depending on the structure of A, in which case the difference between the lower bounds\nof our Algorithm 5 and Papailiopoulos et al. (2013) becomes even smaller. Next, we further demonstrate the empirical performance of our algorithms on larger real-world datasets as well as on a synthetic dataset, similar to (Fountoulakis et al., 2017) (see Appendix B). We use human genetic data from the HGDP (Consortium, 2007) and HAPMAP (Li et al., 2008) (22 matrices, one for each chromosome). 
In addition, we also use a lung cancer gene expression dataset (107×22, 215) from (Landi et al., 2008) and a sparse document-term matrix (2, 858×12, 427) created using the Text-to-Matrix Generator (TMG) (Zeimpekis & Gallopoulos, 2006) (see Appendix B). Comparisons and metrics. We compare our Algorithm 2 (spca-r), Algorithm 3 (spca-d), and Algorithm 5 (spca-sdp) with the solutions returned by spca (Zou et al., 2006), as well as the simple MaxComp heuristic (Cadima & Jolliffe, 1995). We define the quantity f(y) = y>Ay/‖A‖2 to measure the quality of an approximate solution y ∈ Rn to the SPCA problem. Notice that 0 ≤ f(y) ≤ 1 for all y with ‖y‖2 ≤ 1. As f(y) gets closer to one, the vector y captures more of the variance of the matrix A that corresponds to its top singular value and corresponding singular vector. Our goal is to identify a sparse vector y with f(y) ≈ 1. Since the outputs of our Algorithm 2 and Algorithm 5 may have norms that slightly exceed one and in order to have a fair comparison between different methods, we normalize our outputs in the same way as in (Fountoulakis et al., 2017) (Appendix B). Results. In our experiments, for spca-d, spca-r, and spca-sdp, we fix the sparsity s to be equal to k, so that all algorithms return the same number of non-zero elements. In Figures 1a-1c we evaluate the performance of the different SPCA algorithms by plotting f(y) against ‖y‖0, i.e., the sparsity of the output vector, on data from chromosome 1, chromosome 2, and the gene expression data. Note that performance of our SDP-based method (spca-sdp) is indeed comparable with the state-of-the-art dec, cwpca, and spca-lowrank, while both spca-d and spca-r are better than or at least comparable to both spca and maxcomp. However, in practice, the running time of the SDP relaxation is substantially higher than our other methods, which highlights the interesting trade-offs between the accuracy and computation discussed in (Amini & Wainwright, 2009). See Apeendix B for more experimental results." }, { "heading": "6 CONCLUSION AND OPEN PROBLEMS", "text": "We present three provably accurate, polynomial time, approximation algorithms for SPCA, without imposing restrictive assumptions on the input covariance matrix. Future directions include: (i) extend the proposed algorithms to handle more than one sparse singular vector by deflation or other strategies and (ii) explore matching lower bounds and/or improve the guarantees of Theorems 2.2, 3.1, and 4.1." }, { "heading": "Appendix to Approximation Algorithms for Sparse Principal Component Analysis", "text": "" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 SPCA VIA RANDOMIZED MATRIX MULTIPLICATION: PROOFS", "text": "First, we prove two lemmas that are crucial in proving Lemma 2.1.\nLemma A.1 Given matrices P ∈ Rm×n and Q ∈ Rn×p, let S ∈ Rn×n be constructed using Algorithm 1. Then,\nE [ (PS2Q)ij ] = (PQ)ij\nvar (PS2Q)ij = n∑ k=1 P2ikQ 2 kj pk − n∑ k=1 P2ikQ 2 kj .\nfor any indices i, j ∈ {1, . . . , n}.\nProof : For any i, j ∈ {1, . . . , n}, we have that\nE [ (PS2Q)ij ] = E [ n∑ k=1 PikS 2 kkQkj ] = E [ n∑ k=1 Pik ( Z2k pk ) Qkj ] = n∑ k=1 ( PikQkj pk ) E [ Z2k ]\n= n∑ k=1 PikQkj = (PQ)ij ,\nsince Z2k d = Zk ∼ Ber(p) and thus E [ Z2k ] = E [Zk] = pk, where d = denotes equality in distribution. 
By the independence of the Zk’s, and noting that var (Zk) = pk(1− pk), we have that\nvar (PS2Q)ij = var n∑ k=1 Pik ( Z2k pk ) Qkj\n= n∑ k=1 ( PikQkj pk )2 varZ2k = n∑ k=1 ( 1− pk pk ) P2ikQ 2 kj\n= n∑ k=1 P2ikQ 2 kj pk − n∑ k=1 P2ikQ 2 kj .\n2\nLemma A.2 Given matrices P ∈ Rm×n and Q ∈ Rn×p, let S ∈ Rn×n be constructed using Algorithm 1. Then,\nE [ ‖PS2Q−PQ‖2F ] = n∑ i=1 ‖P∗i‖22 · ‖Qi∗‖22 pi − n∑ i=1 ‖P∗i‖22 ‖Qi∗‖22 . (7)\nHere P∗i and Qi∗ are the i-th column of P and i-th row of Q respectively.\nProof : Using Lemma A.1, we have that\nE [ ‖PQ−PS2Q‖2F ] = m∑ i=1 p∑ j=1 E [( (PQ)ij − (PS2Q)ij )2] = m∑ i=1 p∑ j=1 var (PS2Q)ij\n= m∑ i=1 p∑ j=1 [ n∑ k=1 P2ikQ 2 kj pk − n∑ k=1 P2ikQ 2 kj ]\n= n∑ k=1 ( 1 pk − 1 )( m∑ i=1 P2ik ) p∑ j=1 Q2kj =\nn∑ k=1 ‖P∗k‖22 ‖Qk∗‖22 pk − n∑ k=1 ‖P∗k‖22 ‖Qk∗‖22 .\n2\nProof of Lemma 2.1: From Lemma A.2, E [ ‖PS2Q−PQ‖2F ] = n∑ i=1 ‖P∗i‖22 · ‖Qi∗‖22 pi − n∑ i=1 ‖P∗i‖22 ‖Qi∗‖22\n= ∑\n{i:p̃i≤1/s}\n‖P∗i‖22 · ‖Qi∗‖22 sp̃i\n+ ∑\n{i:p̃i>1/s}\n‖P∗i‖22 · ‖Qi∗‖22 − n∑ i=1 ‖P∗i‖22 ‖Qi∗‖22\n≤ ∑\n{i:p̃i≤1/s}\n‖P∗i‖22 · ‖Qi∗‖22 sp̃i ≤ n∑ i=1 ‖P∗i‖22 · ‖Qi∗‖22 sp̃i .\nWe conclude the proof by setting p̃i = ‖P∗i‖ 2 2/‖P‖2F .\nProof of Theorem 2.2. Proof : In Lemma 2.1, let P = X and Q = x∗ to get\nE [ ‖XS2x∗ −Xx∗‖22 ] ≤ 1 s ‖X‖2F · ‖x ∗‖22 ≤ 1 s ‖X‖2F . (8)\nThe last inequality follows from ‖x∗‖2 ≤ 1. Moreover, by Markov’s inequality, with probability at least 3/4,\n‖XS2x∗ −Xx∗‖22 ≤ 4\ns ‖X‖2F =\n2\nk ‖X‖2F . (9)\nLet x̃ = Sx∗. Taking square roots of both sides of the above inequality, and applying the triangle inequality on the left hand side of the above inequality, we get:\n|‖Xx∗‖2 − ‖XSx̃‖2| ≤ √ k ‖X‖F\n⇒ ‖XSx̃‖2 ≥ √ Z∗ − √\nk ‖X‖F\n⇒ ‖XSx̃‖22 ≥ Z∗ + 2\nk ‖X‖2F − 2 √ k\n√ Z∗ · ‖X‖F . (10)\nNote that Z∗ = ‖Xx∗‖22. Ignoring the non-negative term in eqn. (10), we conclude\n‖XSx̃‖22 ≥ Z∗ − 2 √ k\n√ Z∗ · ‖X‖F . (11)\nNext, using sub-multiplicativity on the left hand side of eqn. (11),\n‖XSx̃‖22 ≤ ‖XS‖22‖x̃‖22 = ‖XSv‖22 ‖x̃‖22 , (12)\nwhere v ∈ Rn is the top right singular vector of XS. Letting x∗i be the i-th entry of x∗, we have,\nE(‖x̃‖22) = E ( n∑ i=1 x∗2i Z 2 i pi ) = n∑ i=1 x∗2i pi pi = ‖x∗‖22 = 1 , (13)\nsince E [ Z2i ] = pi. Using Markov’s inequality, with probability at least 1/2,\n‖x̃‖22 ≤ 2. (14)\nConditioning on this event, we can rewrite eqn. (12) as follows:\n‖XSx̃‖22 ≤ 2‖XSv‖22 = 2 (Sv)>X>X(Sv) = 2 y>X>Xy. (15)\nCombining eqns. (11) and (15), we conclude\ny>X>Xy ≥ 1 2 Z∗ − √\nk\n√ Z∗ · ‖X‖F .\nUsing X>X = A and Tr(A) = ‖X‖2F concludes the proof of eqn. (4). Finally, following the lines of eqn. (13), we can prove E(‖y‖22) = E(‖Sv‖22) = 1. To conclude the proof of the theorem, notice that the failure probability is at most 1/4 + 1/2 = 3/4 from a union bound on the failure probabilities of eqns. (9) and (14). 2" }, { "heading": "A.2 SPCA VIA THRESHOLDING: PROOFS", "text": "We will use the notation of Section 3. For notational convenience, let σ1, . . . , σn be the diagonal entries of the matrix Σ ∈ Rn×n, i.e., the singular values of A.\nProof of Lemma 3.2: Let U`,⊥ ∈ Rn×(n−`) be a matrix whose columns form a basis for the subspace perpendicular to the subspace spanned by the columns of U`. Similarly, let Σ`,⊥ ∈ R(n−`)×(n−`) be the diagonal matrix of the bottom n− ` singular values of A. Notice that U = [U` U`,⊥] and Σ = [Σ` 0; 0 Σ`,⊥]; thus,\nUΣ1/2U> = U`Σ 1/2 ` U > ` + U`,⊥Σ 1/2 `,⊥U > `,⊥.\nBy the Pythagorean theorem,∥∥∥UΣ1/2U>x∥∥∥2 2 = ∥∥∥U`Σ1/2` U>` x∥∥∥2 2 + ∥∥∥U`,⊥Σ1/2`,⊥U>`,⊥x∥∥∥2 2 . 
Using invariance properties of the vector two-norm and sub-multiplicativity, we get∥∥∥Σ1/2` U>` x∥∥∥2 2 ≥ ∥∥∥Σ1/2U>x∥∥∥2 2 − ∥∥∥Σ1/2`,⊥∥∥∥2 2\n∥∥U>`,⊥x∥∥22 . We conclude the proof by noting that ∥∥Σ1/2U>x∥∥2 2\n= x>UΣU>x = x>Ax and∥∥∥Σ1/2`,⊥∥∥∥2 2 = σ`+1 ≤ 1 ` n∑ i=1 σi = Tr(A) ` .\nThe inequality above follows since σ1 ≥ σ2 ≥ . . . σ` ≥ σ`+1 ≥ . . . ≥ σn. We conclude the proof by setting ` = 1/ .\nProof of Theorem 3.1. Let R = {i1, . . . , i|R|} be the set of indices of rows of U` (columns of U>` ) that have squared norms at least /k and let R̄ be its complement. Here |R| denotes the cardinality of the set R and R ∪ R̄ = {1, . . . , n}. Let R ∈ Rn×|R| be the sampling matrix that selects the columns of U` whose indices are in the set R and let R⊥ ∈ Rn×(n−|R|) be the sampling matrix that selects the columns of U` whose indices are in the set R̄. Thus, each column of R (respectively R⊥) has a single non-zero entry, equal to one, corresponding to one of the |R| (respectively |R̄|) selected columns. Formally, Rit,t = 1 for all t = 1, . . . , |R|, while all other entries of R (respectively R⊥) are set to zero; R⊥ can be defined analogously. The following properties are easy to prove: RR> + R⊥R > ⊥ = In; R >R = I; R>⊥R⊥ = I; R > ⊥R = 0. Recall that x\n∗ is the optimal solution to the SPCA problem from eqn. (1). We proceed as follows:∥∥∥Σ1/2` U>` x∥∥∥2 2 = ∥∥∥Σ1/2` U>` (RR> + R⊥R>⊥)x∥∥∥2 2\n≤ 2 ∥∥∥Σ1/2` U>` RR>x∗∥∥∥2\n2 + 2 ∥∥∥Σ1/2` U>` R⊥R>⊥x∗∥∥∥2 2\n≤ 2 ∥∥∥Σ1/2` U>` RR>x∗∥∥∥2\n2 + 2σ1 ∥∥U>` R⊥R>⊥x∗∥∥22 . (16) The above inequalities follow from the Pythagorean theorem and sub-multiplicativity. We now bound the second term in the right-hand side of the above inequality.∥∥U>` R⊥R>⊥x∗∥∥2 = ‖ n∑\ni=1\n(U>` R⊥)∗i(R > ⊥x ∗)i‖2\n≤ n∑ i=1 ‖(U>` R⊥)∗i‖2 · |(R>⊥x∗)i| ≤ √ k n∑ i=1 |(R>⊥x∗)i|\n≤ √\nk ‖R>⊥x∗‖1 ≤\n√\nk\n√ k = √ . (17)\nIn the above derivations we use standard properties of norms and the fact that the columns of U>` that have indices in the set R̄ have squared norm at most /k. The last inequality follows from ‖R>⊥x∗‖1 ≤ ‖x∗‖1 ≤ √ k, since x∗ has at most k non-zero entries and Euclidean norm at most one.\nRecall that the vector y of Algorithm 3 maximizes ‖Σ1/2` U>` Rx‖2 over all vectors x of appropriate dimensions (including Rx∗) and thus\n‖Σ1/2` U > ` Ry‖2 ≥ ∥∥∥Σ1/2` U>` RR>x∗∥∥∥ 2 . (18)\nCombining eqns. (16), (17), and (18), we get 1\n2 ∥∥∥Σ1/2` U>` x∗∥∥∥2 2 ≤ ‖Σ1/2` U > ` z‖22 + Tr(A). (19)\nIn the above we used z = Ry (as in Algorithm 3) and σ1 ≤ Tr(A). Notice that\nU`Σ 1/2 ` U > ` z + U`,⊥Σ 1/2 `,⊥U > `,⊥z = UΣ 1/2U>z,\nand use the Pythagorean theorem to get\n‖U`Σ1/2` U > ` z‖22 + ‖U`,⊥Σ 1/2 `,⊥U > `,⊥z|22 = ‖UΣ1/2U>z‖22.\nUsing the unitary invariance of the two norm and dropping a non-negative term, we get the bound\n‖Σ1/2` U > ` z‖22 ≤ ‖Σ1/2U>z‖22. (20)\nCombining eqns. (20) and (19), we conclude 1\n2 ∥∥∥Σ1/2` U>` x∗∥∥∥2 2 ≤ ‖Σ1/2U>z‖22 + Tr(A). (21)\nWe now apply Lemma 3.2 to the optimal vector x∗ to get∥∥∥Σ1/2U>x∗∥∥∥2 2 − Tr(A) ≤ ∥∥∥Σ1/2` U>` x∗∥∥∥2 2 .\nCombining with eqn. (21) we get\nz>Az ≥ 1 2 Z∗ − 3 2 Tr(A).\nIn the above we used ‖Σ1/2U>z‖22 = z>Az and ∥∥Σ1/2U>x∗∥∥2 2 = (x∗)>Ax∗ = Z∗." }, { "heading": "A.3 SPCA VIA A SEMIDEFINITE PROGRAMMING RELAXATION: PROOFS", "text": "We start with a lemma arguing that the sparsification procedure of Algorithm 5 does not significantly distort the `2 norm of the input/output vectors.\nLemma A.3 Let y and z be defined as in Algorithm 5. 
If ‖y‖1 ≤ α, then, with probability at least 15/16,\n‖z− y‖22 ≤ 16α2\ns .\nProof : Notice that E [ ‖z− y‖22 ] = n∑ i=1 ( 1 pi − 1 ) y2i ≤ n∑ i=1 y2i pi ≤ ‖y‖1 n∑ i=1 yi s = ‖y‖21 s ,\nwhich is at most α 2\ns from our assumptions. The lemma follows by Markov’s inequality. 2\nTo prove Lemma 4.3, we start with the following consequence of the triangle inequality, e.g., Lemma 3 from (Fountoulakis et al., 2017).\nLemma A.4 Let y and z be defined as in Algorithm 5. Then\n|y>Ay − z>Az| ≤ 2|y>A(y − z)|+ |(y − z)>A(y − z)|.\nProof : This is Lemma 3 from (Fountoulakis et al., 2017). 2\nWe now proceed to upper bound the two terms in the above lemma separately. The following lemma bounds the first term.\nLemma A.5 Let y and z be defined as in Algorithm 5. If ‖y‖1 ≤ α and ‖y‖2 ≤ β, then, with probability at least 15/16, |y>A(y − z)| ≤ 4αβ/√s.\nProof : Recall that we set zi = yi/pi with probability pi and zero otherwise, for all i = 1 . . . n. Then,\nE [ (y>A(y − z))2 ] = n∑ i=1 ( 1 pi − 1 ) y2i (Ai∗y) 2.\nUsing |Ai∗y| ≤ ‖Ai∗‖2 ‖y‖2 ≤ β (from our assumption on the `2 norm of y as well as our assumption that the rows/columns of A have unit norm), it follows that\nE [ (y>A(y − z))2 ] ≤ β2 n∑ i=1 y2i pi ≤ β2 ‖y‖21 s ≤ α 2β2 s .\nThe lemma follows from Markov’s inequality. 2\nThe next lemma provides an upper bound for the second term in the right hand side of Lemma A.4.\nLemma A.6 Let y and z be defined as in Algorithm 5. If ‖y‖1 ≤ α and ‖y‖2 ≤ β, then, with probability at least 15/16,\n|(y − z)>A(y − z)| ≤ √ 64α4\ns2 +\n96αβ3\ns .\nProof : Let ζi = 1pi with probability pi and zero otherwise for all i = 1 . . . n. Then, E [( (y − z)>A(z− y) )2] = ∑ a,b,c,d Aa,cAb,dyaybycyd · E [(1− ζa)(1− ζb)(1− ζc)(1− ζd)] .\nWe immediately have E [1− ζi] = 0. Thus, if any of the indices a, b, c, d appears only once in the above summation, then E [(1− ζa)(1− ζb)(1− ζc)(1− ζd)] = 0. Let\nB1 = ∑ a 6=b A2a,by 2 ay 2 bE [ (1− ζa)2(1− ζb)2 ] ,\nB2 = ∑ a 6=b Aa,aAb,by 2 ay 2 bE [ (1− ζa)2(1− ζb)2 ] ,\nB3 = ∑ a 6=b Aa,bAb,ay 2 ay 2 bE [ (1− ζa)2(1− ζb)2 ] ,\nB4 = n∑ a=1 A2a,ay 4 aE [ (1− ζa)4 ] .\nIt now follows that\nE [( (y − z)>A(z− y) )2] = 4∑ i=1 Bi.\nUsing |Ai,j | ≤ 1 for all i, j, we can bound B1, B2, and B3 by\nmax i=1,2,3 {Bi} ≤ n∑ a=1 y2aE [ (1− ζa)2 ] n∑ b=1 y2bE [ (1− ζb)2 ] .\nUsing E [ (1− ζi)2 ] = 1pi − 1 for all i, we get\nmax i=1,2,3\n{Bi}) ≤ (∑ a=1 ( 1 pa − 1 ) y2a )2 ≤ ( ‖y‖21 s )2 ≤ α 2 s2 ,\nwhere the inequality follows by ‖y‖1 ≤ α. To bound B4, use ‖y‖1 ≤ α and ‖y‖2 ≤ β, to get\nB4 = n∑ a=1 A2a,ay 4 aE [ (1− ζa)4 ] ≤ n∑ a=1 y4aE [ (1− ζa)4 ] = n∑ a=1 y4a ( 1 p3a − 4 p2a + 6 pa − 4 + pa )\n≤ n∑ a=1 y4a ( ‖y‖31 |y3a|s3 + 6 ‖y‖1 |ya|s ) ≤ ‖y‖41 s3 + 6 ‖y‖3 ‖y‖ 3 2 s ≤ α 4 s3 + 6αβ3 s .\nThe last inequality follows from properties of norms, namely that ‖y‖33 ≤ ‖y‖ 3 2. Thus,\nE [( (y − z)>A(z− y) )2] ≤ α4\ns3 +\n3α4\ns2 +\n6αβ3\ns ≤ 4α\n4\ns2 +\n6αβ3\ns .\nUsing Markov’s inequality, we conclude\nPr [ |(z− y)>A(z− y)| ≥ 4 √ 4α4\ns2 +\n6αβ3\ns\n] ≤ 1\n16 .\n2\nProof of Lemma 4.2: Observe that Lemma 4.2 follows immediately from Lemma A.4, Lemma A.5, Lemma A.6, and a union bound. 2\nThe next two lemmas bound the `2 and `1 norms of the vectors yi for all i = 1 . . .M . We will bound the norm of a single vector yi (we will drop the index) and then apply a union bound on all M vectors.\nLemma A.7 Let y be defined as in Algorithm 5. Then, Pr [ ‖y‖2 ≥ 2 √ logM ] ≤ 1 M2 .\nProof : Let Z = UΣV> be the singular value decomposition of Z and let σi = Σi,i, for i = 1 . . . n, be the singular values of Z. 
Since Tr(Z) = 1, it follows that ∑n i=1 σi = 1 and also σi ≤ 1 for all i = 1 . . . n. Additionally, n∑ i=1 σ2i ≤ n∑ i=1 σi ≤ 1. (22)\nThen, ‖y‖22 = ‖Zg‖ 2 2 = g >Z>Zg = g>VΣ2V>g = ∥∥ΣV>g∥∥2 2 . The rotational invariance of the Gaussian distribution implies that y2 ∼ h2, where h is a random vector whose i-th entry hi satisfies hi ∼ N (0, σ2i ). Hence,\nE [ ‖y‖22 ] = E [ ‖h‖22 ] = n∑ i=1 σ2i ≤ 1.\nNow, from Markov’s inequality, for any C > 0,\nPr [ ‖y‖2 ≥ t+ logM\nt\n] = Pr [ eC‖y‖2 ≥ eCt+C logM/t ] ≤ E [ eC‖y‖2 ] eCtMC/t .\nThen, E [ eC‖y‖2 ] = E [ eC‖h‖2 ]\n≤ n∏ i=1 2√ 2πσi ∫ ∞ 0 eCxe−x 2/2σ2i dx\n= n∏ i=1 ( 2√ 2πσi eC 2σ2i /2 ∫ ∞ 0 e − ( x√ 2σi −Cσi√ 2 )2 dx )\n= n∏ i=1 ( 2√ π eC 2σ2i /2 ∫ ∞ 0 e−t 2 dt ) = n∏ i=1 eC 2σ2i /2 = exp ( n∑ i=1 C2σ2i /2 ) .\nUsing eqn. (22), we get\nE [ eC‖y‖2 ] ≤ eC 2/2.\nSetting C = 2t and ≤ 1, we get\nPr [ ‖y‖2 ≥ t+ logM\nt\n] ≤ e C2/2\neCtMC/t =\n1\nM2 .\nSetting t = √ logM concludes the proof. 2\nPrior to bounding the `1 norm of y, we present a measure concentration result that will be useful in our proof. First, recall the definition of L-Lipschitz functions.\nDefinition A.8 Let f : Rn → R be any function. If ‖f(x)− f(y)‖2 ≤ L ‖x− y‖2 for all x,y ∈ Rn, then f is L-Lipschitz.\nTheorem A.9 (Gaussian Lipschitz Concentration) (Wainwright, 2015) Let f be an L-Lipschitz function and let g ∈ Rn be a vector of i.i.d. Gaussians. Then f(x) is sub-Gaussian with variance L2 and, for all t ≥ 0,\nPr [|f(x)− E [f(x)] | ≥ t] ≤ 2e−t 2/2L2 .\nLemma A.10 Let y be defined as in Algorithm 5. Then, Pr [ ‖y‖1 ≥ k(1 + 2k √ logM) ] ≤ 1 M2 .\nProof : Since gj ∼ N (0, 1) for all j = 1 . . . n, the 2-stability of the Gaussian distribution implies that\nE [‖Zg‖1] = n∑ i=1 ∣∣∣∣∣∣ n∑ j=1 Zi,jgj ∣∣∣∣∣∣ = n∑ i=1 √ 2 π ‖Zi∗‖2 = √ 2 π ‖Z‖1,2 .\nLet f(x) = ‖Zx‖1. The triangle inequality implies that\n| ‖Zx‖1 − ‖Zy‖1 | ≤ n∑ i=1 |Zi∗x− Zi∗y| = n∑ i=1 |Zi∗(x− y)| .\nThus, by Cauchy-Schwarz,\n| ‖Zx‖1 − ‖Zy‖1 | ≤ n∑ i=1 ‖Zi∗‖2 ‖x− y‖2 ,\nand f(x) is ‖Z‖1,2-Lipschitz4. Using Theorem A.9,\nPr [∣∣∣∣∣‖y‖1 − √ 2 π ‖Z‖1,2 ∣∣∣∣∣ ≥ t ] ≤ 2e−t 2/2‖Z‖21,2 ,\n4Recall that the Lp,q norm of A is ‖A‖p,q = (∑n i=1 (∑n j=1 |Ai,j | q ) p q ) 1 p , e.g., ‖A‖F = ‖A‖2,2.\nfor all t ≥ 0. Setting t = 2 √\nlogM and noting that ‖Z‖1,2 ≤ ‖Z‖1,1 ≤ k, we get\nPr [ ‖y‖1 ≥ k(1 + 2 √ logM) ] ≤ 1 M2 .\n2\nProof of Lemma 4.3: Using Lemma A.7 and Lemma A.10, we conclude that ‖y‖1 ≤ α and ‖y‖2 ≤ β both hold with probability at least 1− 2 M . Using Lemma A.4, we get\n|y>Ay − z>Az| ≤ 2|y>A(y − z)|+ |(y − z)>A(y − z)|.\nSince |y>A(y − z)| ≤ 4αβ√ s with probability at least 1516 (by Lemma A.5) and\n|(y − z)>A(y − z)| ≤ √ 64α4\ns2 +\n96αβ3\ns\nwith probability at least 1516 (by Lemma A.6), setting s = 450α2β3 2 , we get\ny>Ay ≤ z>Az + ,\nwith probability at least 14/16− 2/M ≥ 3/4, since M ≥ 16. 2\nWe now prove an inequality that was used in eqn. (5) to compare y>Ay and Tr(AZ).\nLemma A.11 Let Z,A ∈ Rn×n be PSD matrices and Tr(AZ) ≤ αTr(AZ1) for some α ≥ 1, where Z1 is the best rank-1 approximation of Z. Then,\nTr(ZAZ) ≥ γZ .Tr(AZ) . Here γZ = ( 1− ( 1− 1κ(Z) )( 1− 1α )) σ1(Z) with σ1(Z) and κ(Z) being the top singular value and condition number of Z respectively.\nProof : For simplicity of exposition, assume that rank(Z) = n. Let Z = UΣU be the SVD of Z. Suppose U = (U1 U1,⊥) and Σ = (\nΣ1 0 0 Σ1,⊥\n) such that we have Z1 = U1Σ1U>1 and Z1,⊥ =\nU1,⊥Σ1,⊥U > 1,⊥. As Z1 is the best rank-1 approximation of Z, we have Z1Z1,⊥ = Z1,⊥Z1 = 0. 
Using this, we rewrite Tr(ZAZ) as the following\nTr(ZAZ) = Tr ((Z1 + Z1,⊥)A(Z1 + Z1,⊥))\n= Tr(Z1AZ1) + Tr(Z1,⊥AZ1,⊥) + Tr(Z1AZ1,⊥) + Tr(Z1,⊥AZ1)\n= Tr(Z1AZ1) + Tr(Z1,⊥AZ1,⊥) + Tr(AZ1,⊥Z1) + Tr(AZ1Z1,⊥)\n= Tr(Z1AZ1) + Tr(Z1,⊥AZ1,⊥) , (23)\nwhere the third equality follows from the invariance of matrix trace under cyclic permutations and the last step is due to Z1Z1,⊥ = Z1,⊥Z1 = 0.\nNext, we rewrite Tr(AZ) as\nTr(AZ) = Tr(AZ1) + Tr(AZ1,⊥) = Tr(AZ1Z † 1Z1) + Tr(AZ1,⊥)\n= Tr(Z†1Z1AZ1) + Tr(AZ1,⊥) ≤ σ1(Z†1) · σ1(Z1AZ1) + Tr(AZ1,⊥) (24)\nwhere Z†1 is the pseudo-inverse of Z1 and we have used the fact Z1Z † 1Z1 = Z1 and the last inequality follows from the von Neumann’s trace inequality. Now, noting that σ1(Z † 1) = 1 σ1(Z)\nalong with the fact that σ1(Z1AZ1) ≤ Tr(Z1AZ1) applying eqn. (23), we have\nTr(AZ) ≤ 1 σ1(Z)\n( Tr(ZAZ)− Tr(Z1,⊥AZ1,⊥) ) + Tr(AZ1,⊥) (25)\nNext, we will show that Tr(Z1,⊥AZ1,⊥) ≥ σn(Z)·Tr(AZ1,⊥). First, note that Σ1,⊥ < σn(Z)·In−1, as σn(Z) ≤ σi(Z) for all i = 2, . . . , n. Therefore, pre- and post-multiplying both sides by Σ 1/2 1,⊥, we further have Σ21,⊥ < σn(Z) ·Σ1,⊥. Again, pre- and post-multiplying both sides by U1,⊥ and U>1,⊥, we have:\nZ21,⊥ = U1,⊥Σ 2 1,⊥U > 1,⊥ < σn(Z) ·U1,⊥Σ1,⊥U>1,⊥ = σn(Z) · Z1,⊥ . (26)\nAs the matrix A is also PSD, it has a PSD square-root A1/2 such that A = A1/2 ·A1/2. Now, preand post-multiplying both sides of eqn. (26) by A1/2, we have\nA 1/2Z21,⊥A 1/2 < σn(Z) ·A 1/2Z1,⊥A 1/2 (27)\nNext, we rewrite Tr(Z1,⊥AZ1,⊥) as follows\nTr(Z1,⊥AZ1,⊥) = Tr(AZ 2 1,⊥) = Tr(A 1/2Z21,⊥A 1/2) = n∑ i=1 e>i A 1/2Z21,⊥A 1/2ei\n≥ σn(Z) · n∑ i=1 e>i A 1/2Z1,⊥A 1/2ei = σn(Z) · Tr ( A 1/2Z1,⊥A 1/2 )\n= σn(Z) · Tr (AZ1,⊥) . (28) In the above, e1, e2, . . . , en ∈ Rn are the canonical basis vectors and we have used the invariance property of matrix trace under cyclic permutations. Finally, the inequality in eqn. (28) directly follows from eqn. (27), as eqn. (27) boils down to x>A1/2Z21,⊥A\n1/2x ≥ σn(Z) · x>A1/2Z1,⊥A1/2x for any vector x 6= 0.\nNext, we combine eqns. (25) and (28) and replacing κ(Z) = σ1(Z)σn(Z) to get\nTr(AZ) ≤ Tr(ZAZ) σ1(Z) +\n( 1− 1\nκ(Z)\n) Tr(AZ1,⊥)\n= Tr(ZAZ)\nσ1(Z) +\n( 1− 1\nκ(Z)\n) (Tr(AZ)− Tr(AZ1))\n≤ Tr(ZAZ) σ1(Z) +\n( 1− 1\nκ(Z)\n) ( 1− 1\nα\n) Tr(AZ) , (29)\nwhere the equality is holds as Tr(AZ) = Tr(AZ1) + Tr(AZ1,⊥) and the last inequality is due to our assumption that Tr(AZ) ≤ αTr(AZ1). This concludes the proof. 2\nTo finalize our proof, we use the following result of (Avron & Toledo, 2011) for estimating the trace of PSD matrices.\nTheorem A.12 (Avron & Toledo, 2011) Given a PSD matrix A ∈ Rn×n, let M = 80/ 2. Let gi (for i = 1 . . .M ) be standard Gaussian random vectors. Then with probability at least 7/8,∣∣∣∣∣Tr(A)− 1M M∑ i=1 g>i Agi ∣∣∣∣∣ ≤ · Tr(A). We are now ready to prove the correctness of Algorithm 5 by establishing Theorem A.13.\nTheorem A.13 Let Z be an optimal solution to the relaxed SPCA problem of eqn. (2) and Assume that Tr(AZ) ≤ αTr(AZ1) for some constant α ≥ 1, where Z1 is the best rank-1 approximation of Z. 
Then, there exists an algorithm that takes as input a PSD matrix A ∈ Rn×n, an approximation parameter > 0, and a parameter k, and outputs a vector z such that with probability at least 5/8,\nE [‖z‖0] ≤ s = 450α2β3 2 , ‖z‖2 ≤ β + α√ s , z>Az ≥ γZ(1− ) · Z∗ − .\nIn the above, α = k(1 + 2 √ logM), β = 2 √ logM , and M = 80/ 2, and γZ =( 1− ( 1− 1κ(Z) )( 1− 1α )) σ1(Z) with σ1(Z) and κ(Z) being the top singular value and condition number of Z respectively.\nProof : Consider Algorithm 5 and let Z∗ be an optimal solution to the SPCA semidefinite relaxation of eqn. (2). Then, as already discussed, (x∗)>Ax∗ ≤ Tr(AZ∗), where x∗ is the optimal solution to the SPCA problem of eqn. (1). Then, using Lemma A.11, it follows that\nγZ∗ Tr(AZ ∗) ≤ Tr ( (Z∗)>AZ∗ ) .\nApplying Theorem A.12 to the matrix (Z∗)>AZ∗ and using our choice of y in Algorithm 4, we get\ny>Ay ≥ 1 M M∑ i=1 g>i (Z ∗)>AZ∗gi ≥ (1− ) Tr ( (Z∗)>AZ∗ ) ,\nwith probability at least 7/8. By Lemma 4.3, we have y>Ay ≤ z>Az + with probability at least 3/4. Thus, with probability at least 5/8,\n(1− )γZ∗ · Z∗ = (1− )γZ∗ · (x∗)>Ax∗ ≤ z>Az + .\nTo conclude the proof, we need to bound the `2 norm of the solution vector z. Let E be the event that ‖Zgi‖1 ≤ k(1 + 2 √ logM) and ‖Zgi‖2 ≤ 2 √ logM for all i = 1 . . .M . From Lemma A.7 and Lemma A.10 and the union bound, we have Pr [E ] ≥ 1− 2M . Conditioned on E , Lemma A.3 implies that, with probability at least 15/16,\n‖y − z‖22 ≤ 16k2(1 + 2\n√ logM)2\ns .\nTherefore, with probability at least 1516 − 2 M ≥ 3 4 (since M ≥ 16), an application of the triangle inequality gets\n‖z‖2 ≤ ‖y‖2 + ‖z− y‖2 ≤ 2 √ logM + 4k(1 + 2 √ logM)√ s .\nUsing our chosen values for α and β concludes the proof. 2" }, { "heading": "B ADDITIONAL NOTES ON EXPERIMENTS", "text": "In addition, we normalize the outputs of Algorithm 2 and Algorithm 5 by keeping the rows and the columns of A corresponding to the nonzero elements of the output vectors and then getting the top singular vector of the induced matrix and padding it with zeros. The above two considerations make our comparisons fair in terms of function f(y) (see Section 5 for the definition of f(y)). For Algorithm 3, we fix the threshold parameter ` to 30 for human genetic data, as well as for the text data; we set ` = 10 for the gene expression data. Finally, for Algorithm 5, we fix M (the number of random Gaussian vectors) to 300 and we use Python’s cvxpy package to solve eqn. (2). All the experiments were implemented on a single-core Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz." }, { "heading": "B.1 REAL DATA", "text": "Population genetics data. We use population genetics data from the Human Genome Diversity Panel (Consortium, 2007) and the HAPMAP (Li et al., 2008). In particular, we use the 22 matrices (one for each chromosome) that encode all autosomal genotypes. Each matrix contains 2,240 rows and a varying number of columns that is equal to the number of single nucleotide polymorphisms (SNPs, well-known biallelic loci of genetic variation across the human genome) in the respective chromosome. The columns of each matrix were mean-centered as a preprocessing step. See Table 4 for summary statistics.\nGene expression data. We also use a lung cancer gene expression dataset (GSE10072) from from the NCBI Gene Expression Omnibus database (Landi et al., 2008). This dataset contains 107 samples (58 cases and 49 controls) and 22,215 features. 
Both the population genetics and the gene expression datasets are interesting in the context of sparse PCA beyond numerical evaluations, since the sparse components can be directly interpreted to identify small sets of SNPs or genes that capture the data variance.\nText classification data. We also evaluate our algorithms on a text classification dataset used in (Fountoulakis et al., 2017). This consists of two publicly available standard test collections for ad hoc information retrieval system evaluation: the Cranfield collection that contains 1, 398 abstracts of aerodynamics journal articles and the CISI (Centre for Inventions and Scientific Information) data that contains 1, 460 information science abstracts. Finally, using these two collections, a sparse, 2, 858 × 12, 427 document-term matrix was created using the Text-to-Matrix Generator (TMG) (Zeimpekis & Gallopoulos, 2006), with the entries representing the weight of each term in the corresponding document. See Table 5 for summary statistics." }, { "heading": "B.2 SYNTHETIC DATA", "text": "We also use a synthetic dataset generated using the same mechanism as in (Fountoulakis et al., 2017). Specifically, we construct the m× n matrix X such that X = UΣV> + Eσ. Here, Eσ is a noise matrix, containing i.i.d. Gaussian elements with zero mean and we set σ = 10−3; U ∈ Rm×m is a Hadamard matrix with normalized columns; Σ = (Σ̃ 0) ∈ Rm×n such that Σ̃ ∈ Rm×m is a diagonal matrix with Σ̃11 = 100 and Σ̃ii = e−i for i = 2, . . . ,m; V ∈ Rn×n such that V = Gn(θ)Ṽ, where Ṽ ∈ Rn×n is also a Hadamard matrix with normalized columns and\nGn(θ) = G(i1, i1 + 1, θ) G(i2, i2 + 1, θ) . . .G(in/4, in/4 + 1, θ),\nis a composition of n4 Givens rotation matrices with ik = n 2 + 2k − 1 for k = 1, 2, . . . , n 4 . Here G(i, j, θ) ∈ Rn×n be a Givens rotation matrix, which rotates the plane i − j by an angle θ. For θ ≈ 0.27π and n = 212, the matrix Gn(θ) rotates the bottom n2 components of the columns of Ṽ, making half of them almost zero and the rest half larger. Figure 2 shows the absolute values of the elements of the first column of the matrices V and Ṽ." }, { "heading": "B.3 ADDITIONAL EXPERIMENTS", "text": "Additionally, in order to further explore the sparsity patterns of the outputs of our algorithms and how close they are as compared to standard methods, we further apply our methods on a simulation example proposed by (Zou et al., 2006). We describe them below:\nArtificial Data of (Zou et al., 2006). In this example, three hidden factors V1, V2, and V3 are created in the following way:\nV1 ∼ N (0, 290), V2 ∼ N (0, 300),\nV3 = −0.3V1 + 0.925V2 + ε , ε ∼ N (0, 1) and\nV1, V2, and ε are independent.\nNext, we create 10 observable variables X1, X2, . . . , X10 in the following way:\nXi = V1 + εi1 , εi1 ∼ N (0, 1) , i = 1, 2, 3, 4 , Xi = V2 + εi2 , εi2 ∼ N (0, 1) , i = 5, 6, 7, 8 , Xi = V3 + εi3 , εi3 ∼ N (0, 1) , i = 9, 10 ,\nεij are independent i = 1, 2, . . . , 10; j = 1, 2, 3 .\nWe take A to be the exact covariance matrix of (X1 X2 . . . X10) to compute the top principal component. As the first two factors i.e., V1 and V2 are associated with four variables while the last one i.e., V3 is associated with only two variables and noting that all the three factors V1, V2, and V3 roughly have the same variance, V1 and V2 are almost equally important, and they are both significantly more important than V3. Therefore, for the first sparse principal component, the ideal solution would be to use either (X1, X2, X3, X4) or (X5, X6, X7, X8). 
Using the true covariance matrix and the oracle knowledge that the ideal sparsity is k = 4, we apply our algorithms and compare them with spca (Zou et al., 2006) as well as the SDP-based algorithm of d'Aspremont et al. (2007). We found that while two of our methods, namely spca-d (Algorithm 3) and spca-sdp (Algorithm 5), are able to identify the correct sparsity pattern of the optimal solution, spca-r (Algorithm 2) wrongly includes the variable X10 instead of X8, possibly due to the high correlation between V2 and V3 (see Table 2 for details). However, the output of spca-d is much more interpretable, even though it has slightly lower PVE than spca-r.

In our additional experiments on the large datasets, Figure 2b shows the performance of various SPCA algorithms on the synthetic data. Notice that the performance of the maxcomp heuristic is worse than spca as well as our algorithms. This is quite evident from the way we constructed the synthetic data. In particular, turning the bottom n/4 elements of Ṽ into large values guarantees that these would not be good elements to retain in the construction of the output vector in maxcomp, as they fail to capture the right sparsity pattern. On the other hand, our algorithms perform better than or comparably to spca. Similar to the real data, the performance of spca-sdp closely matches that of dec, cwpca, and spca-lowrank. In Figure 3, we demonstrate how our algorithms perform on CHR 3 and CHR 4 of the population genetics data. We see similar behavior to that observed for CHR 1 and CHR 2 in Figures 1a-1b. In Table 3, we report the variance f(y) captured by the output vectors of different methods for the text data, which again validates the accuracy of our algorithms.
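For concreteness, the following is a minimal sketch (in Python/NumPy; all function names are ours and this is an illustrative re-implementation, not the authors' code) of the quality metric f(y) from Section 5 and the Appendix B normalization step, applied here to the simple MaxComp heuristic of (Cadima & Jolliffe, 1995):

```python
import numpy as np

def f_metric(y, A):
    """Quality metric f(y) = y^T A y / ||A||_2 (Section 5).

    For a PSD matrix A and any y with ||y||_2 <= 1, 0 <= f(y) <= 1;
    f(y) -> 1 as y captures the variance of A's top principal component."""
    return float(y @ A @ y) / np.linalg.norm(A, 2)

def normalize_sparse_output(y, A):
    """Appendix B normalization: keep the rows/columns of A indexed by the
    support of y, take the top eigenvector of the induced submatrix, and
    pad it with zeros back to the full dimension."""
    support = np.flatnonzero(y)
    _, vecs = np.linalg.eigh(A[np.ix_(support, support)])
    out = np.zeros_like(y, dtype=float)
    out[support] = vecs[:, -1]   # eigh returns eigenvalues in ascending order
    return out

def maxcomp(A, k):
    """MaxComp heuristic: keep the k largest-magnitude entries of the top
    eigenvector of A, zero out the rest, then renormalize on the support."""
    v1 = np.linalg.eigh(A)[1][:, -1]
    y = np.zeros_like(v1)
    idx = np.argsort(-np.abs(v1))[:k]
    y[idx] = v1[idx]
    return normalize_sparse_output(y, A)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
A = X.T @ X                      # sample-covariance-style PSD matrix
y = maxcomp(A, k=5)
print(np.count_nonzero(y), f_metric(y, A))
```

The same post-processing is applied to the outputs of Algorithm 2 and Algorithm 5 before computing f(y), which keeps the comparison between methods fair.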
" } ]
2020
null
SP:c7c0fc5a3d6319117b445707e7818c6f292bf533
[ "The paper makes an analysis of calibration in ensembles of deep learning models. Through some theoretical developments, the paper supports that a given ensemble cannot be more confident than the average individual members for regions where the ensemble is well calibrated. Empirical results, on CIFAR-100 and three different deep models, report a comparison of ensemble calibration, where calibration is done over all members in order to achieved a calibrated ensemble decision, over individual calibration of members with no feedback from the ensemble decisions. Results show that individual member calibration does not lead to calibrated ensembles, and as such calibrating directly on the ensemble output is required for obtained a proper evaluation of its uncertainty. Different ensemble calibration approaches are also compared." ]
Underlying the use of statistical approaches for a wide range of applications is the assumption that the probabilities obtained from a statistical model are representative of the “true” probability that an event, or outcome, will occur. Unfortunately, for modern deep neural networks this is not the case; they are often observed to be poorly calibrated. Additionally, these deep learning approaches make use of large numbers of model parameters, motivating the use of Bayesian, or ensemble approximation, approaches to handle issues with parameter estimation. This paper explores the application of calibration schemes to deep ensembles, both from a theoretical perspective and empirically, on a standard image classification task, CIFAR-100. The underlying theoretical requirements for calibration, and associated calibration criteria, are first described. It is shown that well calibrated ensemble members will not necessarily yield a well calibrated ensemble prediction, and that if the ensemble prediction is well calibrated its performance cannot exceed the average performance of the calibrated ensemble members. On CIFAR-100 the impact of calibration on the ensemble prediction, and the associated calibration schemes, is evaluated. Additionally, the situation where multiple different topologies are combined is discussed.
[]
[ { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "arXiv preprint arXiv:2002.06470,", "year": 2020 }, { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D Jackel", "Mathew Monfort", "Urs Muller", "Jiakai Zhang" ], "title": "End to end learning for self-driving cars", "venue": "arXiv preprint arXiv:1604.07316,", "year": 2016 }, { "authors": [ "Jochen Bröcker" ], "title": "Estimating reliability and resolution of probability forecasts through decomposition of the empirical score", "venue": "Climate dynamics,", "year": 2012 }, { "authors": [ "Tilmann Gneiting", "Fadoua Balabdaoui", "Adrian E Raftery" ], "title": "Probabilistic forecasts, calibration and sharpness", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2007 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": null, "year": 2017 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": null, "year": 2019 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Xiaoqian Jiang", "Melanie Osl", "Jihoon Kim", "Lucila Ohno-Machado" ], "title": "Calibrating predictive model estimates to support personalized medicine", "venue": "Journal of the American Medical Informatics Association,", "year": 2012 }, { "authors": [ "Volodymyr Kuleshov", "Percy S Liang" ], "title": "Calibrated structured prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Meelis Kull", "Telmo Silva Filho", "Peter Flach" ], "title": "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Meelis Kull", "Miquel Perello Nieto", "Markus Kängsepp", "Telmo Silva Filho", "Hao Song", "Peter Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Sunita Sarawagi", "Ujjwal Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], 
"title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Gongbo Liang", "Yu Zhang", "Nathan Jacobs" ], "title": "Neural network calibration for medical imaging classification using dca regularization", "venue": "In ICML UDL,", "year": 2020 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory F Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Proceedings of the... AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Khanh Nguyen", "Brendan O’Connor" ], "title": "Posterior calibration and exploratory analysis for natural language processing models", "venue": null, "year": 2015 }, { "authors": [ "Alexandru Niculescu-Mizil", "Rich Caruana" ], "title": "Predicting good probabilities with supervised learning", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Jeremy Nixon", "Michael W Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": "In CVPR Workshops,", "year": 2019 }, { "authors": [ "John Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "Advances in large margin classifiers,", "year": 1999 }, { "authors": [ "Adrian E Raftery", "Tilmann Gneiting", "Fadoua Balabdaoui", "Michael Polakowski" ], "title": "Using bayesian model averaging to calibrate forecast ensembles", "venue": "Monthly weather review,", "year": 2005 }, { "authors": [ "Rahul Rahaman", "Alexandre H Thiery" ], "title": "Uncertainty quantification and deep ensembles", "venue": "arXiv preprint arXiv:2007.08792,", "year": 2020 }, { "authors": [ "Asa Cooper Stickland", "Iain Murray" ], "title": "Diverse ensembles improve calibration", "venue": "ICML 2020 workshop on Uncertainty Robustness in Deep Learning,", "year": 2020 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Confidence-calibrated adversarial training: Generalizing to unseen attacks", "venue": "ICML 2020 workshop on Uncertainty Robustness in Deep Learning,", "year": 2020 }, { "authors": [ "Gia-Lac Tran", "Edwin V Bonilla", "John Cunningham", "Pietro Michiardi", "Maurizio Filippone" ], "title": "Calibrating deep convolutional gaussian processes", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Juozas Vaicenavicius", "David Widmann", "Carl Andersson", "Fredrik Lindsten", "Jacob Roll", "Thomas B Schön" ], "title": "Evaluating model calibration in classification", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Yeming Wen", "Ghassen Jerfel", "Rafael Muller", "Michael W Dusenberry", "Jasper Snoek", "Balaji Lakshminarayanan", "Dustin Tran" ], "title": "Improving calibration of batchensemble with data augmentation", "venue": "ICML 2020 workshop on Uncertainty Robustness in Deep Learning,", "year": 2020 }, { "authors": [ "David Widmann", "Fredrik Lindsten", "Dave Zachariah" ], "title": "Calibration tests in multi-class classification: A unifying framework", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "In 
Icml,", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Transforming classifier scores into accurate multiclass probability estimates", "venue": "In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2002 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": null, "year": 2018 }, { "authors": [ "Wenliang Zhong", "James T Kwok" ], "title": "Accurate probability calibration for multiple classifiers", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence. Citeseer,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning approaches achieve state-of-the-art performance in a wide range of applications, including image classification. However, these networks tend to be overconfident in their predictions, they often exhibit poor calibration. A system is well calibrated, if when the system makes a prediction with probability of 0.6 then 60% of the time that prediction is correct. Calibration is very important in deploying system, especially in risk-sensitive tasks, such as medicine (Jiang et al., 2012), auto-driving (Bojarski et al., 2016), and economics (Gneiting et al., 2007). It was shown by Niculescu-Mizil & Caruana (2005) that shallow neural networks are well calibrated. However, Guo et al. (2017) found that more complex neural network model with deep structures do not exhibit the same behaviour. This work motivated recent research into calibration for general deep learning systems. Previous research has mainly examined calibration based on samples from the true data distribution {x(i), y(i)}Ni=1 ∼ p(x,ω), y(i) ∈ {ω1, ..., ωK} (Zadrozny & Elkan, 2002; Vaicenavicius et al., 2019). This analysis relies on the limiting behaviour as N → +∞ to define a well calibrated system\nP(y = ŷ|P(ŷ|x;θ) = p) = p ⇐⇒ lim N→+∞ ∑ i∈Spj δ(y(i), ŷ(i)) |Spj | = p (1)\nwhere Spj = {i|P(ŷ(i) = j|x(i);θ) = p, i = 1, ..., N} and ŷ(i) the model prediction for x(i). δ(s, t) = 1 if s = t, otherwise 0. However, Eq. (1) doesn’t explicitly reflect the relation between P(y = ŷ|P(ŷ|x;θ) = p) and the underlying data distribution p(x, y). In this work we examine this explicit relationship and use it to define a range of calibration evaluation criteria, including the standard sample-based criteria.\nOne issue with deep-learning approaches is the large number of model parameters associated with the networks. Deep ensembles (Lakshminarayanan et al., 2017) is a simple, effective, approach for handling this problem. It has been found to improve performance, as well as allowing measures of uncertainty. In recent literature there has been “contradictory” empirical observations about the relationship between the calibration of the members of the ensemble and the calibration of the final\nensemble prediction (Rahaman & Thiery, 2020; Wen et al., 2020). In this paper, we examine the underlying theory and empirical results relating to calibration with ensemble methods. We found, both theoretically and empirically, that ensembling multiple calibrated models decreases the confidence of final prediction, resulting in an ill-calibrated ensemble prediction. To address this, strategies to calibrate the final ensemble prediction, rather than individual members, are required. Additionally we empiricaly examine the situation where the ensemble is comprised of models with different topologies, and resulting complexity/performance, requiring non-uniform ensemble averaging.\nIn this study, we focus on post-hoc calibration of ensemble, based on temperature annealing. Guo et al. (2017) conducted a thorough comparison of various existing post-hoc calibration methods and found that temperature scaling was a simple, fast, and often highly effective approach to calibration. However, standard temperature scaling acts globally for all regions of the input samples, i.e. all logits are scaled towards one single direction, either increasing or decreasing the distribution entropy. To address this constraint, that may hurt some legitimately confident predictions, we investigate the effect of region-specific temperatures. 
Empirical results demonstrate the effectiveness of this approach, with minimal increase in the number of calibration parameters." }, { "heading": "2 RELATED WORK", "text": "Calibration is inherently related to uncertainty modeling. Two of the most important scopes of calibration are calibration evaluation and calibration system construction. One method to assessing calibration is the reliability diagram (Vaicenavicius et al., 2019; Bröcker, 2012). Though informative, It is still desirable to have an overall metric. Widmann et al. (2019) investigate different distances in the probability simplex for estimating calibration error. Nixon et al. (2019) point out the problem of fixed spaced binning scheme, bins with few predictions may have low-bias but high-variance measurement. Calibration error measure adaptive to dense populated regions have also been proposed (Nixon et al., 2019). Vaicenavicius et al. (2019) treated the calibration evaluation as hypotheses tests. All these approaches examine calibration criteria from a sample-based perspective, rather than as a function of the underlying data distribution which is used in the thoretical analysis in this work.\nThere are two main approaches to calibrating systems. The first is to recalibrate the uncalibrated systems with post-hoc calibration mapping, e.g. Platt scaling (Platt et al., 1999), isotonic regression (Zadrozny & Elkan, 2002), Dirichlet calibration (Kull et al., 2017; 2019). The second is to directly build calibrated systems, via: (i) improving model structures, e.g. deep convolutional Gaussian processes (Tran et al., 2019); (ii) data augmentation, e.g. adversarial samples (Hendrycks & Dietterich, 2019; Stutz et al., 2020) or Mixup (Zhang et al., 2018); (iii) minimize calibration error during training (Kumar et al., 2018). Calibration based on histogram binning (Zadrozny & Elkan, 2001), Bayesian binning (Naeini et al., 2015) and scaling binning (Kumar et al., 2019) are related to our proposed dynamic temperature scaling, in the sense that the samples are divided into regions and separate calibration mapping are applied. However, our method can preserve the property that all predictions belonging to one sample sum to 1. The region-based classifier by Kuleshov & Liang (2015) is also related to our approach.\nEnsemble diversity has been proposed for improved calibration (Raftery et al., 2005; Stickland & Murray, 2020). In Zhong & Kwok (2013), ensembles of SVM, logistic regressor, boosted decision trees are investigated, where the combination weights of calibrated probabilities is based on AUC of ROC. However, AUC is not comparable between different models as discussed in Ashukha et al. (2020). In this work we investigate the combination of different deep neural network structures. The weights assigned to the probabilities is optimised using a likelihood-based metric." }, { "heading": "3 CALIBRATION FRAMEWORK", "text": "Let X ⊆ Rd be the d-dimensional input space and Y = {ω1, ..., ωK} be the discrete output space consisting of K classes. The true underlying joint distribution for the data is p(x, ω) = P(ω|x)p(x),x ∈ X , ω ∈ Y . Given some training data D ∼ p(x,ω), a model θ is trained to predict the distribution P(ω|x;θ) given observation features. For a calibrated system the average predicted posterior probability should equate to the average posterior of the underlying distribution for a specific probability region. Two extreme cases will always yield perfect calibration. 
First when the predictions that are the same, and equal to the class prior for all inputs, P(ωj |x;θ) = P(ωj). Sec-\nond the minimum Bayes’ risk classifier is obtained, P(ωj |x;θ) = p(x,ωj)∑K k=1 p(x,ωk)\n. Note that perfect calibration doesn’t imply high accuracy, as shown by the system predicting the prior distribution." }, { "heading": "3.1 DISTRIBUTION CALIBRATION", "text": "A system is calibrated if the predictive probability values can accurately indicate the portion of correct predictions. Perfect calibration for a system that yields P(ω|x;θ) when the training and test data are obtained form the joint distribution p(x,ω) can be defined as:∫\nx∈Rpj (θ, ) P(ωj |x;θ)p(x)dx = ∫ x∈Rpj (θ, ) P(ωj |x)p(x)dx ∀p, ωj , → 0 (2)\nRpj (θ, ) = { x ∣∣∣|P(ωj |x;θ)− p| ≤ ,x ∈ X} (3)\nRpj (θ, ) denotes the region of input space where the system predictive probability for class ωj is sufficiently close, within error of , to the probability p. A perfectly calibrated system will satisfy this expression for all regions, the expected predictive probability (left side of Eq. (2)) is identical to the expected correctness, i.e., expected true probability (right side of Eq. (2)).\nRpj (θ, ) defines the region in which calibration is defined. For top-label calibration, only the most probable class is considered and the region defined in Eq. (3) is modified to reflect this:\nR̃pj (θ, ) = R p j (θ, ) ∩ { x ∣∣∣ωj = argmax\nω P(ω|x;θ),x ∈ X\n} (4)\nEq. (4) is a strict subset of Eq. (3). As the two calibration regions are different between calibration and top-label calibration, perfect calibration doesn’t imply top-label calibration, and vise versa. A simple illustrative example of this property is given in A.3. Binary classification, K = 2, is an exception to this general rule, as the regions for top-label calibration are equivalent to those for perfect calibration, i.e. R̃pj (θ, ) = R p j (θ, ). Hence, perfect calibration is equivalent to top-label calibration for binary classification (Nguyen & O’Connor, 2015).\nEq. (2) defines the requirements for a perfectly calibrated system. It is useful to define metrics that allow how close a system is to perfect calibration to be assessed. Let the region calibration error be:\nCpj (θ, ) = ∫ x∈Rpj (θ, ) (P(ωj |x;θ)− P(ωj |x))p(x)dx (5)\nThis then allows two forms of expected calibration losses to be defined\nACE(θ) = 1\nK ∫ 1 0 ∣∣∣∣∣ K∑ j=1 Cpj (θ, ) ∣∣∣∣∣dp; ACCE(θ) = 1K K∑ j=1 ∫ 1 0 ∣∣∣Cpj (θ, )∣∣∣dp (6) All Calibration Error (ACE) only considers the expected calibration error for a particular probability, irrespective of the class associated with the data1 (Hendrycks et al., 2019). Hence, All Class Calibration Error (ACCE) that requires that all classes minimises the calibration error for all probabilities is advocated by Kull et al. (2019); Kumar et al. (2019). Nixon et al. (2019) propose the Thresholded Adaptive Calibration Error (TACE) to consider only the prediction larger than a threshold, and it can be described as a special case of ACCE by replacing the integral range. Naeini et al. (2015) also propose to only consider the region with maximum error.\nThough measures such as ACE and ACCE require consistency of the expected posteriors with the true distribution, for tasks with multiple classes, particularly large numbers of classes, the same weight is given to the ability of the model to assign low probabilities to highly unlikely classes, and high probabilities to the “correct” class. 
For systems with large numbers of classes this can yield artificially low scores. To address this problem it is more common to replace the regions in Eq. (5) with the top-label regions in Eq. (4), to give a top-label calibration error C̃pj (θ, ). This then yields\n1In this section the references given refer to the sample-based equivalent versions of the distributional calibration expressions in this paper using the same concepts, rather than identical expressions.\nthe expected top-label equivalents of ACCE and ACE, Expected Class Calibration Error (ECCE) and Expected Calibration Error (ECE). Here for example ECE by Guo et al. (2017) is expressed as\nECE(θ) = ∫ 1 0 ∣∣∣∣∣ K∑ j=1 ∫ x∈R̃pj (θ, ) (P(ωj |x;θ)− P(ωj |x))p(x)dx ∣∣∣∣∣dp (7) =\n∫ 1 0 O(θ, p)|Conf(θ, p)− Acc(θ, p)|dp (8)\nwhereO(θ, p) = ∑K j=1 ∫ x∈R̃pj (θ, ) p(x)dx is the fraction observations that are assigned to that particular probability and Conf(θ, p) and Acc(θ, p) are the ideal distribution accuracy and confidences from the model for that probability. For more details see the appendix." }, { "heading": "3.2 SAMPLE-BASED CALIBRATION", "text": "Usually only samples from the true joint distribution are available. Any particular training set is drawn from the distribution to yield\nD = { {x(i), y(i)} }N i=1 , {x(i), y(i)} ∼ p(x,ω), y(i) ∈ {ω1, ..., ωK}.\nThe region defined in Eq. (3) is now changed to be indices of the samples: Spj (θ, ) = { i ∣∣∣|P(ωj |x(i);θ)− p| ≤ ,x(i) ∈ D}, (9)\nThe sample-based version of “perfect” calibration in Eq. (2) can then be expressed as:\n1 |Spj (θ, )| ∑\ni∈Spj (θ, )\nP(ωj |x(i);θ) = 1 |Spj (θ, )| ∑\ni∈Spj (θ, )\nδ(y(i), ωj), ∀p, ωj , → 0 (10)\nas N → ∞. When considering finite data, in this case N samples, it is important to set appropriately. Setting different yields different regions and leads to different calibration results (Kumar et al., 2019). Thus it is important to specify when defining calibration for a system.\nSimilarly, the distribution form of top-label calibration can be written in terms of samples as Eq. (4), with different regions considered:\nS̃pj (θ, ) = S p j (θ, ) ∩ { i ∣∣∣ωj = argmax\nω P(ω|x(i);θ),x(i) ∈ D\n} (11)\nThe sample-based calibration losses in region Spj (θ, ) can be defined based on Eq. (10). For example ACE in Eq. (6) can be expressed in its sample-based form (Hendrycks et al., 2019)\nACE(θ, ) = 1\nNK ∑ p∈P( ) ∣∣∣∣∣ K∑ j=1 ∑ i∈Spj (θ, ) ( P(ωj |x(i);θ)− δ(y(i), ωj) )∣∣∣∣∣ (12) where P( ) = {p|p = min{1, (2z − 1) }, z ∈ Z+}, and Z+ is the set of positive integers. The measure of ECE relating to Eq. (7), which only considers the top regions in Eq. (11) can be defined as Guo et al. (2017)\nECE(θ, ) = 1\nN ∑ p∈P( ) ∣∣∣∣∣ K∑ j=1 ∑ i∈S̃pj (θ, ) ( P(ωj |x(i);θ)− δ(y(i), ωj) )∣∣∣∣∣ (13) =\n∑ p∈P( )\n(∑K j=1 |S̃ p j (θ, )| ) N ∣∣∣∣∣Conf(θ, p)− Acc(θ, p) ∣∣∣∣∣ (14)\nIt should be noted that for a finite number of samples, the regions Spj (θ, ) and S̃ p j (θ, ) derived from the samples can be different from the theoretical regions, leading to difference between theoretical calibration error measures and the values estimated from the finite samples. This is also referred to\nas “estimator randomness” by Vaicenavicius et al. (2019). An example is given in A.3 to illustrate this mismatch.\nThe simplest region specification for calibration is to set = 1. 
In this case ($\epsilon = 1$), $|\mathcal{S}_j^p(\theta, 1)| = N$, and the “minimum” perfect calibration requirement for a system with parameters $\theta$ becomes

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\omega_j|x^{(i)};\theta) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \omega_j), \quad \forall \omega_j \qquad (15)$$

This is also referred to as global calibration in this paper. Similarly, global top-label calibration can be defined as

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\hat{y}^{(i)}|x^{(i)};\theta) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}), \quad \hat{y}^{(i)} = \arg\max_{\omega} \mathrm{P}(\omega|x^{(i)};\theta) \qquad (16)$$" }, { "heading": "4 ENSEMBLE CALIBRATION", "text": "An interesting question when using ensembles is whether calibrating the ensemble members is sufficient to ensure calibrated predictions. Initially the ensemble model will be viewed as an approximation to Bayesian parameter estimation. Given training data $\mathcal{D}$, the prediction of class $\omega_j$ is:

$$\mathrm{P}(\omega_j|x^*, \mathcal{D}) = \mathbb{E}_{\theta \sim p(\theta|\mathcal{D})}\big[\mathrm{P}(\omega_j|x^*;\theta)\big] = \int \mathrm{P}(\omega_j|x^*;\theta)\, p(\theta|\mathcal{D})\, d\theta \approx \mathrm{P}(\omega_j|x^*;\Theta) = \frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\omega_j|x^*;\theta^{(m)}); \quad \theta^{(m)} \sim p(\theta|\mathcal{D}) \qquad (17)$$

where Eq. (17) is an ensemble, Monte-Carlo, approximation to the full Bayesian integration, with $\theta^{(m)}$ the $m$-th ensemble member parameters in the ensemble $\Theta$. The predictions of the ensemble and the members are $\hat{y}^*_m = \arg\max_{\omega}\{\mathrm{P}(\omega|x^*;\theta^{(m)})\}$ and $\hat{y}^*_{\mathrm{E}} = \arg\max_{\omega}\big\{\frac{1}{M}\sum_{m=1}^{M}\mathrm{P}(\omega|x^*;\theta^{(m)})\big\}$." }, { "heading": "4.1 THEORETICAL ANALYSIS", "text": "For ensemble methods it is only important that the final ensemble prediction, $\hat{y}_{\mathrm{E}}$, is well calibrated, rather than the individual ensemble members. It is useful to examine the relationship between this ensemble prediction and the predictions from the individual models when the ensemble members are calibrated. Considering a particular top-label calibration region for the ensemble prediction, $\tilde{\mathcal{R}}^p(\Theta, \epsilon)$, related to Eq. (4), the following expression is true

$$\int_{x \in \tilde{\mathcal{R}}^p(\Theta, \epsilon)} \frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\hat{y}_{\mathrm{E}}|x;\theta^{(m)})\, p(x)\, dx \;\leq\; \int_{x \in \tilde{\mathcal{R}}^p(\Theta, \epsilon)} \frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\hat{y}_m|x;\theta^{(m)})\, p(x)\, dx \qquad (18)$$

where the ensemble region is defined as $\tilde{\mathcal{R}}^p(\Theta, \epsilon) = \big\{x \,\big|\, |\mathrm{P}(\hat{y}_{\mathrm{E}}|x;\Theta) - p| \leq \epsilon,\; x \in \mathcal{X}\big\}$. For all regions $\tilde{\mathcal{R}}^p(\Theta, \epsilon)$ the ensemble is no more confident than the average confidence of the individual member predictions. This puts bounds on the ensemble prediction performance if the resulting ensemble prediction is top-label calibrated and all ensemble members yield the same region $\tilde{\mathcal{R}}^p(\Theta, \epsilon)$. Here

$$\int_{x \in \tilde{\mathcal{R}}^p(\Theta, \epsilon)} \mathrm{P}(\hat{y}_{\mathrm{E}}|x;\Theta)\, p(x)\, dx = \int_{x \in \tilde{\mathcal{R}}^p(\Theta, \epsilon)} \mathrm{P}(\hat{y}_{\mathrm{E}}|x)\, p(x)\, dx \qquad (19)$$

From Eq. (18) the left-hand side of this expression, the ensemble prediction confidence, cannot be greater than the average ensemble member confidence. If the regions associated with the ensemble prediction and members are the same, then for top-label calibrated members this average confidence is the same as the average ensemble member accuracy. Furthermore, if the ensemble prediction is top-label calibrated, then this average ensemble member accuracy bounds the ensemble prediction accuracy. Under these conditions ensembling the members yields no performance gains.

The above bound holds under the assumption that the members are calibrated on the same regions. Proposition 3 in the Appendix describes one trivial case when all members are calibrated on the same regions. Another case is calibration on global regions. As shown in Proposition 1, at the global level, ensemble accuracy is still bounded.

Proposition 1. If all members and the corresponding ensemble are globally top-label calibrated, the ensemble performance is no better than the average performance of the members:

$$\frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_{\mathrm{E}}) \;\leq\; \frac{1}{M}\sum_{m=1}^{M}\Big(\frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_m)\Big) \qquad (20)$$

Proof.
If all members and the ensemble are globally top-label calibrated,

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\hat{y}^{(i)}_m|x^{(i)};\theta^{(m)}) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_m), \quad m = 1, \ldots, M \qquad (21)$$

$$\frac{1}{N}\sum_{i=1}^{N}\Big(\frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\hat{y}^{(i)}_{\mathrm{E}}|x^{(i)};\theta^{(m)})\Big) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_{\mathrm{E}}) \qquad (22)$$

By definition,

$$\mathrm{P}(\hat{y}^{(i)}_{\mathrm{E}}|x^{(i)};\theta^{(m)}) \leq \mathrm{P}(\hat{y}^{(i)}_m|x^{(i)};\theta^{(m)}) \qquad (23)$$

Hence, averaging Eq. (23) over samples and members, and substituting Eqs. (21) and (22),

$$\frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_{\mathrm{E}}) \;\leq\; \frac{1}{M}\sum_{m=1}^{M}\Big(\frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_m)\Big) \qquad (24)$$

However, this is not true for all-label calibration. In both cases, all-label calibrated members always yield an all-label calibrated ensemble, whether or not the ensemble accuracy exceeds the mean accuracy of the members (Example 2 in the Appendix gives an illustration on a synthetic dataset).

Proposition 2. If all members are globally all-label calibrated, then the overall ensemble is globally all-label calibrated.

Proof. If all members are globally all-label calibrated, then

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\omega_j|x^{(i)};\theta^{(m)}) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \omega_j), \quad \forall \omega_j, \; m = 1, \ldots, M \qquad (25)$$

Hence,

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\omega_j|x^{(i)};\Theta) = \frac{1}{M}\sum_{m=1}^{M}\Big(\frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \omega_j)\Big) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \omega_j) \qquad (26)$$

In general the regions are not the same, and the ensemble accuracy is not bounded in the above way. However, note that global level calibration is the minimum requirement for calibration. The above discussion based on regions still sheds light on the question of whether the members should be calibrated or not, though the final theoretical answer is still absent. It should also be noted that global all-label calibration does not imply global top-label calibration, because the regions considered are different (as illustrated by Example 1 in the Appendix). For the discussion so far, the ensemble members are combined with uniform weights, motivated from a Bayesian approximation perspective. When, for example, multiple different topologies are used as members of the ensemble, a non-uniform averaging of the members, reflecting the model complexities and performance, may be useful. Propositions 1 and 2 will still apply.
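Before moving to calibration schemes, a small numerical illustration of the inequality driving Proposition 1 may be useful. The sketch below (Python/NumPy) uses Dirichlet draws as a stand-in for the predictive distributions of M trained members (an assumption made purely for illustration) and checks that, per Eq. (23) averaged over members, the ensemble's confidence in its own prediction never exceeds the members' average top-label confidence:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 10_000, 10, 5

# Dirichlet samples stand in for the softmax outputs of M trained
# networks on N inputs; shape (M, N, K).
members = rng.dirichlet(alpha=np.full(K, 0.3), size=(M, N))
ensemble = members.mean(axis=0)            # uniform combination, Eq. (17)

y_ens = ensemble.argmax(axis=1)            # ensemble top-label prediction

# Ensemble probability assigned to its own prediction, per sample.
p_ens = ensemble[np.arange(N), y_ens]
# Each member's confidence in its own (possibly different) prediction.
p_members_top = members.max(axis=2)        # shape (M, N)

# Per-sample version of the bound: p_ens <= mean_m max_w P(w|x; theta_m).
assert np.all(p_ens <= p_members_top.mean(axis=0) + 1e-12)

print("mean ensemble confidence    :", p_ens.mean())
print("mean member top-label conf. :", p_members_top.mean())
```

Whenever the members disagree the gap is strict, so if the members are calibrated the ensemble must be under-confident unless its accuracy falls to match; this is the content of Proposition 1.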
" }, { "heading": "4.2 TEMPERATURE ANNEALING FOR ENSEMBLE CALIBRATION", "text": "Calibrating ensembles can be performed using a function $f \in \mathcal{F}$, $f: [0, 1] \to [0, 1]$, with some parameters $t$, for scaling probabilities. There are two modes for calibrating an ensemble:

Pre-combination mode: the function is applied to the probabilities predicted by the members, prior to combining the members to obtain the ensemble prediction, using a set of calibration parameters $\mathcal{T}$.

$$\mathrm{P}_{\mathrm{pre}}(\hat{y}_{\mathrm{E}}|x;\Theta,\mathcal{T}) = \frac{1}{M}\sum_{m=1}^{M} f\big(\mathrm{P}(\hat{y}_{\mathrm{E}}|x;\theta^{(m)}),\, t^{(m)}\big) \qquad (27)$$

Post-combination mode: the function is applied to the ensemble predicted probability after combining the members’ predictions.

$$\mathrm{P}_{\mathrm{post}}(\hat{y}_{\mathrm{E}}|x;\Theta,t) = f\Big(\Big(\frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\hat{y}_{\mathrm{E}}|x;\theta^{(m)})\Big),\, t\Big) \qquad (28)$$

There are many functions for transforming predicted probabilities in the calibration literature, e.g. histogram binning, Platt scaling and temperature annealing. However, histogram binning should not be adopted in the pre-combination mode as the scaling function $f$ for calibrating a multi-class ensemble, as the transformed values may not yield a valid PMF.

As shown in Guo et al. (2017), temperature scaling is a simple, effective option for the mapping function $f$, which scales the logit values associated with the posterior by a temperature $t$: $f(\mathbf{z}; t) = \exp\{\mathbf{z}/t\}/\sum_j \exp\{z_j/t\}$. Here a single temperature is used for scaling the logits of all samples. This leads to the problem that the entropy of the predictions for all regions is either increased or decreased. From Eq. (2) the temperature can be made region specific:

$$f_{\mathrm{dyn}}(\mathbf{z}; \mathbf{t}) = \frac{\exp\{\mathbf{z}/t_r\}}{\sum_j \exp\{z_j/t_r\}}, \quad \text{if } \max_i \frac{\exp\{z_i\}}{\sum_j \exp\{z_j\}} \in \mathcal{R}_r \qquad (29)$$

To determine the optimal set of temperatures, the samples in the validation set are divided into $R$ regions based on the ensemble predictions (e.g. $\mathcal{R}_1 = [0, 0.3)$, $\mathcal{R}_2 = [0.3, 0.6)$, and $\mathcal{R}_3 = [0.6, 1]$). Each region has an individual temperature for scaling, $\{\mathcal{R}_r, t_r\}_{r=1}^{R}$.
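A minimal sketch of this region-wise (dynamic) scaling of Eq. (29) is given below in Python/NumPy. The region boundaries follow the example above, while the temperature values are illustrative placeholders; in practice the set $\{\mathcal{R}_r, t_r\}$ is optimised on the validation data by minimising ECE. Note that, unlike histogram binning, every output row remains a valid PMF:

```python
import numpy as np

def temperature_scale(logits, t):
    """Standard temperature scaling: softmax(logits / t)."""
    z = logits / t
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def dynamic_temperature_scale(logits, regions, temps):
    """Region-wise temperature scaling of Eq. (29).

    Each sample is routed by its uncalibrated maximum softmax probability
    to a region r, then rescaled with that region's temperature t_r.
    `regions` is a list of (lo, hi) probability intervals covering [0, 1];
    `temps` holds the matching temperatures."""
    base = temperature_scale(logits, 1.0)
    conf = base.max(axis=1)
    out = np.empty_like(base)
    for (lo, hi), t in zip(regions, temps):
        mask = (conf >= lo) & (conf < hi)
        out[mask] = temperature_scale(logits[mask], t)
    return out

# Example with the three regions quoted in Section 4.2; the temperatures
# here are placeholders, not tuned values.
regions = [(0.0, 0.3), (0.3, 0.6), (0.6, 1.0 + 1e-9)]
temps = [0.9, 1.1, 1.4]

rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 100))        # 8 samples, 100 classes
probs = dynamic_temperature_scale(logits, regions, temps)
assert np.allclose(probs.sum(axis=1), 1.0)    # rows remain valid PMFs
```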
" }, { "heading": "4.3 EMPIRICAL RESULTS", "text": "Experiments were conducted on CIFAR-100 (and CIFAR-10 in A.4). The data partition was 45,000/5,000/10,000 images for train/validation/test. We train LeNet (LEN) (LeCun et al., 1998), DenseNet 100 and 121 (DSN100, DSN121) (Huang et al., 2017) and Wide ResNet 28 (RSN) (Zagoruyko & Komodakis, 2016) following the original training recipes in each paper (more details in A.4). The results presented are slightly lower than those in the original papers, as 5,000 images were held out to enable calibration parameter optimisation.

Figure 1 examines the empirical performance of ensemble calibration on the CIFAR-100 test set using the three trained networks. The top row shows that, with appropriate temperature scaling, the members are calibrated on different regions (because otherwise the accuracy values would be the same). The middle row shows the ECE of the ensemble members and the ensemble prediction at different temperatures. The optimal calibration temperatures for the ensemble prediction are consistently smaller than those associated with the ensemble members. This indicates that the ensemble predictions are less confident than those of the members, as stated in Eq. (23). The bottom row of figures shows the reliability curves when the ensemble members are calibrated with optimal temperature values, and the resulting combination. It is clear that calibrating the ensemble members, using temperature, does not yield a calibrated ensemble prediction. Furthermore, for all models the ensemble prediction is less confident than it should be; the line is above the diagonal. As discussed in Proposition 1, this is necessary, or the ensemble prediction would be no better than the average member, which is clearly not the case for the performance plots in the top row. This ensemble performance is relatively robust to poorly calibrated ensemble members, with consistent performance over a wide range of temperatures.

Table 1 shows the calibration performance using three temperature scaling methods: pre-, post- and dynamic post-combination. The temperatures are optimized to minimize ECE (Liang et al., 2020) on the validation data. We use the unbiased quadratic version of the squared kernel calibration error (SKCE), with a Laplacian kernel and kernel bandwidth chosen by the median heuristic, as one of the calibration error metrics (Widmann et al., 2019). All three methods effectively improve the ensemble prediction calibration, with the dynamic approach yielding the best performance. We further investigate the impact of the number of regions on the dynamic approach, as shown in Figure 3. It can be found that increasing the number of regions tends to improve the calibration performance, while requiring more parameters.

Finally, for the topology ensemble, weights were optimised using either maximum likelihood (Max LL) or area under curve (AUC) (Zhong & Kwok, 2013) (results in A.4). In Figure 2, the ensemble of calibrated structures is shown to be uncalibrated, with reliability curves typically slightly above the diagonal line. When the ensemble prediction is calibrated it can be seen that the calibration error for the ensemble prediction is lower than the individual calibration errors in Table 1 (“post” lines)." }, { "heading": "5 CONCLUSIONS", "text": "State-of-the-art deep learning models often exhibit poor calibration performance. In this paper two aspects of calibration for these models are investigated: the theoretical definition of calibration and associated attributes, for both general and top-label calibration; and the application of calibration to ensemble methods that are often used in deep-learning approaches for improved performance and uncertainty estimation. It is shown that calibrating the members of the ensemble is not sufficient to ensure that the ensemble prediction is itself calibrated. The resulting ensemble predictions will be under-confident, requiring calibration functions to be optimised for the ensemble prediction, rather than the ensemble members. These theoretical results are backed up by empirical analysis on CIFAR-100 deep-learning models, with ensemble performance being robust to poorly calibrated ensemble members but requiring calibration even with well calibrated members." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 THEORETICAL PROOF", "text": "Proposition 3. If all members are calibrated and the regions are the same, i.e., for different members $\theta^{(m)}$ and $\theta^{(m')}$

$$\mathcal{R}_j^p(\theta^{(m)}, \epsilon) = \mathcal{R}_j^p(\theta^{(m')}, \epsilon) \quad \forall p, \omega_j, \; \epsilon \to 0$$

then the ensemble is also calibrated on the same regions:

$$\int_{x \in \mathcal{R}_j^p(\Theta, \epsilon)} \mathrm{P}(\omega_j|x;\Theta)\, p(x)\, dx = \int_{x \in \mathcal{R}_j^p(\Theta, \epsilon)} \mathrm{P}(\omega_j|x)\, p(x)\, dx, \quad \forall p, \omega_j, \; \epsilon \to 0$$

Proof. If $\mathcal{R}_j^p(\theta^{(m)}, \epsilon) = \mathcal{R}_j^p(\theta^{(m')}, \epsilon)$ for all $p, \omega_j$ as $\epsilon \to 0$, the ensemble region is also the same:

$$\mathcal{R}_j^p(\Theta, \epsilon) = \bigg\{x \,\bigg|\, \Big|\frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\omega_j|x;\theta^{(m)}) - p\Big| \leq \epsilon\bigg\} = \mathcal{R}_j^p(\theta^{(m)}, \epsilon) \quad \forall p, \omega_j, \; \epsilon \to 0 \qquad (30)$$

$$\begin{aligned} \int_{x \in \mathcal{R}_j^p(\Theta, \epsilon)} \frac{1}{M}\sum_{m=1}^{M} \mathrm{P}(\omega_j|x;\theta^{(m)})\, p(x)\, dx &= \frac{1}{M}\sum_{m=1}^{M} \int_{x \in \mathcal{R}_j^p(\theta^{(m)}, \epsilon)} \mathrm{P}(\omega_j|x;\theta^{(m)})\, p(x)\, dx \\ &= \frac{1}{M}\sum_{m=1}^{M} \int_{x \in \mathcal{R}_j^p(\theta^{(m)}, \epsilon)} \mathrm{P}(\omega_j|x)\, p(x)\, dx \\ &= \int_{x \in \mathcal{R}_j^p(\theta^{(m)}, \epsilon)} \mathrm{P}(\omega_j|x)\, p(x)\, dx = \int_{x \in \mathcal{R}_j^p(\Theta, \epsilon)} \mathrm{P}(\omega_j|x)\, p(x)\, dx \end{aligned}$$

Proposition 4. When the number of classes $K > 2$, if all members are globally top-label calibrated, then the ensemble is not necessarily globally top-label calibrated.

Proof. Assume that globally top-label calibrated members imply a globally top-label calibrated ensemble; that is, given

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\hat{y}^{(i)}_m|x^{(i)};\theta^{(m)}) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_m), \quad m = 1, \ldots, M \qquad (31)$$

the following is true

$$\frac{1}{N}\sum_{i=1}^{N} \mathrm{P}(\hat{y}^{(i)}_{\mathrm{E}}|x^{(i)};\Theta) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_{\mathrm{E}}) \qquad (32)$$

If $\exists\, n, \tilde{m}$ such that $\hat{y}^{(n)}_{\mathrm{E}} \neq \hat{y}^{(n)}_{\tilde{m}}$, then it is possible to write

$$\mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\Theta) = \frac{1}{M}\sum_{m \neq \tilde{m}} \mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\theta^{(m)}) + \frac{1}{M}\mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\theta^{(\tilde{m})}) \qquad (33)$$

For top-label calibration there are no constraints on the second term in Eq. (33) as it is not the top label for model $\theta^{(\tilde{m})}$. Thus there is a set of models that satisfy the top-label calibration constraints for member $\tilde{m}$ and only need to satisfy the following constraints

$$0 \leq \mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\theta^{(\tilde{m})}) < \mathrm{P}(\hat{y}^{(n)}_{\tilde{m}}|x^{(n)};\theta^{(\tilde{m})}) \leq 1 \qquad (34)$$

and the standard sum-to-one constraint over all classes. Consider replacing member $\tilde{m}$ of the ensemble with a member having parameters $\tilde{\theta}^{(\tilde{m})}$, to yield $\tilde{\Theta}$, that satisfies

$$\arg\max_{\omega}\big\{\mathrm{P}(\omega|x^{(n)};\tilde{\theta}^{(\tilde{m})})\big\} = \arg\max_{\omega}\big\{\mathrm{P}(\omega|x^{(n)};\theta^{(\tilde{m})})\big\} = \hat{y}^{(n)}_{\tilde{m}} \qquad (35)$$

$$\mathrm{P}(\hat{y}^{(n)}_{\tilde{m}}|x^{(n)};\tilde{\theta}^{(\tilde{m})}) = \mathrm{P}(\hat{y}^{(n)}_{\tilde{m}}|x^{(n)};\theta^{(\tilde{m})}) \qquad (36)$$

$$\mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\tilde{\theta}^{(\tilde{m})}) = \mathrm{P}(\hat{y}^{(n)}_{\mathrm{E}}|x^{(n)};\theta^{(\tilde{m})}) + \tau \qquad (37)$$

where $\tau > 0$, the standard sum-to-one constraint is satisfied, and the predictions for all other samples are unaltered.
This results in the following constraints:\n$$\max_\omega \big\{ P(\omega|x^{(n)}; \tilde{\Theta}) \big\} = \max_\omega \big\{ P(\omega|x^{(n)};\Theta) \big\} = \hat{y}^{(n)}_E \quad (38)$$\n$$P(\hat{y}^{(n)}_E|x^{(n)};\Theta) < P(\hat{y}^{(n)}_E|x^{(n)}; \tilde{\Theta}) \quad (39)$$\nThe accuracies of the two ensembles $\Theta$ and $\tilde{\Theta}$ are the same from Eq. (38), but the probabilities associated with those predictions cannot be the same from Eq. (39), so both ensemble predictions cannot be calibrated: assuming that the ensemble prediction for $\Theta$ is calibrated,\n$$\frac{1}{N}\sum_{i=1}^{N} P(\hat{y}^{(i)}_E|x^{(i)}; \tilde{\Theta}) > \frac{1}{N}\sum_{i=1}^{N} P(\hat{y}^{(i)}_E|x^{(i)};\Theta) = \frac{1}{N}\sum_{i=1}^{N} \delta(y^{(i)}, \hat{y}^{(i)}_E) \quad (40)$$\nHence there are multiple values of $P(\hat{y}^{(n)}_E|x^{(n)};\Theta)$ for which all the models satisfy the top-label calibration constraints, but these cannot all be consistent with Eq. (40). For the situation where there is no sample or model with $\hat{y}^{(n)}_E \ne \hat{y}^{(n)}_{\tilde{m}}$, the predictions of all models for all samples are the same as the ensemble prediction, so by definition there can be no performance gain.\nA.2 GLOBAL GENERAL CALIBRATION AND TOP-LABEL CALIBRATION\nTo demonstrate the differences between global top-label calibration and global calibration, a set of ensemble member predictions was generated using Algorithm 1; this ensures that the predictions are perfectly calibrated. Since the member predictions are perfectly calibrated, the ensemble members will be globally calibrated. Figure 4 (a) shows the performance in terms of ACE of the ensemble prediction as the value of $\epsilon$ increases; note that when $\epsilon = 1$ this is a global calibration version of ACE. It can be seen that as $\epsilon$ increases ACE decreases, and for the global case it reduces to zero for the ensemble predictions, as the theory states.\nIn terms of top-label calibration, as the ensemble members are perfectly calibrated, they will again be globally top-label calibrated. This is illustrated in Figure 4 (b), where ECE is zero for all ensemble members. For top-label calibration the value of ECE does not decrease to zero as $\epsilon \to 1$, again as the theory states. This is because the underlying probability regions associated with each member of the ensemble are different. Hence, even for perfectly calibrated ensemble members, the ensemble prediction is not globally top-label calibrated.\nA.3 TOY DATASETS\nExample 1. In this example, we show the difference between all-label calibration and top-label calibration, which consider the different regions in Eq. (3) and Eq. (4).\nAssuming $p(x) \propto 1$, the whole input space $\mathcal{X}$ consists of three regions $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{R}_3$, with\n$$\int_{x\in\mathcal{R}_1} p(x)\,dx = \int_{x\in\mathcal{R}_2} p(x)\,dx = \int_{x\in\mathcal{R}_3} p(x)\,dx. \quad (41)$$\nThe corresponding system prediction $\hat{P}$ and the true distribution $P$ are (columns $\omega_1, \dots, \omega_4$; rows $\mathcal{R}_1, \mathcal{R}_2, \mathcal{R}_3$):\n$$\hat{P} = \big( P(\omega_j|x \in \mathcal{R}_r;\theta) \big) = \begin{pmatrix} 0.5 & 0.4 & 0.05 & 0.05 \\ 0.3 & 0.4 & 0.2 & 0.1 \\ 0.3 & 0.3 & 0.35 & 0.05 \end{pmatrix} \quad (42)$$\n$$P = \begin{pmatrix} 0.5 & 0.4-\tau & 0.05 & 0.05+\tau \\ 0.3-\tau & 0.4+\tau & 0.2 & 0.1 \\ 0.3+\tau & 0.3 & 0.35 & 0.05-\tau \end{pmatrix}, \quad \tau > 0 \quad (43)$$\nIt can be verified that\n$$\int_{x \in \mathcal{R}^p_j(\theta, \epsilon)} P(\omega_j|x;\theta)\,p(x)\,dx = \int_{x \in \mathcal{R}^p_j(\theta, \epsilon)} P(\omega_j|x)\,p(x)\,dx, \quad \forall \omega_j, p, \epsilon \to 0 \quad (44)$$\nHowever, when $p = 0.4$, $j = 2$,\n$$\int_{x \in \tilde{\mathcal{R}}^p_j(\theta, \epsilon)} P(\omega_j|x;\theta)\,p(x)\,dx \ne \int_{x \in \tilde{\mathcal{R}}^p_j(\theta, \epsilon)} P(\omega_j|x)\,p(x)\,dx, \quad \epsilon \to 0 \quad (45)$$\nExample 2. This example shows that a combination of calibrated members yields an uncalibrated ensemble. Algorithm 1 generates the true data distribution $p$ by sampling from a Dirichlet distribution with equal concentration parameters of 1. To generate the member predictions, the $N$ samples are randomly assigned to $N/b$ bins of size $b$. In each bin, the member predictions $\hat{p}$ are all set to the average of the true data distribution of the associated samples. This ensures that, for each region, Eq. (2) holds for each member.
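The bin-averaging construction just described is exactly what Algorithm 1 (reproduced below in this appendix) performs. As a companion, here is a compact runnable Python version; the reduced sample count and the seed are illustrative rather than the paper's settings, and Algorithm 2 differs only in averaging one-hot ground-truth labels instead of the sampled distributions.

```python
import numpy as np

def algorithm1(N=10_000, K=4, M=10, b=2, seed=0):
    """Generate M perfectly calibrated member predictions that combine into
    an uncalibrated ensemble (the construction of Example 2 / Algorithm 1)."""
    rng = np.random.default_rng(seed)
    p = rng.dirichlet(np.ones(K), size=N)       # true per-sample distributions
    p_hat = np.empty((N, M, K))
    for m in range(M):
        idx = rng.permutation(N)                # member-specific shuffle
        for j in range(0, N, b):
            bin_idx = idx[j:j + b]              # samples in bin B_j
            # Every sample in the bin gets the bin's average true distribution.
            p_hat[bin_idx, m] = p[bin_idx].mean(axis=0)
    y = np.array([rng.choice(K, p=pi) for pi in p])  # labels drawn from p
    return p, p_hat, y

p, p_hat, y = algorithm1()
ensemble = p_hat.mean(axis=1)  # average of calibrated members; not calibrated itself
```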
However, the regions for different members are different due to the random assignment. Therefore, the corresponding ensemble is not automatically calibrated. Figure 4 shows that the ensemble is uncalibrated, with an ACE of 0.0697.\nAlgorithm 1: Algorithm for generating calibrated members that yield an uncalibrated ensemble\nResult: $\{p^{(i)}\}_{i=1}^N$, $\{\hat{p}^{(i)}, \hat{y}^{(i)}\}_{i=1}^N$\n  $b = 2$;  // bin size, number of samples in one bin\n  $K = 4$;  // number of classes\n  $M = 10$;  // number of members\n  $N = 1000000$;  // number of samples\n  $p^{(i)} \sim \mathrm{Dirichlet}(\alpha = 1)$, $i = 1, \dots, N$;  // true data distribution, sampled from a Dirichlet distribution with equal concentration parameters of 1\n  $I = [1, 2, \dots, N]$;  // index vector\n  for $m$ in $[1, \dots, M]$ do\n    $\tilde{I} \leftarrow \mathrm{shuffle}(I)$;\n    for $j$ in $[1, \dots, \lceil N/b \rceil]$ do\n      $B_j = (b(j-1), \min\{bj, N\}]$;  $\hat{p}_{B_j} = \frac{1}{b}\sum_{l \in B_j} p^{(\tilde{I}_l)}$;\n    end\n    for $i$ in $[1, \dots, N]$ do\n      $j = \lceil i/b \rceil$;  $\hat{p}^{(\tilde{I}_i, m)} = \hat{p}_{B_j}$;\n    end\n  end\nExample 3. In this example, we show that for a finite number of samples, the regions $\mathcal{S}^p_j(\theta, \epsilon)$ and $\tilde{\mathcal{S}}^p_j(\theta, \epsilon)$ derived from the samples differ from the theoretical regions, leading to a difference between theoretical calibration error measures and the values estimated from the finite samples. Algorithm 2 generates data with a difference between the finite sample-based calibration error and the theoretical error. The theoretical ACE of the predicted probabilities in Algorithm 2 is $\frac{9}{32}$. However, the finite sample-based ACE is 0.\nThe true data distribution is $P(\omega_j|x) = \frac{1}{4}$, $p(x) \propto 1$. The samples in $\mathcal{D}$ are assigned to bins of size 2. The type of bin that a sample is assigned to determines the predicted probability of that sample. Considering each class $\omega_j$, there are three types of bins: $B_{p=1}$: both samples belong to class $\omega_j$; $B_{p=0.5}$: only one sample is of class $\omega_j$; $B_{p=0}$: neither sample is of class $\omega_j$. The corresponding probabilities are\n$$P(x \in B_1) = \frac{1}{16}, \quad P(x \in B_{0.5}) = \frac{6}{16}, \quad P(x \in B_0) = \frac{9}{16}, \quad (46)$$\nthen\n$$\int_{x \in \mathcal{R}^p_j(\theta, 0)} \big( P(\omega_j|x;\theta) - P(\omega_j|x) \big) p(x)\,dx = \int_{x \in \mathcal{R}^p_j(\theta, 0)} \big( p - \tfrac{1}{4} \big) p(x)\,dx \quad (47)$$\n$$= \big( p - \tfrac{1}{4} \big) P(x \in B_p) \quad (48)$$\ntherefore,\n$$\mathrm{ACE}(\theta) = \frac{1}{4} \int_0^1 \Big| \sum_{j=1}^{4} \int_{x \in \mathcal{R}^p_j(\theta, 0)} \big( P(\omega_j|x;\theta) - P(\omega_j|x) \big) p(x)\,dx \Big|\, dp \quad (49)$$\n$$= \sum_{p \in \{0, 0.5, 1\}} \Big| \big( p - \tfrac{1}{4} \big) P(x \in B_p) \Big| = \frac{9}{32} \quad (50)$$\nA.4 ADDITIONAL EXPERIMENTAL RESULTS\nIn this section, we show some comparison experiments complementing the empirical results in Section 4.3. We conducted experiments on the CIFAR-100 and CIFAR-10 datasets. Tables 3 and 4 display the performance of the individual models: LeNet, DenseNet 100, DenseNet 121, and Wide ResNet 28-10. All systems are trained with data augmentation (random cropping and horizontal flipping) and simple mean/std normalization. The original training/test split of the CIFAR datasets is 50,000/10,000 images. We hold out 5,000 images from the training set as a validation set (10%) for temperature and combination-weight optimization; this leads to a slight accuracy degradation compared to training with all 50,000 images. We run our experiment 10 times to obtain the deviations.\nIn Section 4.3, we presented ensemble calibration on CIFAR-100. The counterpart on CIFAR-10 is given in Table 5. Unless otherwise specified, all sample-based evaluation criteria in this paper use 15 bins, following previous literature (Guo et al., 2017). The temperatures in the pre-, post- and dynamic post-combination modes are optimized on the validation set by minimizing ECE (Liang et al., 2020), using SGD with a learning rate of 0.1 for 400 iterations.\nAlgorithm 2: Algorithm for generating data with a difference between the finite sample-based ACE and the theoretical ACE.
Result: $\{\hat{p}^{(i)}, \hat{y}^{(i)}\}_{i=1}^N$\n  $b = 2$;  // bin size, number of samples in one bin\n  $K = 4$;  // number of classes\n  $g = [g_1, \dots, g_N]$;  // vector of ground-truth labels, where $g_i \in \{1, \dots, K\}$; the numbers of labels for the different classes are equal\n  $I = [1, 2, \dots, N]$;  // index vector\n  $\tilde{I} \leftarrow \mathrm{shuffle}(I)$;\n  for $j$ in $[1, \dots, \lceil N/b \rceil]$ do\n    $B_j = (b(j-1), \min\{bj, N\}]$;  $\hat{p}_{B_j} = \frac{1}{b}\sum_{l \in B_j} [\delta(1, g_{\tilde{I}_l}), \delta(2, g_{\tilde{I}_l}), \dots, \delta(K, g_{\tilde{I}_l})]$;\n  end\n  for $i$ in $[1, \dots, N]$ do\n    $j = \lceil i/b \rceil$;  $\hat{p}^{(\tilde{I}_i)} = \hat{p}_{B_j}$;\n  end\nIt can be observed that the combination of DenseNet and ResNet improves the calibration performance, while the combination with LeNet does not help. This is because LeNet is not as over-confident as DenseNet and ResNet (as shown in Figure 1). Hence the simple ensemble combination does not help, but rather aggravates the miscalibration. The three temperature-based calibration methods effectively improve the system calibration performance on CIFAR-10 as well.\nTable 6 gives the ensemble combination based on AUC weights (Zhong & Kwok, 2013). The AUC weights are much more even than the Max LL weights in Table 2. The structures being combined are first calibrated; nevertheless, applying post-combination calibration to the ensemble yields further gains. We evaluated post-combination for topology combination in Table 7. The post- and dynamic post-combination methods are applied to calibrate the topologies and the topology ensemble. The dynamic temperature method shows a clear advantage in obtaining a calibrated ensemble of multiple topologies." } ]
2,020
null
SP:2e8e7fca411be533fbe6069ba360c17189be2fee
[ "This paper proposes an interesting method for being able to act and plan robustly in a multiagent simulation and be robust to the reality gap between training time and testing time for agents in a marl setting. the method does show improvements in terms of being able to train the policy for this use case and being more robust to some out of distribution configuration of the environment however these improvements appear to be rather limited. in addition, the organization and writing for the paper is very technical and could be improved with additional background information on the uses of metrics and environments as well as better flow between the content in the paper to understand the importance of the different aspects of the method. These improvements could help the reader understand the novelty and important aspects of the method that are difficult to measure. At the moment it comes across as a mix of different methods combined to be able to support this more robust method without a very clear story about the primary problem the paper is trying to solve or the more significant technical aspect of the method that provides this novel solution." ]
Policies for real-world multi-agent problems, such as optimal taxation, can be learned in multi-agent simulations with AI agents that emulate humans. However, simulations can suffer from reality gaps as humans often act suboptimally or optimize for different objectives (i.e., bounded rationality). We introduce ε-Robust Multi-Agent Simulation (ERMAS), a robust optimization framework to learn AI policies that are robust to such multi-agent reality gaps. The objective of ERMAS theoretically guarantees robustness to the ε-Nash equilibria of other agents – that is, robustness to behavioral deviations with a regret of at most ε. ERMAS efficiently solves a first-order approximation of the robustness objective using meta-learning methods. We show that ERMAS yields robust policies for repeated bimatrix games and optimal adaptive taxation in economic simulations, even when baseline notions of robustness are uninformative or intractable. In particular, we show ERMAS can learn tax policies that are robust to changes in agent risk aversion, improving policy objectives (social welfare) by up to 15% in complex spatiotemporal simulations using the AI Economist (Zheng et al., 2020).
[]
[ { "authors": [ "Dirk Bergemann", "Stephen Morris" ], "title": "Robust mechanism design", "venue": "Econometrica, 73(6):1771–1813,", "year": 2005 }, { "authors": [ "Eric Bonabeau" ], "title": "Agent-based modeling: Methods and techniques for simulating human systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2002 }, { "authors": [ "Paul Christiano", "Zain Shah", "Igor Mordatch", "Jonas Schneider", "Trevor Blackwell", "Joshua Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", "venue": "URL https://arxiv.org/abs/1610. 03518v1", "year": 2016 }, { "authors": [ "Paul Dütting", "Zhe Feng", "Harikrishna Narasimhan", "David Parkes", "Sai Srivatsa Ravindranath" ], "title": "Optimal auctions through deep learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tanner Fiez", "Benjamin Chasnov", "Lillian J. Ratliff" ], "title": "Convergence of Learning Dynamics in Stackelberg Games. arXiv, jun 2019", "venue": "URL http://arxiv.org/abs/1906.01217", "year": 1906 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "venue": "[cs],", "year": 2017 }, { "authors": [ "C. Gini" ], "title": "Variabilità e mutabilità", "venue": null, "year": 1912 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei A Rusu", "Loic Matthey", "Christopher P Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "arXiv preprint arXiv:1707.08475,", "year": 2017 }, { "authors": [ "John H. Holland", "John H. Miller" ], "title": "Artificial Adaptive Agents in Economic Theory", "venue": "American Economic Review,", "year": 1991 }, { "authors": [ "Linfang Hou", "Liang Pang", "Xin Hong", "Yanyan Lan", "Zhiming Ma", "Dawei Yin" ], "title": "Robust Reinforcement Learning with Wasserstein Constraint", "venue": "URL http://arxiv.org/abs/2006.00945", "year": 2020 }, { "authors": [ "Peter Howitt" ], "title": "What have central bankers learned from modern macroeconomic theory", "venue": "Journal of Macroeconomics,", "year": 2012 }, { "authors": [ "Stephen James", "Paul Wohlhart", "Mrinal Kalakrishnan", "Dmitry Kalashnikov", "Alex Irpan", "Julian Ibarz", "Sergey Levine", "Raia Hadsell", "Konstantinos Bousmalis" ], "title": "Sim-to-real via sim-to-sim: Dataefficient robotic grasping via randomized-to-canonical adaptation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kun Ho Kim", "Yihong Gu", "Jiaming Song", "Shengjia Zhao", "Stefano Ermon" ], "title": "Cross Domain Imitation Learning. arXiv:1910.00105 [cs, stat], September 2019", "venue": "URL http://arxiv.org/ abs/1910.00105", "year": 1910 }, { "authors": [ "Alan P. 
Kirman" ], "title": "Whom or What Does the Representative Individual Represent", "venue": "Journal of Economic Perspectives,", "year": 1992 }, { "authors": [ "Shihui Li", "Yi Wu", "Xinyue Cui", "Honghua Dong", "Fei Fang", "Stuart Russell" ], "title": "Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Jun Morimoto", "Kenji Doya" ], "title": "Robust Reinforcement Learning", "venue": "Advances in Neural Information Processing Systems", "year": 2001 }, { "authors": [ "Roger B. Myerson" ], "title": "Mechanism Design, pp. 1–13", "venue": "ISBN 978-1-349-95121-5", "year": 2016 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real", "venue": "[cs],", "year": 2019 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On First-Order Meta-Learning Algorithms", "venue": "[cs],", "year": 2018 }, { "authors": [ "Santiago Paternain", "Luiz F.O. Chamon", "Miguel Calvo-Fullana", "Alejandro Ribeiro" ], "title": "Constrained Reinforcement Learning Has Zero Duality Gap", "venue": "URL http://arxiv.org/abs/1910.13393", "year": 2019 }, { "authors": [ "Xue Bin Peng", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Sim-to-Real Transfer of Robotic Control with Dynamics Randomization", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Wolfgang Pesendorfer" ], "title": "Behavioral economics comes of age: A review essay on advances in behavioral economics", "venue": "Journal of Economic Literature,", "year": 2006 }, { "authors": [ "Lerrel Pinto", "James Davidson", "Rahul Sukthankar", "Abhinav Gupta" ], "title": "Robust Adversarial Reinforcement Learning", "venue": "[cs],", "year": 2017 }, { "authors": [ "Lillian J. Ratliff", "Samuel A. Burden", "S. Shankar Sastry" ], "title": "On the Characterization of Local Nash Equilibria in Continuous Games", "venue": "[math],", "year": 2014 }, { "authors": [ "Andrei A. Rusu", "Mel Vecerik", "Thomas Rothörl", "Nicolas Heess", "Razvan Pascanu", "Raia Hadsell" ], "title": "Sim-to-Real Robot Learning from Pixels with Progressive Nets", "venue": "URL https: //arxiv.org/abs/1610.04286v2", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. Jordan", "Pieter Abbeel" ], "title": "Trust Region Policy Optimization", "venue": "[cs],", "year": 2017 }, { "authors": [ "Florian Schäfer", "Anima Anandkumar" ], "title": "Competitive Gradient Descent", "venue": "[cs, math],", "year": 2020 }, { "authors": [ "Herbert A. Simon" ], "title": "From substantive to procedural rationality", "venue": "Years of Economic Theory: Retrospect and prospect,", "year": 1976 }, { "authors": [ "Herbert A. Simon", "Jonathan Schaeffer" ], "title": "The Game of Chess. Technical report, CARNEGIEMELLON UNIV PITTSBURGH PA ARTIFICIAL INTELLIGENCE AND PSYCHOLOGY", "venue": "URL https://apps.dtic.mil/sti/citations/ ADA225613", "year": 1990 }, { "authors": [ "Young-Ho Suh", "Sung-Pil Woo", "Hyunhak Kim", "Dong-Hwan Park" ], "title": "A sim2real framework enabling decentralized agents to execute MADDPG tasks", "venue": "In Proceedings of the Workshop on Distributed Infrastructures for Deep Learning,", "year": 2019 }, { "authors": [ "Richard S. Sutton", "Andrew G. 
Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Chen Tessler", "Yonathan Efroni", "Shie Mannor" ], "title": "Action Robust Reinforcement Learning and Applications in Continuous Control. arXiv:1901.09184 [cs, stat], May 2019", "venue": "URL http:// arxiv.org/abs/1901.09184", "year": 1901 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World", "venue": "[cs],", "year": 2017 }, { "authors": [ "Eric Tzeng", "Coline Devin", "Judy Hoffman", "Chelsea Finn", "Pieter Abbeel", "Sergey Levine", "Kate Saenko", "Trevor Darrell" ], "title": "Adapting Deep Visuomotor Representations with Weak Pairwise Constraints", "venue": "URL https://arxiv.org/abs/1511.07111v5", "year": 2015 }, { "authors": [ "Stephan Zheng", "Alexander Trott", "Sunil Srinivasa", "Nikhil Naik", "Melvin Gruesbeck", "David C. Parkes", "Richard Socher" ], "title": "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies", "venue": "URL http://arxiv", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) offers a tool to optimize policy decisions affecting complex, multiagent systems; for example, to improve traffic flow or economic productivity. In practice, the need for efficient policy evaluation necessitates training on simulations of multi-agent systems (MAS). Agents in these systems can be emulated with fixed behavioral rules, or by optimizing for a reward function using RL (Zheng et al., 2020). For instance, the impact of economic policy decisions are often estimated with agent-based models (Holland & Miller, 1991; Bonabeau, 2002). This commonly introduces a reality gap as the reward function and resulting behavior of simulated agents might differ from those of real people (Simon & Schaeffer, 1990). This becomes especially problematic as the complexity of the simulation grows, for example, when increasing the number of agents, or adding agent affordances (Kirman, 1992; Howitt, 2012). As a result, policies learned in imperfect simulations need to be robust against reality gaps in order to be effective in the real world.\nWe introduce -Robust Multi-Agent Simulation (ERMAS), a robust optimization framework for training robust policies, termed planners, that interact with real-world multi-agent systems. ERMAS trains robust planners by simulating multi-agent systems with RL and sampling worst-case behaviors from the worst-case agents. This form of multi-agent robustness poses a very challenging multilevel (e.g., max-min-min) optimization problem. Existing techniques which could be applied to ERMAS’s multi-agent robustness objective, e.g., naive adversarial robustness (Pinto et al., 2017) and domain randomization (Tobin et al., 2017; Peng et al., 2018), are intractable as they would require an expensive search through a large space of agent reward functions. Alternative frameworks improve robustness, e.g., to changes in environment dynamics, observation or action spaces (Pinto et al., 2017; Li et al., 2019; Tessler et al., 2019), but do not address reality gaps due to reward function mismatches, as they use inappropriate metrics on the space of adversarial perturbations.\nTo solve this problem, ERMAS has three key features: 1) It formulates a multi-agent robustness objective equivalent to finding the worst case -Nash equilibria. 2) It optimizes a tractable dual problem to the equivalent objective. 3) It approximates the dual problem using local solution concepts and first-order meta-learning techniques (Nichol et al., 2018; Finn et al., 2017). ERMAS ultimately yields policies that are robust to other agents’ behavioral deviations, up to a regret of .\nWe show that ERMAS learns robust policies in repeated bimatrix games by finding the worst-case reality gaps, corresponding to highly adversarial agents, which in turn leads to more robust planners. We further consider a challenging, large-scale spatiotemporal economy that features a social planner that learns to adjust agent rewards. In both settings, we show policies trained by ERMAS are more robust by testing them in perturbed environments with agents that have optimized for reward functions unused during ERMAS training. This generalization error emulates the challenge faced in transferring policies to the real world. In particular, we show ERMAS can find AI Economist tax policies that achieve higher social welfare across a broad range of agent risk aversion objectives. 
In all, we demonstrate ERMAS is effective even in settings where baselines fail or become intractable.\nContributions To summarize, our contributions are:\n• We derive a multi-agent adversarial robustness problem using ε-Nash equilibria, which poses a challenging nested optimization problem.\n• We describe how ERMAS efficiently solves the nested problem using dualization, trust-regions, and first-order meta-learning techniques.\n• We empirically validate ERMAS by training robust policies in two multi-agent problems: sequential bimatrix games and economic simulations. In particular, ERMAS scales to complex spatiotemporal multi-agent simulations." }, { "heading": "2 ROBUSTNESS AND REALITY GAPS IN MULTI-AGENT ENVIRONMENTS", "text": "We seek to learn a policy $\pi_p$ for an agent, termed the planner, that interacts with an environment featuring $N$ other agents. The planner's objective depends both on its own policy and the behavior of other agents in response to that policy; this is a multi-agent RL problem in which the planner and agents co-adapt. In practice, evaluating (and optimizing) $\pi_p$ requires use of a simulation with agents that emulate those in the environment of interest (i.e. the real world), which might contain agents whose reward function differs from those used in the simulation. Our goal is to train planner policies that are robust to such reality gaps.\nFormally, we build on partially-observable multi-agent Markov Games (MGs) (Sutton & Barto, 2018), defined by the tuple $M := (\mathcal{S}, \mathcal{A}, r, \mathcal{T}, \gamma, o, \mathcal{I})$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively, and $\mathcal{I}$ are agent indices. Since the MG played by the agents depends on the choice of planner policy, we denote the MG given by $\pi_p$ as $M[\pi_p]$. MGs proceed in episodes that last $H + 1$ steps (possibly infinite), covering $H$ transitions. At each time $t \in [0, H]$, the world state is denoted $s_t$. Each agent $i = 1, \dots, N$ receives an observation $o_{i,t}$, executes an action $a_{i,t}$ and receives a reward $r_{i,t}$. The environment transitions to the next state $s_{t+1}$, according to the transition distribution $\mathcal{T}(s_{t+1}|s_t, \mathbf{a}_t)$.¹ Each agent observes $o_{i,t}$, a part of the state $s_t$. Agent policies $\pi_i$ are parameterized by $\theta_i$, while the planner policy $\pi_p$ is parameterized by $\theta_p$.\nThe Nash equilibria of $M[\pi_p]$ are agent policies where any unilateral deviation is suboptimal:\n$$\mathcal{A}_{NE}(\pi_p) := \{\pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p)\}, \quad (1)$$\nwhere $J_i(\pi, \pi_p) := \mathbb{E}_{\pi, \pi_p}\big[\sum_{t=0}^{H} \gamma^t r^{(i)}_t\big]$ denotes the objective of agent $i$. Hence, a rational agent would not unilaterally deviate from $\pi \in \mathcal{A}_{NE}(\pi_p)$. To evaluate a fixed planner policy $\pi_p$, we simply sample outcomes using policies $\pi \in \mathcal{A}_{NE}(\pi_p)$. Also optimizing $\pi_p$ introduces a form of two-level learning. Under appropriate conditions, this can be solved with simultaneous gradient descent (Zheng et al., 2020; Fiez et al., 2019).\nRobustness Objective As noted before, we wish to learn planner policies $\pi_p$ that are robust to reality gaps arising from changes in agent reward functions, e.g., when agents are boundedly rational.² We develop a robustness objective for the planner by formalizing such reality gaps as perturbations\n¹Bold-faced quantities denote vectors or sets, e.g., $\mathbf{a} = (a_1, \dots, a_N)$, the action profile for $N$ agents.
²This type of reality gap occurs when the simulated environment's reward function $r$ fails to rationalize the actual behavior of the agents in the real environment, i.e., when agents in the real world act suboptimally with respect to the simulation's reward function.\n$\xi_i \in \Xi$ to agent objectives, where the uncertainty set $\Xi : (\mathcal{S}, \mathcal{A})^H \to \mathbb{R}$ is the space of possible perturbations and represents uncertainty about the objectives of other agents. We extend $\mathcal{A}_{NE}$ to $\mathcal{A}_{NE}(\pi_p, \xi)$, conditioning the Nash equilibria on perturbations $\xi$:\n$$\mathcal{A}_{NE}(\pi_p, \xi) := \{\pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J^\xi_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J^\xi_i(\pi_i, \pi_{-i}, \pi_p)\}, \quad (2)$$\n$$J^\xi_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) := J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) + \mathbb{E}_{\tau_i \sim \tilde{\pi}_i, \pi_{-i}, \pi_p}[\xi_i(\tau_i)] \quad (3)$$\nwhere $\tau_i$ is a trajectory (sequence of state-action pairs). Following Morimoto & Doya (2001), a robust planner optimizes its reward, subject to agents playing a perturbed Nash equilibrium $\mathcal{A}_{NE}(\pi_p, \xi)$ that maximally penalizes the planner:\n$$\pi^*_p = \arg\max_{\pi_p} \min_{\xi \in \Xi} \min_{\pi \in \mathcal{A}_{NE}(\pi_p, \xi)} J_p(\pi, \pi_p). \quad (4)$$\nNote that agent policies $\pi \in \mathcal{A}_{NE}(\pi_p, \xi)$ describe agents that optimize their own reward function, and we assume an adversary chooses $\xi$.\nBounded Uncertainty Set There are two challenges with Equation 4. First, if the adversary can arbitrarily choose $\Xi$, the worst case is uninformative.³ Second, depending on the complexity of $\Pi$, the uncertainty set $\Xi$ may be high-dimensional and intractable to search. We address these issues by upper-bounding the size of the uncertainty set, i.e., the $L_\infty$ norm of $\xi_i \in \Xi$, by the term $\epsilon$. Thus $\epsilon$ upper-bounds the difference between the reward functions of agents in the training and testing environments, e.g., between simulation and the real world. This bounded uncertainty set is:\n$$\Xi_\epsilon := \Big\{ \xi \,\Big|\, \sup_{\pi, \pi_p} |\xi_i(\pi, \pi_p)| < \epsilon, \text{ for all } i \in \mathcal{I} \Big\}. \quad (5)$$\nThis uncertainty set is equivalent to the $\epsilon$-equilibria of $M[\pi_p]$:\n$$\mathcal{A}_{NE}(\pi_p, \epsilon) := \{\pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p) + \epsilon\}. \quad (6)$$\n$\epsilon$ is a tunable hyperparameter, as is the case with most robust RL (Pinto et al., 2017; Li et al., 2019), but a good starting value is the anticipated error in reward objective estimates (application-specific). Using Eq. 6, the robustness objective becomes the following constrained optimization problem:\n$$\arg\max_{\pi_p} \underbrace{J^*_{p,\min}(\pi_p, \epsilon)}_{\text{Planner-OPT}}, \quad \text{where } J^*_{p,\min}(\pi_p, \epsilon) := \underbrace{\min_{\pi \in \mathcal{A}_{NE}(\pi_p, \epsilon)} J_p(\pi, \pi_p)}_{\text{Agent-Adv-Search}}. \quad (7)$$\nUsing $\mathcal{A}_{NE}(\pi_p, \epsilon)$ replaces the problem of intractably searching through $\Xi$ with searching through $\mathcal{A}_{NE}(\pi_p, \epsilon)$, and thus merges the two nested min operations in Equation 4. Conceptually, this transfers the worst-case search problem to the agents: in Agent-Adv-Search, agents find an adversarial equilibrium; Planner-OPT optimizes the planner given adversarial agents. Note that the constraint set in Eq. 7 is non-empty for $\epsilon \ge 0$; the constraints (Eq. 6) simply upper-bound the regret of agents. By definition, for non-empty bounded $\Pi$, there exists an optimal policy with zero regret." }, { "heading": "3 ERMAS: ROBUST POLICIES IN MULTI-AGENT SIMULATIONS", "text": "We now introduce ERMAS, an efficient optimization framework to solve the robustness problem in Equation 7. ERMAS proceeds in three steps. First, it dualizes Equation 7 following constrained RL. Second, it defines a trust region for the uncertainty set $\mathcal{A}_{NE}(\pi_p, \epsilon)$, approximating the dual problem. Finally, it uses first-order meta-learning with trust regions to solve the approximate dual problem.
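Operationally, the inner search of Eq. (7) ranges over the ε-equilibria of Eq. (6), membership in which can be checked by estimating each agent's regret. The following Python sketch makes this concrete; the environment rollout interface and the best-response search are assumed placeholders, not part of ERMAS's published implementation.

```python
import numpy as np

def estimate_return(env, policies, agent_idx, n_episodes=64, gamma=0.99):
    """Monte-Carlo estimate of J_i for a fixed joint policy profile.
    `env.rollout(policies)` is an assumed helper returning per-agent reward lists."""
    total = 0.0
    for _ in range(n_episodes):
        rewards = np.asarray(env.rollout(policies)[agent_idx])
        total += float(np.dot(gamma ** np.arange(len(rewards)), rewards))
    return total / n_episodes

def regret(env, policies, agent_idx, best_response_fn):
    # J_i(pi_i~, pi_-i, pi_p) - J_i(pi): gain from the best unilateral deviation.
    j_current = estimate_return(env, policies, agent_idx)
    deviant = dict(policies)
    deviant[agent_idx] = best_response_fn(env, policies, agent_idx)
    return estimate_return(env, deviant, agent_idx) - j_current

def in_eps_equilibrium(env, policies, best_response_fn, eps):
    # Eq. (6): pi is in A_NE(pi_p, eps) iff every agent's regret is at most eps.
    return all(regret(env, policies, i, best_response_fn) <= eps
               for i in policies if i != "planner")
```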
See Appendix A.1 for the detailed algorithm description.\nDualizing Agent-Adv-Search The agent search problem in Equation 7 can be formulated similarly to a constrained RL problem (Paternain et al., 2019), where the primary objective of the agents is to minimize the planner's reward and the secondary objective is to maximize their own reward. While conventional constrained RL enforces a constant lower bound in the constraint, e.g.,\n³For instance, by setting $\xi_i$ such that $J^\xi_i = -J_p$.\n$J_i(\pi, \pi_p) \ge C$, we enforce a dynamic one: $\forall i \in 1 \dots N : J_i(\pi, \pi_p) \ge J_i(\pi^*_i, \pi_{-i}, \pi_p) - \epsilon$, where $\pi^*_i$ is the optimal unilateral deviation for agent $i$: $\pi^*_i := \arg\max_{\tilde{\pi}_i \in \Pi} J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p)$.\nLetting $\lambda$ denote the Lagrange multipliers, we can dualize Agent-Adv-Search, i.e., Equation 4, as:\n$$\min_\pi \underbrace{\Big( J_p(\pi, \pi_p) - \sum_{i=1}^{N} \lambda_i \big[ J_i(\pi^*_i, \pi_{-i}, \pi_p) - J_i(\pi, \pi_p) - \epsilon_i \big] \Big)}_{J^\dagger_{p,\min}(\pi_p, \epsilon)}, \quad (8)$$\nThis is identical to the dualization of constrained reinforcement learning, whose duality gap is empirically negligible and provably zero under weak assumptions (Paternain et al., 2019). We now abuse notation to denote $\theta := [\theta_1, \dots, \theta_N, \theta_p]$ and $J_p(\theta) := J_p(\pi, \pi_p)$, where $\theta$ are the parameters of $\pi$. To solve Equation 8, the agents apply the gradients:\n$$\nabla_{\theta_i} J^\dagger_{p,\min}(\pi_p, \epsilon) = -\nabla_{\theta_i} J_p(\theta) - \lambda_i \nabla_{\theta_i} \big[ J_i(\theta'_i(\theta), \theta_{-i}) - J_i(\theta) \big], \quad (9)$$\nwhere $\theta'_i(\theta)$ denotes the parameters of the optimal unilateral deviation $\pi^*_i$ for agent $i$, i.e. the parameters that minimize local regret, which depend on the current policy parameters $\theta_i$. $\lambda_i$ is updated as:\n$$\nabla_{\lambda_i} J^\dagger_{p,\min}(\pi_p, \epsilon) = J_i(\pi^*_i, \pi_{-i}, \pi_p) - J_i(\pi, \pi_p) - \epsilon_i. \quad (10)$$\nEquation 8 still poses a challenge through the $J_i(\pi^*_i, \pi_{-i}, \pi_p)$ terms, which correspond to unknown agent regret. We now detail the efficient approximation of the value and derivative of agent regret, using local and meta-learning approximations, respectively.\nTrust Regions using Local $\epsilon$-equilibria Estimating regret requires knowledge of the optimal unilateral deviation for agent $i$. We can simplify this problem by proposing a refinement of $\epsilon$-equilibria inspired by the notion of local Nash equilibria in differentiable games (Ratliff et al., 2014).\nDefinition 3.1. A strategy $\pi$ is a local $\epsilon$-Nash equilibrium if there exist open sets $W_i \subset \Pi^N$ such that $\pi_i \in W_i$ and, for each $i \in \{1, \dots, N\}$, $J_i(\pi'_i, \pi_{-i}) \le J_i(\pi) + \epsilon'$ for all $\pi'_i \in W_i \setminus \{\pi_i\}$, where $\epsilon' := \sup_{\pi'_i \in W_i} \mathrm{KL}(\pi_i \| \pi'_i)$.\nBy instead performing Agent-Adv-Search over the local $\epsilon$-Nash equilibria, we can limit the set of unilateral deviations under consideration to a small trust region $\Pi_\eta(\pi)$:\n$$\mathcal{A}_{NE}(\pi_p, \epsilon, \eta) := \{\pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi_\eta(\pi_i) : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p) + \epsilon\}, \quad (11)$$\n$$\Pi_\eta(\pi) := \{\pi' \in \Pi \mid \mathrm{KL}(\pi \| \pi') \le \eta\}, \quad (12)$$\nwhere $\eta > 0$ defines the size of the trust region. For small $\eta$, algorithms such as TRPO (Schulman et al., 2017) can be used to efficiently approximate optimal local deviations of $\pi_i$, affording reasonable approximations of $J_i(\pi^*_i, \pi_{-i}, \pi_p)$. Note that our usage of trust-region algorithms is not for optimization purposes: ERMAS requires trust-region optimization to ensure that the equilibria it considers are limited to a local neighborhood of the policy space (Eq. 11).\nFirst-Order Meta Learning Approximation The full gradient in Equation 9 is also complicated by the need to estimate the derivative of the local regret, $\nabla_{\theta_i}[J_i(\theta'_i(\theta), \theta_{-i}) - J_i(\theta)]$. The second term maximizes the performance of the agent's policy and is simply found with the policy gradient. The first term is less straightforward: it minimizes the performance of the best agent policy in the current trust region.
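Read as a Lagrangian, Eqs. (8)–(10) prescribe a simultaneous update: descend $J_p + \lambda_i \cdot \text{regret}_i$ in the agent parameters and ascend the constraint violation in $\lambda_i$, projected onto $\lambda_i \ge 0$. The PyTorch-style sketch below is one way to write this under that reading; the learning rates, the detach placement, and treating the deviation return as differentiable are our assumptions (the paper obtains that derivative with the meta-learning approximation discussed next).

```python
import torch

def ermas_agent_step(theta_i, lam_i, J_p, J_cur, J_dev, eps_i,
                     lr_theta=1e-3, lr_lam=1e-2):
    """One simultaneous dual update for agent i (Eqs. 8-10, Lagrangian reading).
    theta_i: list of the agent's parameter tensors (requires_grad=True)
    lam_i:   scalar multiplier tensor
    J_p, J_cur, J_dev: planner return, agent return, and best-local-deviation
    return, all scalar tensors differentiable w.r.t. theta_i."""
    # Descend J_p + lam * (J_dev - J_cur) in theta_i; Eq. (9) as a descent step.
    lagrangian = J_p + lam_i.detach() * (J_dev - J_cur)
    grads = torch.autograd.grad(lagrangian, theta_i, retain_graph=True)
    with torch.no_grad():
        for p, g in zip(theta_i, grads):
            p -= lr_theta * g
        # Eq. (10): ascend on the multiplier, keeping it non-negative.
        lam_i += lr_lam * (J_dev - J_cur - eps_i)
        lam_i.clamp_(min=0.0)
    return lam_i
```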
We note that this first term, $\nabla_{\theta_i} J_i(\theta'_i(\theta), \theta_{-i})$, corresponds to a meta-learning gradient. We follow REPTILE (Nichol et al., 2018) to obtain a first-order approximation of an $M$-step meta-learning gradient:\n$$\nabla_{\theta_i} J_i(\theta'_i(\theta), \theta_{-i}) = g_1 - \frac{1}{M} \sum_{k=1}^{M} g_k, \quad g_k = \nabla_{\theta_i} J_i\Big(\theta_i + \sum_{j=1}^{k-1} g_j,\; \theta_{-i},\; \theta_p\Big), \quad (13)$$\nwhere $g_k$ denotes the $k$-th policy gradient in the direction of $J_i$. In practice, we scale this meta-learning term with a hyperparameter $\beta$, as $\beta < 1$ incorporates a helpful inductive bias whereby maximizing agent reward leads to local maxima. We can alternatively apply this gradient update periodically, to both mimic $\beta < 1$ and reduce computational overhead. First-order meta-learning approximations are known to be empirically effective, and are necessary for ERMAS to efficiently solve Eq. 8.\nERMAS By solving the dual problem (Eq. 8), ERMAS yields robustness to $\epsilon$-equilibria and, equivalently, to uncertainty in agent objectives. ERMAS solves the dual problem by combining trust-region and meta-learning techniques to estimate and differentiate agent regret. Algorithms 1 and 2 (Appendix A.1) describe an efficient implementation of this procedure for nested policy learning." }, { "heading": "4 EXPERIMENTAL VALIDATION IN MULTI-AGENT SIMULATIONS", "text": "4.1 ERMAS SOLVES THE INNER LOOP: AGENT-ADV-SEARCH\nConstrained Repeated Bimatrix Game We analyze the agent behaviors learned by Agent-Adv-Search in the experiments depicted in Figure 2. These experiments apply ERMAS to the classic repeated bimatrix game, which is well studied in the game theory literature, as its Nash equilibria can be solved for efficiently. At each timestep, a row player (Agent 1) and a column player (Agent 2) simultaneously choose from a finite number of actions. Each pair of actions $(i, j)$ corresponds to a pair of payoffs for the agents, $r_1(i, j)$ and $r_2(i, j)$. We select the payoff matrices $r_1$ and $r_2$, illustrated in Figure 1a, so that only one Nash equilibrium exists and that this equilibrium constitutes a "tragedy of the commons," where agents selfishly optimizing their own reward leads to less reward overall. To extend this repeated bimatrix game into sequential decision making, we further constrain the game so that, at any timestep, agents can only choose an action adjacent to their action in the previous timestep. We also introduce a passive planner that observes the game and receives a payoff $r_p(i, j)$. The planner does not take any actions, and its payoff is constructed such that its reward is high when the agents are at the Nash equilibrium. In effect, this toy setting allows us to verify that ERMAS samples realistic worst-case behaviors, that is, that Agents 1 and 2 learn to deviate from their tragedy-of-the-commons equilibrium in order to reduce the planner's reward, without significantly increasing their own regret.\nDiscovering $\epsilon$-Equilibria Figure 2 (left) visualizes how the equilibria reached by AI agents balance the reward of the planner (y-axis) and the agents (x-axis). Conventional multi-agent RL discovers the Nash equilibrium, which is visualized in the top left. At this equilibrium, the agents do not cooperate and the planner receives a high reward. For small values of $\epsilon$, ERMAS also discovers the Nash equilibrium. Because $\epsilon$ acts as a constraint on agent regret, larger values of $\epsilon$ enable ERMAS to deviate farther from the Nash equilibrium, discovering $\epsilon$-equilibria to the bottom right that result in lower planner rewards. Deviations from a Nash equilibrium are associated with higher regret, meaning that regret should increase with $\epsilon$.
Figure 2 (middle) clearly demonstrates that ERMAS imparts this trend. As described by Equation 7, this display of adversarial behavior is key to learning a more robust planner." }, { "heading": "4.2 ALL THE COMPONENTS OF ERMAS ARE NECESSARY", "text": "Dynamic or Frozen Lagrange Multipliers $\lambda_i$ The Lagrange multipliers $\lambda$ balance the dual objectives of seeking stable agent equilibria and minimizing the planner's reward. Recall that smaller values of $\lambda$ mean agent objectives are more antagonistic. The $\lambda$ are updated using local estimates of agent regret, as described in Equation 8. We can validate that these updates are necessary by analyzing the equilibria learned by ERMAS when $\lambda$ are not updated. Fixing a value of $\lambda$ reduces Equation 8 to learning with shared rewards. Figure 2 (right) visualizes the equilibria discovered with frozen $\lambda$ in the same format as the Figure 2 (left) plot. This visualization shows that freezing $\lambda$ affects the equilibria discovered by ERMAS; the bottom-right quadrant, which contains the $\epsilon$-equilibria discovered with dynamic $\lambda$, is not reached for any value of $\lambda$. This validates that certain $\epsilon$-equilibria are only reachable with dynamic $\lambda$, and hence the updates in Eq. 8 are necessary for proper behavior.⁴\n⁴However, we recommend temporarily freezing $\lambda$ at the top of Algorithm 2 to "warm up" agent policies.\nFirst-order Approximation Using the first-order approximation of Equation 13 might slow learning, since we step "away" from the direction that the standard policy gradient suggests. In contrast, we find that the ERMAS update can significantly speed up convergence. Figure 3 depicts the learning curves of variants of ERMAS with varying weights $\beta$ on the meta-learning term. For instance, when $\beta = 0$, the meta-learning term is completely dropped. We find that lines corresponding to larger values of $\beta$ reach improved agent rewards half an order of magnitude faster than lower values of $\beta$. However, larger values of $\beta$ fail to reach the desirable region of adversarial $\epsilon$-equilibria. Both very large ($\beta > 1/N$) and very small ($\beta \sim 0$) values fail to fully reach and remain in the adversarial region, but an appropriate range for $\beta$ gives rapid convergence and good asymptotic behavior.\n4.3 SOLVING PLANNER-OPT USING AGENT-ADV-SEARCH\nWe now show that ERMAS yields planners $\pi_p$ that are more robust to uncertainty in the agents' reward functions. Specifically, we empirically validate that ERMAS can find strong solutions to the nested optimization problems Agent-Adv-Search and Planner-OPT in two simulations: repeated bimatrix games and economic simulations (see Figure 1).\nEvaluation Workflow To measure planner robustness, we evaluate trained planners in test environments with a reality gap (i.e. reward parameterizations that differ from the training environment), containing test agents that are optimized for the test environment's reward functions and that have not been seen by the planner. We proceed as follows: 1) Train the agents and planner with ERMAS. 2) Fix the planner and transfer it to a test environment, then train new agents from scratch in the test environment with reward functions unseen by the planner. 3) Report the value of the planner objective after the new agents have converged.\nAugmented Repeated Bimatrix Games We now extend the bimatrix game to a nested reinforcement learning problem by allowing the planner to take actions and influence its own experienced reward. The planner now selects an integer $a \ge 0$ that modifies its reward function: $a\, r_p(i, j) - a^2$.
This function is chosen for its quadratic form: $a$ scales the reward but adds a cost $a^2$, which disincentivizes large values of $a$. Agents are modified to receive a new reward function $r'_i = r_i - Q\, r_p$, where $r_i$ is their original reward function and $Q \le 1$ is an unknown scalar that may differ between the training and testing environments. If $Q = 0$, agents act identically to the experiments in Figure 2. For environments with larger values of $Q$, e.g., $Q = 0.5$, agents have an incentive to be adversarial.\nFigure 4 shows the test performance of ERMAS against vanilla multi-agent RL policies learned with $Q_{\text{train}} = 0.5$ or with $Q_{\text{train}} = 0$. Naturally, MARL trained with $Q_{\text{train}} = 0.5$ underperforms MARL trained with $Q_{\text{train}} = 0$ when evaluated in environments where $Q_{\text{test}} < 0.3$. However, MARL trained with $Q_{\text{train}} = 0.5$ outperforms when $Q_{\text{test}} > 0.3$. Agent-Adv-Search successfully finds the worst-case perturbation, and yields a planner policy at least as good as MARL with $Q_{\text{train}} = 0.5$. This shows that ERMAS can produce policies that are robust to uncertainty in agents' objectives, and is a tractable adversarial robustness algorithm.\nExisting single-agent and multi-agent robust reinforcement learning algorithms optimize for different robustness objectives than ERMAS. However, some general techniques for addressing Sim2Real can be extended to provide an alternative to ERMAS's adversarial approach. Our first baseline extends the technique of domain randomization (DR) by applying random perturbations to agent rewards. For DR, agents receive $r'_i = r_i + \sigma$, where $\sigma \sim U[-1, 1]$ (recall $r_i \in [0, 1]$). These perturbations are re-randomized periodically between episodes. Even in a simple 4-by-4 bimatrix game, there are $2304 = 16 \text{ states} \times 16 \text{ states} \times 9 \text{ actions}$ possible reward values to perturb; a 2304-dimensional space is too sparse to cover uniformly. Furthermore, in contrast to DR of visual inputs or dynamics, there is a natural latency to the effect of DR on agent rewards: randomization only has an effect if agents learn to adapt to their perturbed rewards. Our second baseline, risk robustness (RR), applies a concavity to planner rewards, e.g., $\log r_p$. This encourages robustness to environment stochasticity, including the randomness of agent policies. As illustrated in Figure 4, neither the domain randomization nor the risk robustness baseline yields meaningful robustness, even in this simple setting." }, { "heading": "4.4 ERMAS FOR LARGE-SCALE TWO-LEVEL LEARNING: AI ECONOMIST", "text": "We now address an important use case, tax policy design, to demonstrate ERMAS at scale. We train a robust AI Economist: a planner for optimal taxation that acts in a spatiotemporal economic simulation; see Zheng et al. (2020) for a detailed description. This setting is very challenging given the scale of the simulation and the agents' affordances. In particular, adversarial and randomization baselines do not scale to this setting. In this simulation, agents optimize their expected utility $\mathbb{E}_{\pi,\pi_p}\big[\sum_{t=0}^{T} r_{i,t}\big]$, where the utility $r_{i,t}$ is a function of labor $l_{i,t}$ and post-tax endowment $\tilde{x}_{i,t}$:\n$$\tilde{z}_{i,t} = z_{i,t} - T(z_{i,t}), \quad \tilde{x}_{i,t} = \sum_{t' \le t} \tilde{z}_{i,t'}, \quad r_{i,t}(\tilde{x}_{i,t}, l_{i,t}) = \frac{\tilde{x}_{i,t}^{1-\eta} - 1}{1-\eta} - l_{i,t}, \quad \eta > 0, \quad (14)$$\nwhere $\eta$ sets the degree of risk aversion (higher $\eta$ means higher risk aversion). Agents earn income $z_{i,t}$ and pay taxes $T(z_{i,t})$, which are set by the planner.
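To make the agent objective of Eq. (14) concrete, here is a small Python sketch of the isoelastic (CRRA-style) utility; the flat tax schedule and the streaming interface are illustrative placeholders, not the AI Economist's actual API.

```python
import numpy as np

def post_tax_endowment(incomes, tax_fn):
    # x~_{i,t}: cumulative post-tax income up to each timestep (Eq. 14).
    z_tilde = np.array([z - tax_fn(z) for z in incomes])
    return np.cumsum(z_tilde)

def isoelastic_utility(incomes, labors, tax_fn, eta=0.23):
    # r_{i,t} = (x~_{i,t}^{1-eta} - 1) / (1 - eta) - l_{i,t}; eta = risk aversion.
    x_tilde = post_tax_endowment(incomes, tax_fn)
    x_safe = np.maximum(x_tilde, 1e-8)  # floor avoids 0**(negative) when eta > 1
    return (x_safe ** (1.0 - eta) - 1.0) / (1.0 - eta) - np.asarray(labors)

# Example usage with a flat 10% tax schedule (a placeholder, not the paper's).
r = isoelastic_utility(incomes=[10.0, 0.0, 5.0],
                       labors=[1.0, 0.0, 0.5],
                       tax_fn=lambda z: 0.1 * z)
```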
The planner optimizes social welfare $\mathrm{swf}$:\n$$\mathrm{swf} = \mathrm{eq}(x) \cdot \mathrm{prod}(x), \quad \mathrm{eq}(x) = 1 - \frac{N}{N-1}\, \mathrm{gini}(x), \quad \mathrm{prod}(x) = \sum_{i=1}^{N} x_i, \quad (15)$$\na combination of equality (Gini, 1912) and productivity.\nFigure 5 shows the performance of ERMAS tax policies along with baseline tax policies (Saez, US Federal) and an AI Economist (MARL). All models are trained on agents with $\eta = 0.23$. We replicate Zheng et al. (2020): MARL outperforms the Saez tax. However, MARL fails to outperform Saez when testing with $\eta < 0.22$ or $\eta > 0.28$. In contrast, ERMAS yields consistent gains across the depicted range of $\eta$. This shows that ERMAS's tax policy can improve swf even if agent risk aversion significantly increases or decreases (e.g., due to economic cycles or exogenous shocks).\nIn fact, ERMAS outperforms the baseline AI Economist for the original setting of $\eta = 0.23$. This suggests that robustness and performance do not necessarily pose a zero-sum trade-off: ERMAS can find equilibria with both high performance and strong generalization. It has been observed empirically in various studies that single-agent robust reinforcement learning similarly yields performance improvements even in the absence of environment perturbations (Pinto et al., 2017)." }, { "heading": "5 RELATED WORK", "text": "Sim2Real: Reality Gaps The Sim2Real problem considers how reality gaps, i.e., mismatches between a simulation and reality, can (negatively) impact RL policies. Robust performance across reality gaps has been studied when applying deep RL to robotic control with visual inputs (James et al., 2019; Christiano et al., 2016; Rusu et al., 2016; Tzeng et al., 2015). To close reality gaps, domain adaptation aims to transfer a simulated distribution (of the world) to the real world (Higgins et al., 2017; Kim et al., 2019). In comparison, the study of reality gaps in multi-agent RL is relatively nascent (Suh et al., 2019; Nachum et al., 2019).\nMulti-Agent Robustness Morimoto & Doya (2001) proposed robust RL for the worst-case adversarial objective (i.e., a max-min optimization problem), inspired by $H_\infty$ control, and provided analytical solutions in the linear setting. The robust RL problem can be solved using an adversarial player that chooses perturbations. Rather than the worst case, Pinto et al. (2017) optimized for the $\alpha$-quantile of reward, using a conditional value-at-risk (CVaR) interpretation of adversarial robustness. We consider perturbations to agent rewards. Other works have studied perturbing transition matrices, observation spaces, and action probabilities, which are relevant to robotics and control applications (Pinto et al., 2017; Tessler et al., 2019; Hou et al., 2020; Li et al., 2019). We also address a more challenging multi-agent setting than previous works. First, agents do not necessarily optimize for the same (robustness) objective. We therefore consider a one-sided robustness problem where only the planner is expected to act robustly. This setting is significantly more difficult than in previous works that assume all agents act robustly (Li et al., 2019). Second, we address a challenging nested reinforcement learning setting of interest to important real-world applications such as economic mechanism design (Zheng et al., 2020). Although we formulate ERMAS as using simultaneous gradient descent, ERMAS can be easily extended to use nested optimization solvers such as competitive gradient descent (Schäfer & Anandkumar, 2020).\nSee Appendix A.2 for additional related work."
}, { "heading": "6 FUTURE WORK", "text": "Future work could extend ERMAS to reality gaps due to different causes, such as changes in transition dynamics, agent affordances, number of agents, etc, which might require more refined approximations to the planner’s robustness objective. Furthermore, it would be interesting to apply ERMAS to problems that feature more general solution concepts, e.g., Bayesian Nash equilibria, correlated equilibria, Walrassian equilibria, and others." }, { "heading": "C. Gini. Variabilità e mutabilità. 1912.", "text": "Irina Higgins, Arka Pal, Andrei A Rusu, Loic Matthey, Christopher P Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. Darla: Improving zero-shot transfer in reinforcement learning. arXiv preprint arXiv:1707.08475, 2017.\nJohn H. Holland and John H. Miller. Artificial Adaptive Agents in Economic Theory. American Economic Review, 81(2):365–71, 1991. URL https://econpapers.repec.org/article/ aeaaecrev/v_3a81_3ay_3a1991_3ai_3a2_3ap_3a365-71.htm. Publisher: American Economic Association.\nLinfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhiming Ma, and Dawei Yin. Robust Reinforcement Learning with Wasserstein Constraint. arXiv:2006.00945 [cs, stat], June 2020. URL http://arxiv.org/abs/2006.00945. arXiv: 2006.00945.\nPeter Howitt. What have central bankers learned from modern macroeconomic theory? Journal of Macroeconomics, 34(1):11–22, March 2012. ISSN 0164-0704. doi: 10.1016/j. jmacro.2011.08.005. URL http://www.sciencedirect.com/science/article/ pii/S0164070411000619.\nStephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, and Konstantinos Bousmalis. Sim-to-real via sim-to-sim: Dataefficient robotic grasping via randomized-to-canonical adaptation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12627–12637, 2019.\nKun Ho Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, and Stefano Ermon. Cross Domain Imitation Learning. arXiv:1910.00105 [cs, stat], September 2019. URL http://arxiv.org/ abs/1910.00105. arXiv: 1910.00105.\nAlan P. Kirman. Whom or What Does the Representative Individual Represent? Journal of Economic Perspectives, 6(2):117–136, June 1992. ISSN 0895-3309. doi: 10.1257/jep.6.2.117. URL https://www.aeaweb.org/articles?id=10.1257/jep.6.2.117.\nShihui Li, Yi Wu, Xinyue Cui, Honghua Dong, Fei Fang, and Stuart Russell. Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient. Proceedings of the AAAI Conference on Artificial Intelligence, 33:4213–4220, July 2019. doi: 10.1609/aaai.v33i01. 33014213.\nJun Morimoto and Kenji Doya. Robust Reinforcement Learning. In T. K. Leen, T. G. Dietterich, and V. Tresp (eds.), Advances in Neural Information Processing Systems 13, pp. 1061–1067. MIT Press, 2001. URL http://papers.nips.cc/paper/ 1841-robust-reinforcement-learning.pdf.\nRoger B. Myerson. Mechanism Design, pp. 1–13. Palgrave Macmillan UK, London, 2016. ISBN 978-1-349-95121-5. doi: 10.1057/978-1-349-95121-5 2675-1. URL https://doi.org/ 10.1057/978-1-349-95121-5_2675-1.\nOfir Nachum, Michael Ahn, Hugo Ponte, Shixiang Gu, and Vikash Kumar. Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real. arXiv:1908.05224 [cs], October 2019. URL http://arxiv.org/abs/1908.05224. arXiv: 1908.05224.\nAlex Nichol, Joshua Achiam, and John Schulman. On First-Order Meta-Learning Algorithms. arXiv:1803.02999 [cs], October 2018. URL http://arxiv.org/abs/1803.02999. 
arXiv: 1803.02999.\nSantiago Paternain, Luiz F. O. Chamon, Miguel Calvo-Fullana, and Alejandro Ribeiro. Constrained Reinforcement Learning Has Zero Duality Gap. arXiv:1910.13393 [cs, math, stat], October 2019. URL http://arxiv.org/abs/1910.13393. arXiv: 1910.13393.\nXue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3803–3810, May 2018. doi: 10.1109/ICRA.2018.8460528. URL http://arxiv.org/abs/1710.06537. arXiv: 1710.06537.\nWolfgang Pesendorfer. Behavioral economics comes of age: A review essay on advances in behavioral economics. Journal of Economic Literature, 44(3):712–721, 2006.\nLerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust Adversarial Reinforcement Learning. arXiv:1703.02702 [cs], March 2017. URL http://arxiv.org/abs/ 1703.02702. arXiv: 1703.02702.\nLillian J. Ratliff, Samuel A. Burden, and S. Shankar Sastry. On the Characterization of Local Nash Equilibria in Continuous Games. arXiv:1411.2168 [math], November 2014. URL http: //arxiv.org/abs/1411.2168. arXiv: 1411.2168.\nAndrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-Real Robot Learning from Pixels with Progressive Nets. October 2016. URL https: //arxiv.org/abs/1610.04286v2.\nJohn Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust Region Policy Optimization. arXiv:1502.05477 [cs], April 2017. URL http://arxiv.org/abs/ 1502.05477. arXiv: 1502.05477.\nFlorian Schäfer and Anima Anandkumar. Competitive Gradient Descent. arXiv:1905.12103 [cs, math], June 2020. URL http://arxiv.org/abs/1905.12103. arXiv: 1905.12103.\nHerbert A. Simon. From substantive to procedural rationality. In T. J. Kastelein, S. K. Kuipers, W. A. Nijenhuis, and G. R. Wagenaar (eds.), 25 Years of Economic Theory: Retrospect and prospect, pp. 65–86. Springer US, Boston, MA, 1976. ISBN 978-1-4613-4367-7. doi: 10.1007/ 978-1-4613-4367-7 6. URL https://doi.org/10.1007/978-1-4613-4367-7_6.\nHerbert A. Simon and Jonathan Schaeffer. The Game of Chess. Technical report, CARNEGIEMELLON UNIV PITTSBURGH PA ARTIFICIAL INTELLIGENCE AND PSYCHOLOGY PROJECT, December 1990. URL https://apps.dtic.mil/sti/citations/ ADA225613. Section: Technical Reports.\nYoung-Ho Suh, Sung-Pil Woo, Hyunhak Kim, and Dong-Hwan Park. A sim2real framework enabling decentralized agents to execute MADDPG tasks. In Proceedings of the Workshop on Distributed Infrastructures for Deep Learning, DIDL ’19, pp. 1–6, Davis, CA, USA, December 2019. Association for Computing Machinery. ISBN 978-1-4503-7037-0. doi: 10.1145/3366622. 3368146. URL https://doi.org/10.1145/3366622.3368146.\nRichard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.\nChen Tessler, Yonathan Efroni, and Shie Mannor. Action Robust Reinforcement Learning and Applications in Continuous Control. arXiv:1901.09184 [cs, stat], May 2019. URL http:// arxiv.org/abs/1901.09184. arXiv: 1901.09184.\nJosh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World. arXiv:1703.06907 [cs], March 2017. URL http://arxiv.org/abs/1703.06907. arXiv: 1703.06907.\nEric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. 
Adapting Deep Visuomotor Representations with Weak Pairwise Constraints. November 2015. URL https://arxiv.org/abs/1511.07111v5.\nStephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, and Richard Socher. The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies. arXiv:2004.13332 [cs, econ, q-fin, stat], April 2020. URL http://arxiv. org/abs/2004.13332. arXiv: 2004.13332." } ]
2,020
null
SP:a4900e2a8fbd39245400e377869f8c5350ce12fd
[ "This paper applies state-of-the-art transformer-based neural networks to layout representation learning of slides. The most notable contribution of this paper is the construction of large-scale parsed slide layout dataset. This paper proposes to pre-train the network on this large-scale dataset without masked reconstruction strategy and verifies it with several subtasks including element role labeling, image captioning, auto-completion and layout retrieval, with a comparison to a decision-tree based method as baseline. " ]
Layout representation, which models visual elements in a canvas and their interrelations, plays a crucial role in graphic design intelligence. With a large variety of layout designs and the unique characteristic of layouts that visual elements are defined as a list of categorical (e.g. shape type) and numerical (e.g. position and size) properties, it is challenging to learn a general and compact representation with limited data. Inspired by the recent success of self-supervised pre-training techniques in various natural language processing tasks, in this paper, we propose CanvasEmb (Canvas Embedding), which pre-trains deep representation from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas, with a multi-dimensional feature encoder and a multi-task learning objective. The pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small size of training data to create models for a wide range of downstream tasks. We verify our approach with presentation slides data. We construct a large-scale dataset with more than one million slides, and propose two novel layout understanding tasks with human labeling sets, namely element role labeling and image captioning. Evaluation results on these two tasks show that our model with fine-tuning achieves state-of-the-art performances. Furthermore, we conduct a deep analysis aiming to understand the modeling mechanism of CanvasEmb, and demonstrate its great potential use on more applications such as layout auto completion and layout retrieval.
[]
[ { "authors": [ "Joost Beusekom", "Daniel Keysers", "Faisal Shafait", "Thomas Breuel" ], "title": "Distance measures for layoutbased document image retrieval", "venue": "In Proceedings of the Second International Conference on Document Image Analysis for Libraries,", "year": 2006 }, { "authors": [ "Niranjan Damera-Venkata", "José Bento", "Eamonn O’Brien-Strain" ], "title": "Probabilistic document model for automated document composition", "venue": "In Proceedings of the 11th ACM Symposium on Document Engineering,", "year": 2011 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Nathan Hurst", "Wilmot Li", "Kim Marriott" ], "title": "Review of automatic document formatting", "venue": "In Proceedings of the 9th ACM Symposium on Document Engineering,", "year": 2009 }, { "authors": [ "Guolin Ke", "Qi Meng", "Thomas Finely", "Taifeng Wang", "Wei Chen", "Weidong Ma", "Qiwei Ye", "TieYan Liu" ], "title": "Lightgbm: A highly efficient gradient boosting decision tree", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Hsin-Ying Lee", "Weilong Yang", "Lu Jiang", "Madison Le", "Irfan Essa", "Haifeng Gong", "Ming-Hsuan Yang" ], "title": "Neural design network: Graphic layout generation with constraints", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Gen Li", "Nan Duan", "Yuejian Fang", "Ming Gong", "Daxin Jiang" ], "title": "Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Jianan Li", "Jimei Yang", "Aaron Hertzmann", "Jianming Zhang", "Tingfa Xu" ], "title": "Layoutgan: Generating graphic layouts with wireframe discriminators, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yang Li", "J. Amelot", "X. Zhou", "S. Bengio", "S. Si" ], "title": "Auto completion of user interface layout design using transformer-based tree decoders", "venue": "ArXiv, abs/2001.05308,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollar" ], "title": "Focal loss for dense object detection", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Peter O’Donovan", "Aseem Agarwala", "Aaron Hertzmann" ], "title": "Learning Layouts for Single-Page Graphic Designs", "venue": "IEEE Transactions on Visualization and Computer Graphics,", "year": 2014 }, { "authors": [ "Xufang Pang", "Ying Cao", "Rynson W.H. Lau", "Antoni B. Chan" ], "title": "Directing user attention via visual flow on web designs", "venue": "ACM Trans. 
Graph.,", "year": 2016 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": null, "year": 2012 }, { "authors": [ "Matthew E. Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proc. of NAACL,", "year": 2018 }, { "authors": [ "A. Radford" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "A. Stoffel", "David Spretke", "Henrik Kinnemann", "D. Keim" ], "title": "Enhancing document structure analysis using visual analytics", "venue": "In Proceedings of the 2010 ACM Symposium on Applied Computing,", "year": 2010 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "S. Tabata", "Hiroki Yoshihara", "H. Maeda", "Kei Yokoyama" ], "title": "Automatic layout generation for graphical design magazines", "venue": "ACM SIGGRAPH 2019 Posters,", "year": 2019 }, { "authors": [ "Kashyap Todi", "Daryl Weir", "Antti Oulasvirta" ], "title": "Sketchplore: Sketch and explore with a layout optimiser", "venue": "In DIS 2016 - Proceedings of the 2016 ACM Conference on Designing Interactive Systems: Fuse,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need, 2017", "venue": null, "year": 2017 }, { "authors": [ "Wenhui Wang", "Nan Yang", "Furu Wei", "Baobao Chang", "Ming Zhou" ], "title": "Gated self-matching networks for reading comprehension and question answering", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Zhiyong Wu", "Yun Chen", "Ben Kao", "Qun Liu" ], "title": "Perturbed masking: Parameter-free probing for analyzing and interpreting bert", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Ying Cao Xinru Zheng", "Xiaotian Qiao", "Rynson W.H. Lau" ], "title": "Content-aware generative modeling of graphic design layouts", "venue": "ACM Transactions on Graphics (Proc. of SIGGRAPH", "year": 2019 }, { "authors": [ "Yiheng Xu", "Minghao Li", "Lei Cui", "Shaohan Huang", "Furu Wei", "Ming Zhou" ], "title": "Layoutlm: Pretraining of text and layout for document image understanding, 2019", "venue": null, "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime G. Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 } ]
[ { "heading": null, "text": "Layout representation, which models visual elements in a canvas and their interrelations, plays a crucial role in graphic design intelligence. With a large variety of layout designs and the unique characteristic of layouts that visual elements are defined as a list of categorical (e.g. shape type) and numerical (e.g. position and size) properties, it is challenging to learn a general and compact representation with limited data. Inspired by the recent success of self-supervised pre-training techniques in various natural language processing tasks, in this paper, we propose CanvasEmb (Canvas Embedding), which pre-trains deep representation from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas, with a multi-dimensional feature encoder and a multi-task learning objective. The pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small size of training data to create models for a wide range of downstream tasks. We verify our approach with presentation slides data. We construct a large-scale dataset with more than one million slides, and propose two novel layout understanding tasks with human labeling sets, namely element role labeling and image captioning. Evaluation results on these two tasks show that our model with fine-tuning achieves state-of-the-art performances. Furthermore, we conduct a deep analysis aiming to understand the modeling mechanism of CanvasEmb, and demonstrate its great potential use on more applications such as layout auto completion and layout retrieval." }, { "heading": "1 INTRODUCTION", "text": "Graphic design leverages layout to set up and arrange visual elements in a canvas for conveying message in different types of documents, while layout representation is the reversed process to understand visual elements and their inter-relations in a canvas, which is the key for the analysis (Stoffel et al., 2010), retrieval (Beusekom et al., 2006) and generation (Li et al., 2020b; Lee et al., 2020) of graphic designs. However, elements in a layout are complex, which are defined with multi-dimensional properties such as type (e.g., text box, image or button), position and color. For example, the web page and presentation slide shown in Figure 1 is defined by a lot of settings, as each example is constructed by several elements and each element is defined by several proprieties. Due to the complex and sparse features of elements, as well as the rich diversity of layouts, learning a general and compact layout representation is challenging with limited amount of data.\nPrevious works related to layout representations (Li et al., 2019; Tabata et al., 2019; Lee et al., 2020) are mostly task-oriented. They simplify the layout only as the positions of elements, and directly optimize task-specific labels with less than a few thousands instances. Recently a majority of self-supervised pre-trained models such as ELMO (Peters et al., 2018), GPT (Radford, 2018) and BERT (Devlin et al., 2019) have shown promising results in improving a variety of natural language processing (NLP) tasks. The success of pre-trained models in NLP has inspired us to learn contextual layout representations from large-scale unlabeled graphic designs, which can facilitate various downstream tasks for design intelligence. As one highly related work, LayoutLM (Xu et al., 2019) is a document pre-trained model incorporating both text content and layout information for scanned documents. 
However, it is difficult to generalize to other document types, since its input is word-level and it defines layout only as the word position, which is insufficient to describe a layout in graphic design.\nIn this paper, we present CanvasEmb, a large-scale pre-trained model for learning contextual layout representations. It is designed to pre-train deep representations from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas, and the pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small amount of training data to create models for a wide range of downstream tasks. Specifically, we define a generic and high-coverage vocabulary to describe element properties in the canvas. A feature encoder is designed to jointly incorporate multi-dimensional properties, and it is combined with a multi-layer Transformer (Devlin et al., 2019) for modeling element contexts. To ensure the representation conditions on all dimensions of element contexts, we adopt the masked language modeling strategy with a multi-task objective, where we randomly mask some properties of elements for prediction during pre-training.\nTo verify our approach, we construct a large-scale dataset with more than one million presentation slides containing rich layout meta-information for pre-training. We then propose two novel downstream tasks for layout understanding with human labeling sets to evaluate the performance of our pre-trained CanvasEmb model. The first task is element role labeling: given only the layout information, the goal is to classify the semantic role of each element (e.g., title, subtitle). The second task is image captioning, which detects whether a text box and an image in a slide belong to an image captioning relation. Experimental results on the two tasks show that fine-tuning the CanvasEmb model achieves state-of-the-art performance. Furthermore, we conduct a deep analysis to understand the modeling mechanism of CanvasEmb. Also, we demonstrate the potential of our pre-trained CanvasEmb with two extended applications, including layout auto completion (Li et al., 2020b) and layout retrieval.\nThe contributions of this work are as follows:\n• We propose CanvasEmb, which to the best of our knowledge is the first pre-trained model for layouts in graphic design. It can be fine-tuned with a small amount of training data for a wide range of downstream tasks.\n• We construct a large-scale dataset of presentation slides with rich layout information, as well as two novel tasks for layout understanding (i.e., element role labeling and image captioning) with human labeling sets.\n• We demonstrate that our model achieves state-of-the-art performance on the two downstream tasks, and show the potential for more applications such as layout auto-completion and layout retrieval." }, { "heading": "2 RELATED WORK", "text": "Layout representation is the focal point of design in rich media, including presentation slides, magazines, comics, posters and web pages. High-quality representations can be conducive to multiple practical design tasks. Early works on design layout or document layout mainly rely on templates (Hurst et al., 2009; Damera-Venkata et al., 2011) or heuristic rules (O’Donovan et al., 2014; Tabata et al., 2019) and require professional knowledge and manual effort. To efficiently facilitate the problem-solving aspects of sketching in graphic design, Todi et al. 
(2016) propose an interactive layout design tool which uses a real-time layout optimiser without requiring extensive input. However, these methods are restricted and usually fail to model the rich varieties of media information. Xinru Zheng & Lau (2019) make use of content information to model graphic design layouts in a purely data-driven scenario, adapting to the contents to be laid out.\nRecently, there is a trend toward adopting neural networks and deep learning methods to automate layout design more efficiently. For example, to be more user-friendly, Pang et al. (2016) adopt attention mechanisms to trace the user’s attention, and Lee et al. (2020) improve the conventional GAN-based methods (Li et al., 2019; Xinru Zheng & Lau, 2019) to explicitly model relationships among components and user-specified constraints. BERT4Rec (Sun et al., 2019) employs deep bidirectional self-attention to model user behavior sequences. And Li et al. (2020b) develop Transformer-based tree decoders for the task of auto completion of user interface layout design, which can ease the effort of UI designers and developers.\nHowever, previous works typically deal with limited kinds of design elements and fail to give general and scalable solutions to layout representation learning. Inspired by the significant impact of large-scale pre-trained models in the area of NLP (Peters et al., 2018; Radford, 2018; Devlin et al., 2019; Yang et al., 2019) and multi-modal learning (Sun et al., 2019; Lu et al., 2019; Li et al., 2020a), our work implements the attention-based Transformer framework enhanced with pre-training to propose a data-driven and scalable method that captures contextual information for layout representation, which can be applied to downstream tasks in graphic design." }, { "heading": "3 LAYOUT IN GRAPHIC DESIGN", "text": "Layout in graphic design refers to the way in which we arrange the visual elements on a canvas. Though some settings vary across document types, there exist basic characteristics of elements that make up the content of layouts (example shown in Figure 1):\n• Type Properties. Elements can be text boxes, pictures or lines. According to their semantic roles, elements can be divided into title, subtitle, button or other placeholders.\n• Geometry Properties. Position and size indicate the elements’ placement in the layout. Besides, z-order is the ordering of overlapping two-dimensional elements, and rotation describes an element’s circular movement.\n• Color Properties. Color is one of the most straightforward visual features, including the RGBA channels and extra features such as color gradient.\n• Content-related Properties. Though user content is separate from the layout, some content-related properties (e.g., text font size and font type) can affect the layout arrangement.\nElements are complex and sparse, composed of the above properties with either categorical (e.g., shape type, color) or numerical (e.g., position, font size, word count) values. Hence, layouts are diverse and complicated to model. In the next section, we will introduce our approach for layout representation learning." }, { "heading": "4 MODELING", "text": "We present our model CanvasEmb, which takes elements with multi-dimensional properties as input and outputs representations for the layout. To train our model, we adopt a two-stage learning framework, namely pre-training and fine-tuning." 
}, { "heading": "4.1 MODEL ARCHITECTURE", "text": "We formulate the input as a sequence of visual elements {x0, x1, x2, ..., xn} in the layout, where each element xi is defined with m properties {p1i ,p2i , ...,pmi }. Here x0 is the sequence representation which is randomly initialized. Figure 2 shows the overview architecture of our model, which is similar to BERT (Devlin et al., 2019). The feature embedding encodes high dimensional properties for elements, and is concatenated with the transformer encoder to model the global context\nof elements. The output representation can be further used to make prediction of element-level and layout-level labels, as well as the relations between the elements, with an extra task-specific prediction layer. Here, we introduce the details of model components.\nFeature Embedding. For the i-th element xi = [p1i ; p2i ; ...; pmi ], the embedding ei is obtained by concatenating m property embeddings:\nei = Θ( 1i ⊕ 2i ⊕ ...⊕ mi ), (1)\nwhere ⊕ is the concatenation operator and Θ is a non-linear transform function.\nFor each channel j, the corresponding property pji contains multi-dimensional values. For example, given a 2-dim numerical property pji = [p j,1 i ; p j,2 i ] (e.g., element size with height and width), the embedding in this channel can be calculated as:\nji = ξ j(pj,1i )⊕ ξ j(pj,2i ), (2)\nwhere pj,ki represents the k-th value in p j i and ξ j is the embedding function.\nThere are two types of embedding functions. For properties with categorical values such as type and color, we use the embedding matrix as the learning parameter. For properties with numerical values such as position and size, the positional encoding (Vaswani et al., 2017) is adopted:\nPE(p j,k i , 2h) = sin(p j,k i /10000 2h/dj,k) (3)\nPE(p j,k i , 2h+ 1) = cos(p j,k i /10000 2h/dj,k) (4)\nwhere dj,k means the embedding dimension assigned to pj,ki .\nTransformer Encoder. On top of the feature embeddings, we use a transformer encoder (Vaswani et al., 2017) to encode the element contexts. Similar to BERT (Devlin et al., 2019), the multi-layer transformer with the multi-head self-attention mechanism enables to capture correlations between different elements and property fields. Finally, we can get the low-dimensional representations {h(L)0 ; h (L) 1 ; ...,h (L) n } for all elements from the last, i.e. the L-th encoding layer." }, { "heading": "4.2 PRE-TRAINING CANVASEMB", "text": "We adopt the masked language modeling (MLM) as the objective of CanvasEmb pre-training, which has been proven effective in several domains (Devlin et al., 2019; Sun et al., 2019; Lu et al., 2019). To enable the model to learn correlations from different properties, we randomly select any one type of properties to mask for an element during training. CanvasEmb is trained to approximately maximize the pseudo likelihood Ex∼D ∑n i=1 ∑m j=1 P (p j i |p \\j i , x\\i; θ) (Sun et al., 2019), where D is the real data distribution, θ represents the model parameters, p\\ji represents the properties masking p j i of\nthe i-th element, and x\\i means all the elements in the input excluding for the i-th one. Particularly, we adopt cross-entropy loss and MSE loss for classification and regression tasks respectively:\nL(θ) = n∑ i=1 ∑ j∈Mc − logP (pji |p \\j i , x\\i; θ) n∑ i=1 ∑ j∈Mu ||pji − p̂ j i ||2\n(5)\nHereMc andMu represent the index sets of categorical and numerical properties, p̂ji is the model prediction." 
}, { "heading": "4.3 FINE-TUNING CANVASEMB", "text": "For different downstream tasks, we feed the same input as in the pre-training phase, and fine-tune the model with extra task-specific layers:\nElement-Level tasks are aimed to predict specific features of the elements, e.g. properties which are different from but correlated to the input ones.\nElement-to-Element tasks predict the relations between a pair of elements. In CanvasEmb, we build a query-memory attention-based mechanism for the relation prediction (Wang et al., 2017):\nR(xi, xj) = αi,j + αj,i (6)\nαi,j = expW rQh (L) i ·W rMh (L) j∑n\nj′=0 expW r Qh (L) i ·W rMh (L) j′\n(7)\nwhere W rQ and W r M represent trainable transforming weight matrix.\nSequence-Level tasks utilize the x0 representation token to make prediction." }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 DATASET AND TASK DEFINITION", "text": "Pre-training Dataset. CanvasEmb is designed to represent various types of graphic designs. In this paper, we adopt presentation slides as an example, as a huge amount of slides are publicly available and the understanding of slides would enable many applications. We first crawl presentation slides from the web to conduct a pre-training dataset with one million pages of slides1. For each element (e.g., a textbox or a table) in a page of the slide, we further extract their properties as defined in Section 3.\nElement Role Labeling. This task is to detect the semantic role of each element on a canvas, which is one of the most important problems in Robotic Process Automation. As shown in Figure 3(a), each element is assigned with one of the following labels: title, subtitle, footer, decorator, heading, caption, and other. We create a dataset containing 48, 309 slides and split it into 38, 819 for training and 9, 490 for testing. Each element on the slides is annotated by five labelers and we use majority voting to assign the final label.\nImage Captioning. Given a pair of an image and a text box in a slide, the goal is to detect if the pair belongs to image captioning or not. In the example shown in Figure 3(b), there are four caption pairs on the right of the slide. With similar annotation pipeline, we finally get 2, 996 slides, divided as 1, 496 for training and 1, 500 for testing." }, { "heading": "5.2 SET UP", "text": "Model and Training Parameters: Following Devlin et al. (2019), we testify the effect of model size by comparing two model sizes: CanvasEmb (Base) (L=3, H=64) and CanvasEmb (Large) (L=6,\n1We plan to open source the script for crawling slides.\nH=128), where L and H denote the number of encoder layers and the hidden size, respectively. And the sequence length during training is 16 for both models. During pre-training, we randomly select one type of property to mask for each sample by replacing it with [MASK] (90% of the time) or random values (10% of the time). For multi-task learning objectives, the loss of each prediction task is weighted according to the current performance on the validation set, where the weight for task p ∈ Mc is log max( 1accuracyp+ , 1) and for task p ∈ Mu is log min( squared errorp 10 , 100). For fine-tuning, we adopt the α-balanced focal loss (Lin et al., 2017) to address the label unbalance. The trade-off parameters λ in focal loss is 2. All models are trained using Adam (Kingma & Ba, 2015) with mini-batch size 64. The learning rate is initially set to 10−5 and adaptive learning rate decay applied. 
We also adopt early stopping and use gradient clipping (Pascanu et al., 2012).\nBaselines: Since there are few strong neural methods applicable to graphic layout design, we compare our proposed model CanvasEmb against a traditional method, i.e. a Decision Tree implemented as GBDT (Ke et al., 2017). The input is the same as for CanvasEmb and we use the same focal loss settings." }, { "heading": "5.3 RESULT ANALYSIS", "text": "Element Role Labeling. We report the F1 score for each role and the overall macro/micro F1 metrics in Table 1. On the one hand, all CanvasEmb methods outperform the traditional Decision Tree model by a large margin on most of the metrics. On the other hand, CanvasEmb (Large) achieves better results than CanvasEmb (Base), and pre-training further boosts the performance.\nImage Captioning. We report F1 and AUC scores in Table 2. Similarly, CanvasEmb (Large) with pre-training achieves the best results. Since this task focuses on the relation between elements, we also train an enhanced baseline (+2D features) where we explicitly incorporate additional handcrafted 2-dimensional features, such as the distance between two elements. We observe that CanvasEmb (Large) outperforms the enhanced baseline by a large margin. This indicates the ability of CanvasEmb to capture the relations between elements, which is much more efficient than the heuristic way of manual feature engineering.\nTable 2: Results for image captioning (F1 / AUC).\n100% labeled data:\nDecision Tree 84.30 / 98.32\n+ 2D features 84.50 / 98.21\nCanvasEmb (Base) 88.99 / 98.23\n- w/o pretrained 89.28 / 98.33\nCanvasEmb (Large) 90.95 / 98.51\n- w/o pretrained 89.68 / 98.32\n30% labeled data:\nDecision Tree 76.39 / 96.42\n+ 2D features 81.34 / 97.36\nCanvasEmb (Base) 78.37 / 94.26\n- w/o pretrained 75.69 / 93.69\nCanvasEmb (Large) 80.64 / 95.27\n- w/o pretrained 79.43 / 94.85\nTable 3: Results for auto completion. We show accuracy for Shape Type (ST.) and Fill Color (FC.), IoU for Position (P.).\nModel ST. FC. P.\nCanvasEmb (Base) 87.38 69.49 44.01\nCanvasEmb (Large) 89.37 75.63 46.40\nFigure 4: The element-to-element relations. The first row shows the original slides with element indexes, and the second row shows the corresponding heatmaps whose X- and Y-axes are the element indexes. (a) Pre-Trained Model (b) Element-Role-Labeling Model (c) Image-Captioning Model." }, { "heading": "5.4 EFFECT OF PRE-TRAINING", "text": "Correlation between pre-training and model size. In general, CanvasEmb models with pre-training outperform those without pre-training on both tasks. Moreover, the increased model size of the large model (L=6, H=128) further boosts the performance compared to the base model (L=3, H=64). For example, in the task of element role labeling, pre-training brings a 5.52% macro F1 gain for CanvasEmb (Large) but only a 0.46% gain for CanvasEmb (Base). This is consistent with our intuition, i.e. a larger model can capture and embody more knowledge from the pre-training.\nCorrelation between pre-training and labeled fine-tuning data size. We also conduct an ablation study with only 30% of the labeled training data for both downstream tasks, as shown in Tables 1 and 2 respectively. We observe that pre-training contributes more when the labeled data size is small. In the element role labeling task, there is a Δ = 5.47% macro F1 gain from pre-training for CanvasEmb (Base) under the setting of 30% training data, compared to only a 0.46% gain using the full data. 
Similarly, in the task of image captioning, there is a Δ = 2.68% F1 increase for CanvasEmb (Base) from pre-training when using 30% of the data, while pre-training brings only a marginal gain for the base model when using the full data. This shows that our pre-trained models are robust and well aligned with the downstream tasks, which alleviates the data scarcity issue." }, { "heading": "5.5 PROBING INTO SHAPE CONTEXT", "text": "To investigate how CanvasEmb captures element-to-element relations, we use the impact function (Wu et al., 2020) f_{impact}(x_i, x_j) = dist(h_i(X \setminus \{x_i\}), h_i(X \setminus \{x_i, x_j\})) to show the importance of the elements by calculating the distance between the hidden states of the layout when removing elements \{x_i\} and \{x_i, x_j\}. The impact matrix heatmaps are displayed in Figure 4.\n• Pre-Training. From Figure 4(a), we observe that elements in close regions tend to have stronger influences on each other (shown by darker-colored pixels). For example, elements 2 and 4 on the bottom right of the slide have significantly strong impacts on each other, but weaker impacts on element 3 on the left. This indicates that the pre-trained model can typically capture location information in the layout representation.\n• Element Role Labeling. Figure 4(b) visualizes the model fine-tuned on the task of element role labeling. We can see that elements 1, 3, 5 generally have the most significant impacts on all other elements in the slide. Specifically, elements 3 and 5, which are symmetric and have the same semantic roles, strongly affect each other. This shows that the model learns representations mainly based on the content-related features and locations of elements in the slide.\n• Image Captioning. For the image captioning task in Figure 4(c), the fine-tuned model pays more attention to images and the corresponding caption-like elements. Element pairs {1, 3} and {2, 4} have very strong impacts on each other, while element 0 has the weakest influence in the slide. This is an interesting observation of how caption pairs in a slide interact with each other, as the model in this case clearly separates element 0 from the other elements in captioning relations." }, { "heading": "5.6 EXTENDED APPLICATIONS OF PRE-TRAINED CANVASEMB", "text": "To further demonstrate the knowledge embodied in our pre-trained CanvasEmb, we show two extended applications.\nLayout Auto Completion. Given a partial layout, the model needs to complete the layout by predicting the remaining elements with their properties (Li et al., 2020b). We constrain the setting to one remaining element, which is aligned with our pre-training objective. From the results shown in Table 3, we can see that our pre-trained model without any fine-tuning achieves promising results with respect to the properties of shape type, fill color and position. This shows the accurate modeling of elements and contextual representations of layouts in the pre-trained CanvasEmb.\nLayout Retrieval. Given a query layout, the scenario is to retrieve similar layouts from the database, which can be used for layout recommendation. Figure 5 shows an example retrieved by CanvasEmb (Large) with layout embedding cosine similarity. As we can see, both the query and the two candidates similarly contain four images with captions. For a quick evaluation, we construct 20 queries and manually score the model’s retrieval quality (rating similarity from 1 to 5). The average score of the top 3 candidates is 3.86 (above the midpoint of 3), which demonstrates that CanvasEmb learns effective layout representations." 
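The probing procedure of Section 5.5 can be sketched as below. `encode` is an assumed helper that returns final-layer element states with a given index set masked, in the spirit of the perturbed-masking method of Wu et al. (2020).

```python
# A sketch of computing the impact matrix f_impact(x_i, x_j); `encode` is an
# assumed helper, not part of the paper's released code.
import torch

def impact_matrix(encode, elements):
    n = len(elements)
    impact = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            h_i_no_i = encode(elements, masked={i})[i]       # h_i(X \ {x_i})
            h_i_no_ij = encode(elements, masked={i, j})[i]   # h_i(X \ {x_i, x_j})
            impact[i, j] = torch.dist(h_i_no_i, h_i_no_ij)   # f_impact(x_i, x_j)
    return impact
```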
}, { "heading": "6 CONCLUSION", "text": "In this paper, we present CanvasEmb, a large-scaled pre-trained model for layout representation in graphic design. It encodes elements in a layout with the multi-dimensional properties and models the element context with a Transformer-based architecture. To pre-train our model CanvasEmb, we adopt the masked language modeling with multi-task learning objective. In the two proposed tasks related to layout understanding, fine-tuned CanvasEmb achieves state-of-the-art performances. Furthermore, we conduct a deep analysis to prob into the embodied knowledge learned by CanvasEmb, and show its great potential for more applications in graphic design. As future work, we are going to apply our model to more downstream tasks, and verify our model on more document types such as web pages or word documents." } ]
2020
null
SP:e030bf232cd040a4c2ea834f6d803d7fcf4aa971
[ "This paper addresses the complexity of the forward pass inference in neural ODEs. The paper proposes to augment training of the neural ODE with an auxiliary neural network that dynamically selects the best numerical integrator for a given input sample. Furthermore, the paper also proposes a regularizer that uses the errors of the numerical integrator to reduce the number of function evaluations, without sacrificing accuracy. " ]
Neural ordinary differential equations (Neural ODEs) are appreciated for their ability to significantly reduce the number of parameters when constructing a neural network. On the other hand, they are sometimes blamed for their long forward-pass inference time, which is incurred by solving integral problems. To improve the model accuracy, they rely on advanced solvers, such as the Dormand–Prince (DOPRI) method. Solving an integral problem, however, requires at least tens (or sometimes thousands) of steps in many Neural ODE experiments. In this work, we propose to i) directly regularize the step size of DOPRI to make the forward-pass faster and ii) dynamically choose a simpler integrator than DOPRI for a carefully selected subset of inputs. Because it is not the case that every input requires the advanced integrator, we design an auxiliary neural network to choose an appropriate integrator given the input, to decrease the overall inference time without significantly sacrificing accuracy. We consider the Euler method, the fourth-order Runge–Kutta (RK4) method, and DOPRI as selection candidates. We found that 10-30% of cases can be solved with simple integrators in our experiments. Therefore, the overall number of function evaluations (NFE) decreases by up to 78% with improved accuracy.
[]
[ { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In NeurIPS", "year": 2018 }, { "authors": [ "Marco Ciccone", "Marco Gallieri", "Jonathan Masci", "Christian Osendorfer", "Faustino Gomez" ], "title": "Naisnet: Stable deep networks from non-autonomous differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Talgat Daulbaev", "Alexandr Katrutsa", "Larisa Markeeva", "Julia Gusak", "Andrzej Cichocki", "Ivan Oseledets" ], "title": "Interpolated Adjoint Method for Neural ODEs", "venue": null, "year": 2003 }, { "authors": [ "J.R. Dormand", "P.J. Prince" ], "title": "A family of embedded runge-kutta formulae", "venue": "Journal of Computational and Applied Mathematics,", "year": 1980 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": null, "year": 2019 }, { "authors": [ "Chris Finlay", "Jörn-Henrik Jacobsen", "Levon Nurbekyan", "Adam M Oberman" ], "title": "How to train your neural ode: the world of jacobian and kinetic regularization", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Amir Gholami", "Kurt Keutzer", "George Biros" ], "title": "Anode: Unconditionally accurate memory-efficient gradients for neural odes", "venue": "arXiv preprint arXiv:1902.10298,", "year": 2019 }, { "authors": [ "Will Grathwohl", "Ricky T.Q. Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "E. Hairer", "S.P. Nørsett", "G. Wanner" ], "title": "Solving Ordinary Differential Equations I (2nd Revised", "venue": "Ed.): Nonstiff Problems. Springer-Verlag,", "year": 1993 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Liviu Gr. Ixaru", "Guido Vanden Berghe" ], "title": "Runge-Kutta Solvers for Ordinary Differential Equations, pp. 223–304", "venue": null, "year": 2004 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Y. Ng" ], "title": "Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance", "venue": "In ICML,", "year": 2004 }, { "authors": [ "Hans Pinckaers", "Geert Litjens" ], "title": "Neural Ordinary Differential Equations for Semantic Segmentation of Individual Colon Glands", "venue": null, "year": 1910 }, { "authors": [ "Hans Pinckaers", "Geert Litjens" ], "title": "Neural ordinary differential equations for semantic segmentation of individual colon glands, 2019", "venue": null, "year": 2019 }, { "authors": [ "Alessio Quaglino", "Marco Gallieri", "Jonathan Masci", "Jan" ], "title": "Koutnı́k. Snode: Spectral discretization of neural odes for system identification", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2015 }, { "authors": [ "Yulia Rubanova", "Ricky T.Q. 
Chen", "David K Duvenaud" ], "title": "Latent ordinary differential equations for irregularly-sampled time series", "venue": "In NeurIPS", "year": 2019 }, { "authors": [ "Lars Ruthotto", "Eldad Haber" ], "title": "Deep neural networks motivated by partial differential equations", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2019 }, { "authors": [ "Ikaro Silva", "George Moody", "Daniel J Scott", "Leo A Celi", "Roger G Mark" ], "title": "Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology", "venue": "Comput Cardiol,", "year": 2012 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Cagatay Yildiz", "Markus Heinonen", "Harri" ], "title": "Lahdesmaki. Ode2vae: Deep generative second order odes with bayesian neural networks", "venue": "In NeurIPS", "year": 2019 }, { "authors": [ "Juntang Zhuang", "Nicha Dvornek", "Xiaoxiao Li", "James S. Duncan" ], "title": "Ordinary differential equations on graph networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Juntang Zhuang", "Nicha Dvornek", "Xiaoxiao Li", "Sekhar Tatikonda", "Xenophon Papademetris", "James Duncan" ], "title": "Adaptive checkpoint adjoint method for gradient estimation in neural ode", "venue": "In ICML,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural ordinary differential equations (Neural ODEs) are to learn time-dependent physical dynamics describing continuous residual networks (Chen et al., 2018). It is well known that residual connections are numerically similar to the explicit Euler method, the simplest integrator to solve ODEs. In this regard, Neural ODEs are considered as a generalization of residual networks. In general, it is agreed by many researchers that Neural ODEs have two advantages and one disadvantage: i) Neural ODEs can sometimes reduce the required number of neural network parameters, e.g., (Pinckaers & Litjens, 2019), ii) Neural ODEs can interpret the neural network layer (or time) as a continuous variable and a hidden vector at an arbitrary layer can be calculated, iii) however, Neural ODEs’s forward-pass inference can sometimes be numerically unstable (i.e., the underflow error of DOPRI’s adaptive step size) and/or slow to solve an integral problem (i.e., too many steps in DOPRI) (Zhuang et al., 2020b; Finlay et al., 2020; Daulbaev et al., 2020; Quaglino et al., 2020).\nMuch work has been actively devoted to address the numerically unstable nature of solving integral problems. In this work, however, we are interested in addressing the problem of long forward-pass inference time. To overcome the challenge, we i) directly regularize the numerical errors of the Dormand–Prince (DOPRI) method (Dormand & Prince, 1980), which means we try to learn an ODE that can be quickly solved by DOPRI, and ii) dynamically select an appropriate integrator for each sample rather than relying on only one integrator. In many cases, Neural ODEs use DOPRI, one of the most advanced adaptive step integrator, for its best accuracy. However, our method allows that we rely on simpler integrators, such as the Euler method or the\nfourth-order Runge–Kutta (RK4) method (Ixaru & Vanden Berghe, 2004), for carefully selected inputs.\nTable 1 shows an experimental result that our proposed regularization not only reduces the number of function evaluations (NFE) — the inference time is linearly proportional to the number of function evaluations in Neural ODEs — but also increases the inference accuracy in the MNIST classification task. We can reduce the inference time by reducing the average number of steps (and thus, the average NFE) of DOPRI, which can be obtained when the learned ODE is trained to be in a suitable form to solve with DOPRI with a proper regularization.\nHowever, the NFE of DOPRI in a step is 6 whereas RK4 has 4 and the Euler method has 1. So, the Euler method is six times faster than DOPRI even when their step sizes are identical. Therefore, the automatic step size adjustment of DOPRI is not enough to minimize the NFE of forward-pass inference (see Section B in Appendix for more detailed descriptions with a concrete example). To this end, we design an auxiliary network that chooses an appropriate integrator for each sample. The combination of our regularization and the proposed Dynamic Integrator SElection (DISE) shows the best performance in the table.\nWe conduct experiments for three different tasks and datasets: MNIST image classification, PhysioNet mortality prediction, and continuous normalizing flows. Our method shows the best (or close to the best) accuracy with a much smaller NFE than state-of-the-art methods. Our contributions can be summarized as follows:\n1. We design an effective regularization to reduce the number of function evaluations (NFE) of Neural ODEs. 2. 
We design a sample-wise dynamic integrator selection (DISE) method to further accelerate Neural ODEs without significantly sacrificing model accuracy. 3. We conduct in-depth analyses on three popular tasks of Neural ODEs." }, { "heading": "2 RELATED WORK", "text": "In this section, we review the literature on Neural ODEs. In particular, we review recent regularization designs for Neural ODEs and numerical methods to solve ODEs." }, { "heading": "2.1 NEURAL ODES", "text": "Several researchers have attempted to model neural networks as differential equations (Weinan, 2017; Ruthotto & Haber, 2019; Lu et al., 2018; Ciccone et al., 2018; Chen et al., 2018; Gholami et al., 2019). Among them, the seminal neural ordinary differential equations (Neural ODEs), as shown in Fig. 1, consist of three parts in general: a feature extractor, an ODE, and a classifier (Chen et al., 2018; Zhuang et al., 2020a). Given an input x, the feature extractor produces an input to the ODE, denoted h(0).\nLet h(t) be a hidden vector at layer (or time) t in the ODE part. In Neural ODEs, a neural network f with a set of parameters, denoted \theta, approximates \partial h(t) / \partial t, and h(t_1) becomes h(0) + \int_{t_0}^{t_1} f(h(t), t; \theta) \, dt, where f(h(t), t; \theta) = \partial h(t) / \partial t. In other words, the internal dynamics of the hidden vector evolution is described by an ODE. One key advantage of Neural ODEs is that we can reduce the number of parameters without sacrificing model accuracy. For instance, one recent work based on a Neural ODE marked the best accuracy for medical image segmentation with an order of magnitude fewer parameters (Pinckaers & Litjens, 2019). In general, we calculate h(1)¹ and feed it into the next classifier, and the final prediction is made. One can accordingly modify the architecture in Fig. 1 for other types of tasks. For simplicity but without loss of generality, we assume this architecture in our discussion.\nNeural ODEs have been used in many tasks, ranging from classification and regression to time series forecasting and generative models (Yildiz et al., 2019; Grathwohl et al., 2019; Rubanova et al., 2019)." }, { "heading": "2.2 ODE SOLVERS", "text": "DOPRI is one of the most powerful integrators (Hairer et al., 1993) and is widely used in Neural ODEs. It is a member of the Runge–Kutta family of ODE solvers. DOPRI dynamically controls the step size while solving an integral problem. It is now the default method for MATLAB, GNU Octave, and Simulink. It internally estimates an error using a heuristic method, and the step size is determined by a function inversely proportional to the estimated error — the larger the error, the shorter the step size. The error at the i-th step of DOPRI for an integral problem x, denoted err_{x,i}, is estimated by the difference between the fourth-order and the fifth-order Runge–Kutta methods at that moment. The intuition behind this heuristic error estimation is simple yet effective. Among simpler methods, we consider the Euler method and the fourth-order Runge–Kutta (RK4) method. The Euler method is the simplest method to solve ODEs, and both the Euler method and RK4 use a fixed step size. Therefore, their solving time is deterministic.\nOne step of DOPRI involves six function evaluations, i.e., six function calls of f. The Euler method calls the network f only once per step and RK4 calls it four times. Therefore, the Euler method is six times faster than DOPRI for a step. The term ‘NFE’ refers to the number of function evaluations to solve an integral problem. 
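As an illustration of these per-step costs (a textbook sketch, not the paper's code), a fixed-step Euler step evaluates f once and an RK4 step evaluates it four times, versus six evaluations per DOPRI step:

```python
# Fixed-step solvers, annotated with their per-step function-evaluation cost.
def euler_step(f, h, t, dt):
    return h + dt * f(h, t)                      # 1 function evaluation

def rk4_step(f, h, t, dt):
    k1 = f(h, t)                                 # 4 function evaluations in total
    k2 = f(h + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(h + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(h + dt * k3, t + dt)
    return h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def solve_fixed_step(f, h0, t0=0.0, t1=1.0, n_steps=10, step=euler_step):
    h, t = h0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):                     # NFE = n_steps * (cost per step)
        h = step(f, h, t, dt)
        t += dt
    return h
```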
For the Euler method and RK4, the NFE is deterministic and does not vary. In DOPRI, however, the NFE varies from one sample to another, depending on the estimated error and the number of steps. We refer readers to Section B in Appendix for more detailed descriptions with a concrete example." }, { "heading": "2.3 REGULARIZATIONS IN NEURAL ODES", "text": "To make Neural ODEs faster, one possible way is regularizing the ODE function f. Two naïve methods are regularizing \theta with the L1 or L2 regularizers (Ng, 2004). Strictly speaking, however, these two regularizers are designed to prevent overfitting, and preventing overfitting does not necessarily mean quick forward-pass inference.\nAlong a different line, Dupont et al. showed that by augmenting h(t) with additional zeros, i.e., augmenting the dimensionality of h(t), one can achieve similar effects (Dupont et al., 2019). However, this method is meaningful when we cannot freely control the dimensionality of h(t), which is not our setting. Recently, a kinetic regularization concept has been proposed by Finlay et al. (Finlay et al., 2020), which is written as follows:\nR_k \overset{\text{def}}{=} \int_{t_0}^{t_1} \| f(h(t), t; \theta) \|_2^2 \, dt. (1)\nAmong all regularization terms designed so far, this kinetic regularization’s goal is the closest to ours. It encourages Neural ODEs to learn straight-line paths from h(t_0) to h(t_1)." }, { "heading": "3 PROPOSED METHOD", "text": "While enabling the design of compact models, Neural ODEs have one critical drawback: they require solving integral problems, for which many approximation methods have been proposed — the Euler method, RK4, and DOPRI, to name a few. Almost all of them are based on discretizing t and converting an integral into a series of additions. In many cases, therefore, a dense discretization is required, resulting in a long forward-pass inference time.\n¹For simplicity but without loss of generality, the time duration can be fixed to t ∈ [0, 1]. Any arbitrary-length ODE can be compressed into a unit-time-interval ODE. In some time series datasets, however, the final integral time t_1 is given in a sample. In such a case, t_1 is set to the sample time.\nIn this paper, we tackle the problem of minimizing the number of function evaluations (and thereby, the forward-pass inference time) of Neural ODEs without significantly sacrificing model accuracy. Our proposed method consists of two parts: i) using DOPRI’s error estimation as a regularizer and ii) using an auxiliary network to select an appropriate integrator for each input sample." }, { "heading": "3.1 DOPRI’S ERROR ESTIMATION AS A REGULARIZER", "text": "We re-implement the DOPRI method in PyTorch and make it return the estimated error terms. Let \{err_{x,1}, err_{x,2}, \ldots, err_{x,N}\}, where N is the number of steps of DOPRI, be the array of errors estimated by DOPRI while solving an integral problem for an input x. Note that the adaptive step size is an inverse function of the error at each step. We use the following regularizer while training Neural ODEs:\nR_{err} \overset{\text{def}}{=} \sum_{x \in T} \sum_{i=1}^{N} err_{x,i}, (2)\nwhere x is an input sample for which we have to solve an integral problem, and T is a training set.\nFor instance, x can be an image sample to classify. 
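Concretely, training with the R_err regularizer of Eq. (2) might look like the following minimal sketch; `dopri_solve`, its signature, and its returned list of per-step error estimates are assumptions standing in for the authors' modified PyTorch DOPRI.

```python
# A minimal sketch of the regularized training loss (Eqs. 2-3); dopri_solve
# is an assumed helper, not a real library call.
import torch
import torch.nn.functional as F

def training_loss(f, classifier, h0, labels, lam=0.01):
    # Integrate dh/dt = f(h, t) from t0=0 to t1=1 and collect err_{x,i}.
    h1, step_errors = dopri_solve(f, h0, t0=0.0, t1=1.0)
    task_loss = F.cross_entropy(classifier(h1), labels)
    r_err = torch.stack(step_errors).sum()       # Eq. (2) for this batch
    return task_loss + lam * r_err               # Eq. (3) with R = R_err
```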
If we train a Neural ODE to classify images with the cross-entropy loss in conjunction with the regularizer, the trained model will learn an ODE that can correctly classify images while reducing the forward-pass time of DOPRI.\nThe backward-pass calculation of our proposed regularizer can be done in O(1 / s_{avg}) by maintaining the forward-pass computation graph, where s_{avg} is the average step size of DOPRI. Moreover, this cost will decrease as training with our regularizer goes on, because the average step size will increase." }, { "heading": "3.2 AUXILIARY NETWORK TO SELECT INTEGRATOR", "text": "We introduce our auxiliary network v to dynamically select an appropriate integrator given an input. This network v cooperates with a Neural ODE as shown in Fig. 2. Before training the auxiliary network, we first train a target Neural ODE. We use the following loss function to train the target Neural ODE:\nL_{task} + \lambda R, (3)\nwhere L_{task} is a task-specific training loss, which can be optimized through the adjoint sensitivity method, and R is an appropriate regularization term. \lambda \geq 0 is a coefficient to emphasize the regularization term.\nWe then train the auxiliary network v(h(0); \theta_v) to predict the costs of the Euler method, RK4, and DOPRI. Given a fixed Neural ODE f and \theta, we use the same training data to collect the following cost for each integrator:\nc(h(0); f, \theta) \overset{\text{def}}{=} \begin{cases} \Delta^\alpha, & \text{if the prediction matches the ground truth in the training data,} \\ M^\alpha, & \text{otherwise,} \end{cases} (4)\nwhere \Delta > 0 is the number of function evaluations (NFE) to solve the integral problem of h(0) + \int_{t_0}^{t_1} f(h(t), t; \theta) \, dt, M is a large enough penalty, and \alpha is an exponent. We evaluate this cost value c for each integrator and train the network v. The auxiliary network predicts the costs of the three integrators simultaneously, i.e., its output dimensionality is 3.\nIf the target task is not a classification problem, we use the following cost:\nc(h(0); f, \theta) \overset{\text{def}}{=} \begin{cases} (\Delta \Gamma)^\alpha, & \text{if } \Gamma \leq \beta, \\ M^\alpha, & \text{otherwise,} \end{cases} (5)\nwhere \Gamma is an error estimate for a certain integrator, such as the mean absolute/squared error, KL divergence, and so forth, for which we prefer small values. If larger values are preferred, the above definition should be modified accordingly. \beta is a threshold hyperparameter.\nNote that training v becomes a supervised regression problem. After many epochs, the auxiliary integrator selection network v is stabilized and we can deploy it. The best integrator for h(0) is selected by finding the smallest predicted cost. We also note that we need to run the network v only once to select an appropriate integrator for a test case, which incurs only a small overhead.\nThe neural network architecture for v should be carefully designed for each application. We introduce our general method in this section. In the experimental evaluation section, we will introduce our design choice for v in each application." }, { "heading": "4 EXPERIMENTAL EVALUATIONS", "text": "We describe our detailed experimental environments and results for the following three different tasks: i) MNIST image classification, ii) PhysioNet mortality prediction, and iii) continuous normalizing flows.\nWe conduct our experiments in the following sequence. We first compare the following regularization methods: i) Neural ODEs without any regularizer, ii) with the L1 regularizer, iii) with the L2 regularizer, iv) with the kinetic energy regularizer, and v) with our proposed regularizer. At this stage, we do not include the auxiliary network yet. 
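Returning to Section 3.2 for a moment, the cost labels of Eq. (4) and the test-time selection can be sketched as follows; `solve_with`, `aux_net`, and the constants are illustrative assumptions, not the paper's exact setup.

```python
# A hedged sketch of the Eq. (4) cost labeling and the DISE inference step.
INTEGRATORS = ["euler", "rk4", "dopri"]
M, ALPHA = 1000.0, 0.3

def cost_labels(f, classifier, h0, label):
    """One 3-dimensional regression target per training sample."""
    costs = []
    for name in INTEGRATORS:
        h1, nfe = solve_with(name, f, h0)        # solution h(t1) and its NFE
        correct = int(classifier(h1).argmax(-1)) == label
        costs.append(nfe ** ALPHA if correct else M ** ALPHA)
    return costs

def select_integrator(aux_net, h0):
    """Run the auxiliary network v once and pick the cheapest integrator."""
    predicted_costs = aux_net(h0)                # 3-dimensional output
    return INTEGRATORS[int(predicted_costs.argmin())]
```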
This stage is to study which regularizer works best in each task. After selecting the best regularization method for each task, we train all networks including the auxiliary network.\nThroughout these experiments, we show that i) the forward-pass of learned ODEs can be made faster by using an appropriate regularizer, and ii) the auxiliary network strategically selects simpler integrators than DOPRI for carefully chosen input samples. For each experiment, we show NFE values and the unit NFE time in seconds; their product gives the wall-clock time. We repeat training and testing with 10 different seeds and report the mean values in the main paper and the standard deviations in Appendix. Experiments with various integrators are also in Appendix.\nAll experiments were conducted in the following software and hardware environments: UBUNTU 18.04 LTS, PYTHON 3.6.6, NUMPY 1.18.5, SCIPY 1.5, MATPLOTLIB 3.3.1, PYTORCH 1.2.0, CUDA 10.0, NVIDIA Driver 417.22, an i9 CPU, and an NVIDIA RTX TITAN." }, { "heading": "4.1 MNIST IMAGE CLASSIFICATION", "text": "We omit a description of MNIST. We use the ODE-Net of (Chen et al., 2018) to classify MNIST images. Refer to Appendix for detailed descriptions of ODE-Net and its hyperparameter configurations.\nAuxiliary Network Architecture. In addition to ODE-Net, we have one more auxiliary network v whose architecture is summarized in Table 2. We use the standard residual block (He et al., 2016) for this network and the proposed network consists of four layers. In comparison with the network f of ODE-Net, its architecture is relatively simple and it takes 0.8 NFEs (i.e., 0.0006 seconds) to run once. Recall in Table 1 that the network (function) f is evaluated 5-20 times on average to solve integral problems. Therefore, running the auxiliary network v once to decide the best integrator can be a fruitful investment.\nHyperparameters. The regularization coefficient \lambda is set to {0.01, 0.0001, 0.0001, 0.005} and the starting learning rate is 0.1 with a decay factor of {0.1, 0.01, 0.001} at {60, 100, 140} epochs. The exponent \alpha is 0.3 and M is 1,000. We use DOPRI as our default solver unless DISE is adopted. We use the recommended hyperparameters of ODE-Net in the paper or in the respective github repository, as noted in Appendix.\nExperimental Results. The results are summarized in Table 1. ODE-Net needs an NFE of 26 when trained without any regularizer. The kinetic energy-based regularizer improves both the accuracy and the NFE value. However, its NFE is comparable to those of the other standard regularizers. Our regularizer shows an accuracy close to that of the kinetic regularization in the table with an NFE of 14, which is much faster than the other cases.\nTable 3: The distribution of the integrator selection by our auxiliary network for MNIST\nIntegrator Percentage\nDOPRI 37%\nRK4 34%\nEuler 29%\nWe also applied the dynamic integrator selection (DISE) to both the no-regularizer and our-regularizer configurations. It is worth noting that the combination of our regularizer and the dynamic selection shows an extraordinary outcome in the table. Even after considering the overhead incurred by the auxiliary network, an NFE of 5.79 is very fast in comparison with the other cases. Its accuracy is also higher than that of many other baselines.\nTable 3 shows the percentage of test cases where each integrator is selected by the auxiliary network with our regularizer. DOPRI occupies the biggest portion, i.e., 37%, owing to its higher accuracy than the others. 
RK4 and the Euler method have larger numerical errors than DOPRI, and as a result, their estimated costs are generally larger than that of DOPRI (due to the large penalty M). We note that their rankings in terms of the selection percentage are the same as those in terms of the numerical error. One interesting point is that the Euler method also occupies a non-trivial portion in the table." }, { "heading": "4.2 PHYSIONET MORTALITY CLASSIFICATION", "text": "Dataset. We use the PhysioNet computing in cardiology challenge dataset released in 2012 (Silva et al., 2010). The task is to forecast mortality rates in intensive care unit (ICU) populations. The dataset was collected from 12,000 ICU stays. Short stays of less than 48 hours were removed, and up to 42 variables were recorded. Each record has a timestamp that indicates the elapsed time after admission to the ICU. Given a record, we predict whether the patient will die or not. Therefore, the task-specific loss L_{task} in Eq. 3 is a cross-entropy loss. The dataset is well separated into training, validation, and test sets.\nWe use Latent-ODE (Rubanova et al., 2019) for this task. The network (function) architecture of f in Latent-ODE is described in Appendix in conjunction with its hyperparameter configurations.\nAuxiliary Network Architecture. The auxiliary network for this dataset is shown in Table 4; it consists of four fully connected layers and an output layer. There is a dropout at the fourth layer, and we found that using a dropout at this position improves the selection quality in some cases. The time to run the auxiliary network once is comparable to 0.023 NFE (i.e., 0.0003 seconds) in our testing, which is negligible.\nHyperparameters. The coefficient \lambda is set to {0.045, 0.0002, 0.0025, 0.015} respectively, and the starting learning rate is 0.01 with a decay factor of 0.999. The exponent \alpha is 0.05. The large penalty M is 1,000. The dropout ratio is {0, 0.25}. We use DOPRI as our default solver unless DISE is adopted. We use the recommended hyperparameters for Latent-ODE in their paper or in their github repository.\nExperimental Results. Table 5 summarizes all the results in terms of the AUC score and NFE. Adding a regularizer improves the AUC score and the NFE at the same time in all cases. Among all regularizers, the best AUC score for this task is achieved with the L1 regularizer. Our regularizer shows the smallest NFE among all tested regularizers, i.e., an NFE of 74 with no regularizer vs. an NFE of 39.71 with our regularizer. When used with DISE, our regularizer marks an NFE of 34.1 and an AUC score of 0.7604, which is the best AUC-to-NFE ratio in the table.\nWe summarize the distribution of the integrator selection by DISE with our regularizer in Table 6. For a majority of cases, it chooses DOPRI due to the difficult nature of the task. One interesting point is that RK4 was not used at all, because it does not show any notable difference from the Euler method in this task. In that regard, choosing the Euler method is a sensible decision." }, { "heading": "4.3 CONTINUOUS NORMALIZING FLOWS", "text": "Dataset. Normalizing flows transform a simple distribution into a richer, more complex distribution, which can be used in many deep learning applications, such as generative models, reinforcement learning, and variational inference (Rezende & Mohamed, 2015). It is also known that Neural ODEs can generalize the concept of normalizing flows, which is called continuous normalizing flows. 
We use the experimental codes and data released in (Chen et al., 2018).\nIn this task, we perform maximum likelihood training between q(x) and p(x), where q is the distribution created by a flow model and p is a target distribution. Therefore, the task-specific loss is to maximize E_{p(x)}[\log q(x)], i.e., to minimize L_{task} = -E_{p(x)}[\log q(x)]. The change-of-variables theorem is used to measure the probability q(x). We set the initial distribution to a Gaussian distribution N(0, 1), and the flow model tries to transform it into the target distribution p. One good characteristic of continuous normalizing flows is that we can reverse the model and extract the reverse-mapping procedure from q to the Gaussian as well.\nTable 8: Continuous Normalizing Flow results (1 NFE ≈ 0.008 seconds for a test batch of 32 samples)\nModel NLP NFE\nNo reg. 0.8911 2297\nKinetic energy reg. 0.8914 2286\nL1 reg. 0.8901 2259\nL2 reg. 0.8760 2904\nOur reg. 0.8841 2166\nNo reg. & DISE 0.8883 2104\nOur reg. & DISE 0.8742 1984\nAuxiliary Network Architecture. The auxiliary network for this dataset is shown in Table 7, which consists of three fully connected layers and an output layer. The time to run the auxiliary network once is comparable to 0.018 NFE (i.e., 0.000144 seconds) in our testing, which is negligible.\nHyperparameters. The coefficient \lambda is set to {0.0005, 0.00001, 0.0003, 0.004} and the learning rate is 0.001. The exponent \alpha is 0.03. The large penalty M is 1,000. We use DOPRI as our default solver unless DISE is adopted. We use the recommended hyperparameters for the base continuous normalizing flow model in the original paper or in their github repository. The threshold \beta is set to the average negative log-probability loss of the case where we use our regularizer. The rationale behind this threshold is that we select the Euler method only when its loss value is good enough.\nExperimental Results. Table 8 shows the performance of the various methods. With no regularizer, the base Neural ODE model’s NFE is very large, i.e., an NFE of 2,297. Our regularizer decreases it to 2,166, and the combination of our regularizer and DISE further decreases it to 1,984. The L2 regularizer, however, rather increases the NFE and has a negative influence on it. In almost all cases, adding a regularizer decreases the negative log-probability, denoted NLP in the table. In this experiment, 1 NFE takes approximately 0.008 seconds.\nFigure 3 shows various transformation processes from the initial Gaussian noise to the target distribution with two circles. All show reasonable transformations with little difference.\nTable 9: The distribution of the integrator selection by our auxiliary network for Continuous Normalizing Flows\nIntegrator Percentage\nDOPRI 87.5%\nRK4 0%\nEuler 12.5%\nThe selection distribution by the auxiliary network with our regularizer is summarized in Table 9. Similar to PhysioNet, DOPRI is selected in most cases, i.e., 87.5%, and the Euler method is used from time to time, i.e., 12.5%." }, { "heading": "5 CONCLUSIONS", "text": "We tackled one critical problem of Neural ODEs: the slow forward-pass inference process. Even though DOPRI is able to dynamically adjust its step sizes, as described earlier, this alone is limited in saving forward-pass inference time. To address the problem, we suggested i) regularizing DOPRI’s estimated error, which results in a reduced NFE, and ii) dynamically selecting an appropriate integrator for each input. 
 }, { "heading": "5 CONCLUSIONS", "text": "We tackled one critical problem of Neural ODEs: the delayed process of forward-pass inference. Even though DOPRI is able to dynamically adjust its step sizes, as described earlier, there is a limit to how much forward-pass inference time it can save on its own. To address the problem, we suggested i) regularizing DOPRI's estimated error, which results in a reduced NFE, and ii) dynamically selecting an appropriate integrator for each input. We successfully showed that both the model accuracy and the forward-pass inference time can be improved at the same time in all tasks. We also showed that non-trivial percentages of test cases can be solved by the Euler method after adopting our regularizer. In particular, for MNIST our model shows a more than four times smaller NFE (i.e., more than four times faster forward-pass inference) in comparison with ODE-Net without any regularizer.

One difficulty in our work is controlling hyperparameters such as M and α. If they are ill-configured, the auxiliary selection network may choose only one integrator, pursuing either computational correctness (i.e., DOPRI) or low computational cost (i.e., the Euler method) exclusively. During our preliminary experiments, we tuned them using training data." } ]
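For intuition about how M and α interact, here is a deliberately rough sketch of a per-input integrator cost of the kind discussed in the conclusion. The exact combination rule below is an assumption made for illustration; the text only states that cheap solvers incur the large penalty M when their estimated numerical error is too high.

```python
import numpy as np

def integrator_cost(nfe, est_error, tol, M=1000.0, alpha=0.05):
    """Hypothetical supervision signal for the auxiliary selection network.

    Cheap solvers (low NFE) are attractive, but if their estimated numerical
    error exceeds the tolerance, the large penalty M dominates, steering the
    selector back toward DOPRI. This combination rule is an assumption.
    """
    penalty = M * max(est_error - tol, 0.0)
    return (nfe ** alpha) + penalty

# Euler is cheap but inaccurate on a hard input; DOPRI is costly but accurate.
print(integrator_cost(nfe=20, est_error=5e-2, tol=1e-3))   # large, due to M
print(integrator_cost(nfe=120, est_error=1e-4, tol=1e-3))  # small
```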
2,020
null
SP:7005dadb8330bdc6f7f2a066ff816bbe174ec843
[ "Detecting anomalies is a notoriously ill-defined problem. The notion of anomaly is not a rigorous concept and different algorithms produce different results. The paper critiques a broad set of methods which involve likelihood (or density) estimations. It's main idea revolves around the 'Principle' set on Page 4. The principle claims that when data capacity and computational constraints are removed, an AD algorithm should be invariant to 'reparametrization' of the input. Roughly speaking, that means the algorithm should be invariant to arbitrary 'name changing' of the input - the result should not change if each data item x is replaced by f(x) if f is invertible. " ]
Thanks to the tractability of their likelihood, some deep generative models show promise for seemingly straightforward but important applications like anomaly detection, uncertainty estimation, and active learning. However, the likelihood values empirically attributed to anomalies conflict with the expectations these proposed applications suggest. In this paper, we take a closer look at the behavior of distribution densities and show that these quantities carry less meaningful information than previously thought, beyond estimation issues or the curse of dimensionality. We conclude that the use of these likelihoods for out-of-distribution detection relies on strong and implicit hypotheses, and highlight the necessity of explicitly formulating these assumptions for reliable anomaly detection.
[]
[ { "authors": [ "Christopher M Bishop" ], "title": "Novelty detection and neural network validation", "venue": "IEE Proceedings-Vision, Image and Signal processing,", "year": 1994 }, { "authors": [ "David Blei", "Katherine Heller", "Tim Salimans", "Max Welling", "Zoubin Ghahramani" ], "title": "Panel: On the foundations and future of approximate inference", "venue": "In Symposium on Advances in Approximate Bayesian Inference, AABI 2017,", "year": 2017 }, { "authors": [ "Avrim Blum", "John Hopcroft", "Ravindran Kannan" ], "title": "Foundations of data science", "venue": "Vorabversion eines Lehrbuchs,", "year": 2016 }, { "authors": [ "Léon Bottou", "Olivier Bousquet" ], "title": "The tradeoffs of large scale learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "In Conference on fairness, accountability and transparency,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Hyunsun Choi", "Eric Jang", "Alexander A Alemi" ], "title": "Waic, but why? generative ensembles for robust anomaly detection", "venue": "arXiv preprint arXiv:1810.01392,", "year": 2018 }, { "authors": [ "Thomas M Cover" ], "title": "Elements of information theory", "venue": null, "year": 1999 }, { "authors": [ "Luc Devroye" ], "title": "Sample-based non-uniform random variate generation", "venue": "In Proceedings of the 18th conference on Winter simulation,", "year": 1986 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and modeling with energy based models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ethan Fetaya", "Joern-Henrik Jacobsen", "Will Grathwohl", "Richard Zemel" ], "title": "Understanding the limitations of conditional generative models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Angelos Filos", "Panagiotis Tigas", "Rowan McAllister", "Nicholas Rhinehart", "Sergey Levine", "Yarin Gal" ], "title": "Can autonomous vehicles identify, recover from, and adapt to distribution shifts", "venue": null, "year": 2006 }, { "authors": [ "Justin Fu", "John Co-Reyes", "Sergey Levine" ], "title": "Ex2: Exploration with exemplar models for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": null, "year": 1912 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "Dan Hendrycks", "Thomas 
Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "arXiv preprint arXiv:1812.04606,", "year": 2018 }, { "authors": [ "Victoria Hodge", "Jim Austin" ], "title": "A survey of outlier detection methodologies", "venue": "Artificial intelligence review,", "year": 2004 }, { "authors": [ "Aapo Hyvärinen", "Petteri Pajunen" ], "title": "Nonlinear independent component analysis: Existence and uniqueness results", "venue": "Neural networks,", "year": 1999 }, { "authors": [ "Martin Jørgensen", "Søren Hauberg" ], "title": "Reparametrization invariance in non-parametric causal discovery", "venue": "arXiv preprint arXiv:2008.05552,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR’2014,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Polina Kirichenko", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "Why normalizing flows fail to detect out-of-distribution data", "venue": "arXiv preprint arXiv:2006.08545,", "year": 2020 }, { "authors": [ "Herbert Knothe" ], "title": "Contributions to the theory of convex bodies", "venue": "The Michigan Mathematical Journal,", "year": 1957 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Ryen Krusinga", "Sohil Shah", "Matthias Zwicker", "Tom Goldstein", "David Jacobs" ], "title": "Understanding the (un) interpretability of natural image distributions using generative models", "venue": null, "year": 1901 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hao Liu", "Pieter Abbeel" ], "title": "Hybrid discriminative-generative training via contrastive learning", "venue": "arXiv preprint arXiv:2007.09070,", "year": 2020 }, { "authors": [ "Weitang Liu", "Xiaoyun Wang", "John Owens", "Yixuan Li" ], "title": "Energy-based out-of-distribution detection", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In international conference on machine learning,", "year": 2019 }, { "authors": [ "Warren R Morningstar", "Cusuh Ham", "Andrew G Gallagher", "Balaji Lakshminarayanan", "Alexander A Alemi", "Joshua V Dillon" ], "title": "Density of states estimation for out-of-distribution detection", "venue": null, "year": 2006 }, { "authors": [ "Mary M Moya", "Mark W Koch", "Larry D Hostetler" ], "title": "One-class classifier networks for target recognition applications", "venue": "STIN, 93:24043,", "year": 1993 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji 
Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "arXiv preprint arXiv:1810.09136,", "year": 2018 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Balaji Lakshminarayanan" ], "title": "Detecting out-ofdistribution inputs to deep generative models using typicality", "venue": null, "year": 1906 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Marco AF Pimentel", "David A Clifton", "Lei Clifton", "Lionel Tarassenko" ], "title": "A review of novelty detection", "venue": "Signal Processing,", "year": 2014 }, { "authors": [ "Jie Ren", "Peter J Liu", "Emily Fertig", "Jasper Snoek", "Ryan Poplin", "Mark Depristo", "Joshua Dillon", "Balaji Lakshminarayanan" ], "title": "Likelihood ratios for out-of-distribution detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In Proceedings of Machine Learning Research,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In Proceedings of Machine Learning Research,", "year": 2014 }, { "authors": [ "Murray Rosenblatt" ], "title": "Remarks on a multivariate transformation", "venue": "The annals of mathematical statistics,", "year": 1952 }, { "authors": [ "Lorna Roth" ], "title": "Looking at shirley, the ultimate norm: Colour balance, image technologies, and cognitive equity", "venue": "Canadian Journal of Communication,", "year": 2009 }, { "authors": [ "Marco Rudolph", "Bastian Wandt", "Bodo Rosenhahn" ], "title": "Same same but differnet: Semi-supervised defect detection with normalizing flows", "venue": "arXiv preprint arXiv:2008.12577,", "year": 2020 }, { "authors": [ "Robin Tibor Schirrmeister", "Yuxuan Zhou", "Tonio Ball", "Dan Zhang" ], "title": "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features", "venue": "arXiv preprint arXiv:2006.10848,", "year": 2020 }, { "authors": [ "Bernhard Schölkopf", "John C Platt", "John Shawe-Taylor", "Alex J Smola", "Robert C Williamson" ], "title": "Estimating the support of a high-dimensional distribution", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "Joan Serrà", "David Álvarez", "Vicenç Gómez", "Olga Slizovskaia", "José F. 
Núñez", "Jordi Luque" ], "title": "Input complexity and out-of-distribution detection with likelihood-based generative models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Esteban G Tabak", "Cristina V Turner" ], "title": "A family of nonparametric density estimation algorithms", "venue": "Communications on Pure and Applied Mathematics,", "year": 2013 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In ICLR’2016", "year": 2016 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "A deep and tractable density estimator", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Arash Vahdat", "Jan Kautz" ], "title": "Nvae: A deep hierarchical variational autoencoder", "venue": "arXiv preprint arXiv:2007.03898,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jim Winkens", "Rudy Bunel", "Abhijit Guha Roy", "Robert Stanforth", "Vivek Natarajan", "Joseph R Ledsam", "Patricia MacWilliams", "Pushmeet Kohli", "Alan Karthikesalingam", "Simon Kohl" ], "title": "Contrastive training for improved out-of-distribution detection", "venue": "arXiv preprint arXiv:2007.05566,", "year": 2020 }, { "authors": [ "Hongjie Zhang", "Ang Li", "Jie Guo", "Yanwen Guo" ], "title": "Hybrid models for open set recognition", "venue": "arXiv preprint arXiv:2003.12506,", "year": 2020 }, { "authors": [ "Rui Zhao", "Volker Tresp" ], "title": "Curiosity-driven experience prioritization via density estimation", "venue": "arXiv preprint arXiv:1902.08039,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Several machine learning methods aim at extrapolating a behavior observed on training data in order to produce predictions on new observations. But every so often, such extrapolation can result in wrong outputs, especially on points that we would consider infrequent with respect to the training distribution. Faced with unusual situations, whether adversarial (Szegedy et al., 2013; Carlini & Wagner, 2017) or just rare (Hendrycks & Dietterich, 2019), a desirable behavior from a machine learning system would be to flag these outliers so that the user can assess if the result is reliable and gather more information if need be (Zhao & Tresp, 2019; Fu et al., 2017). This can be critical for applications like medical decision making (Lee et al., 2018) or autonomous vehicle navigation (Filos et al., 2020), where such outliers are ubiquitous.\nWhat are the situations that are deemed unusual? Defining these anomalies (Hodge & Austin, 2004; Pimentel et al., 2014) manually can be laborious if not impossible, and so generally applicable, automated methods are preferable. In that regard, the framework of probabilistic reasoning has been an appealing formalism because a natural candidate for outliers are situations that are improbable or out-of-distribution. Since the true probability distribution density p∗X of the data is often not provided, one would instead use an estimator, p(θ)X , from this data to assess the regularity of a point.\nDensity estimation has been a particularly challenging task on high-dimensional problems. However, recent advances in deep probabilistic models, including variational auto-encoders (Kingma & Welling, 2014; Rezende et al., 2014; Vahdat & Kautz, 2020), deep autoregressive models (Uria et al., 2014; van den Oord et al., 2016b;a), and flow-based generative models (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), have shown promise for density estimation, which has the potential to enable accurate density-based methods (Bishop, 1994) for anomaly detection.\nYet, several works have observed that a significant gap persists between the potential of density-based anomaly detection and empirical results. For instance, Choi et al. (2018), Nalisnick et al. (2018), and Hendrycks et al. (2018) noticed that generative models trained on a benchmark dataset (e.g., CIFAR-10, Krizhevsky et al., 2009) and tested on another (e.g., SVHN, Netzer et al., 2011) are not able to identify the latter as out-of-distribution with current methods. Different hypotheses have been formulated to explain that discrepancy, ranging from the curse of dimensionality (Nalisnick et al., 2019) to a significant mismatch between p(θ)X and p ∗ X (Choi et al., 2018; Fetaya et al., 2020; Kirichenko et al., 2020; Zhang et al., 2020).\nIn this work, we propose a new perspective on this discrepancy and challenge the expectation that density estimation should enable anomaly detection. We show that the aforementioned discrepancy\npersists even with perfect density models, and therefore goes beyond issues of estimation, approximation, or optimization errors (Bottou & Bousquet, 2008). We highlight that this issue is pervasive as it occurs even in low-dimensional settings and for a variety of density-based methods for anomaly detection." 
}, { "heading": "2 DENSITY-BASED ANOMALY DETECTION", "text": "" }, { "heading": "2.1 UNSUPERVISED ANOMALY DETECTION: PROBLEM STATEMENT", "text": "Unsupervised anomaly detection is a classification problem (Moya et al., 1993; Schölkopf et al., 2001), where one aims at distinguishing between regular points (inliers) and irregular points (outliers). However, as opposed to the usual classification task, labels distinguishing inliers and outliers are not provided for training, if outliers are even provided at all. Given a input space X ⊆ RD, the task can be summarized as partitioning this space between the subset of outliers Xout and the subset of inliers Xin, i.e., Xout ∪ Xin = X and Xout ∩ Xin = ∅. When the training data is distributed according to the probability measure P ∗X (with density p ∗ X\n1), one would usually pick the set of regular points Xin such that this set contains the majority (but not all) of the mass (e.g., 95%) of this distribution, i.e., P ∗X(Xin) = 1− α ∈ ( 1 2 , 1 ) . But, for any given α, there exists in theory an infinity of corresponding partitions into Xin and Xout (see Figure 1). How are these partitions defined to match our intuition of inliers and outliers? We will focus in this paper on recently used methods based on probability density." }, { "heading": "2.2 DENSITY SCORING", "text": "When talking about outliers, infrequent observations, the association with probability can be quite intuitive. For instance, one would expect an anomaly to happen rarely and be unlikely. Since the language of statistics often associate the term likelihood with quantities like p(θ)X (x), one might consider an unlikely sample to have a low ”likelihood”, that is a low probability density p∗X(x). Conversely, regular samples would have a high density p∗X(x) following that reasoning. This is an intuition that is not only prevalent in several modern anomaly detection methods (Bishop, 1994; Blei et al., 2017; Hendrycks et al., 2018; Kirichenko et al., 2020; Rudolph et al., 2020; Liu et al., 2020) but also in techniques like low-temperature sampling (Graves, 2013) used for example in Kingma & Dhariwal (2018) and Parmar et al. (2018).\nThe associated approach, described in Bishop (1994), consists in defining the inliers as the points whose density exceed a certain threshold λ > 0 (for example, chosen such that inliers include a predefined amount of mass, e.g., 95%), making the modes the most regular points in this setting. Xout and Xin are then respectively the lower-level and upper-level sets {x ∈ X , p∗X(x) ≤ λ} and {x ∈ X , p∗X(x) > λ} (see Figure 2b).\n1We will also assume in the rest of the paper that for any x ∈ X , p∗X(x) > 0.\nx\np ∗ X (x )\n(a) An example of a distribution density p∗X .\nx\nλ\n(b) Density scoring method applied to the distribution p∗X .\nx\ne −H(p∗X )\n(c) Typicality test method (with one sample) applied to the distribution p∗X .\nFigure 2: Illustration of different density-based methods applied to a particular one-dimensional distribution p∗X . Outliers are in red and inliers are in blue. The thresholds are picked so that inliers include 95% of the mass. In Figure 2b, inliers are considered as the points with density above the threshold λ > 0 while in Figure 2c, they are the points whose log-density are in the -interval around the negentropy −H(p∗X)." 
}, { "heading": "2.3 TYPICALITY TEST", "text": "The Gaussian Annulus theorem (Blum et al., 2016) (generalized in Vershynin, 2019) attests that most of the mass of a high-dimensional standard Gaussian N (0, ID) is located close to the hypersphere of radius √ D. However, the mode of its density is at the center 0. A natural conclusion is that the curse of dimensionality creates a discrepancy between the density upper-level sets and what we expect as inliers (Choi et al., 2018; Nalisnick et al., 2019; Morningstar et al., 2020; Dieleman, 2020). This motivated Nalisnick et al. (2019) to propose another method for testing whether a point is an inlier or not, relying on a measure of its typicality. This method relies on the notion of typical set (Cover, 1999) defined by taking as inliers points whose average log-density is close to the average log-density of the distribution (see Figure 2c).\nDefinition 1 (Cover, 1999). Given independent and identically distributed elements ( x(n) ) n≤N from a distribution with density p∗X , the typical set A (N) (p∗X) ⊂ XN is made of all sequences that satisfy:∣∣∣∣∣H(p∗X) + 1N N∑ n=1 log p∗X ( x(n)\n)∣∣∣∣∣ ≤ , where H(X) = −E[log p∗X(X)] is the (differential) entropy and > 0 a constant.\nThis method matches the intuition behind the Gaussian Annulus theorem on the set of inliers of a high-dimensional standard Gaussian. Indeed, using a concentration inequality, we can show that limN→+∞ ( P ∗(Xi)1≤n≤N ( A (N) )) = 1, which means that with N large enough, A(N) (p∗X) will contain most of the mass of (p∗X) N , justifying the name typicality." }, { "heading": "3 THE ROLE OF REPARAMETRIZATION", "text": "Given the anomaly detection problem formulation Subsection 2.1, we are interested in reasoning about the properties a solution ought to satisfy, in the ideal case of infinite data and capacity. For density-based methods this means that p(θ)X = p ∗ X . This setting is appealing as it gives space for theoretical results without worrying about the underfitting or overfitting issues mentioned by Hendrycks et al. (2018); Fetaya et al. (2020); Morningstar et al. (2020); Kirichenko et al. (2020); Zhang et al. (2020).\nAlthough we work in practice on points (e.g., vectors), it is important to keep in mind that these points are actually representations of an underlying outcome. As a random variable, X is by definition the function from this outcome ω to the corresponding observation x = X(ω). However, at its core, an anomaly detection solution aims at classifying outcomes through these measurements. How is the\n0.0 0.2 0.4 0.6 0.8 1.0\n0.6\n0.8\n1.0\n1.2\n1.4\nx\np ∗ X (x )\n(a) An example of a distribution density p∗X .\n0.0 0.2 0.4 0.6 0.8 1.0 0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nx\nf (x )\n(b) Example of an invertible function f from [0, 1] to [0, 1].\n0.0 0.2 0.4 0.6 0.8 1.0\n0.6\n0.8\n1.0\n1.2\n1.4\nf(x)\np ∗ f (X\n)\n( f(x ))\n(c) Resulting density p∗f(X) from applying f to X ∼ p∗X as a function of the new axis f(x).\nFigure 3: Illustration of the change of variables formula and how much the application of a bijection can affect the density of the points considered in a one-dimensional case. In Figures 3a and 3c, points x with high density p∗X(x) are in blue and points with low density p ∗ X(x) are in red.\nchoice of X affecting the problem of anomaly detection? While several papers studied the effects of a change of representation through the lens of inductive bias (Kirichenko et al., 2020; Zhang et al., 2020), we investigate the more fundamental effects of reparametrizations f . 
To sidestep concerns about loss of information (Winkens et al., 2020), we study the particular case of an invertible map f.

The measurements x = X(ω) and f(x) = (f ◦ X)(ω) represent the same outcome ω (although differently), and, since x and f(x) are connected by an invertible transformation f, the same method applied respectively to X or to f(X) should classify them with the same label, either as an inlier or as an outlier. The target of these methods is essentially to assess the regularity of the outcome ω. From this, we can ideally make the following requirement for a solution to anomaly detection.

Principle. In an infinite data and capacity setting, the result of an anomaly detection method should be invariant to any continuous invertible reparametrization f.

Do density-based methods follow this principle? To answer that question, we look into how density behaves under a reversible change of representation. In particular, the change of variables formula (Kaplan, 1952) (used in Tabak & Turner, 2013; Dinh et al., 2014; Rezende & Mohamed, 2015) formalizes a simple intuition of this behavior: where points are brought closer together the density increases, whereas the density decreases where points are spread apart. The formula itself is written as

p^*_{f(X)}(f(x)) = p^*_X(x) · |∂f/∂x^T (x)|^{−1},

where |∂f/∂x^T (x)| is the Jacobian determinant of f at x, a quantity that reflects the local change in volume incurred by f. Figure 3 already illustrates how the function f (Figure 3b) can spread apart points close to the extremities to decrease the corresponding density around 0 and 1 and, as a result, turn the density on the left (Figure 3a) into the density on the right (Figure 3c). With this example, one can wonder to what degree an invertible change of representation can affect the density and the anomaly detection methods presented in Subsections 2.2 and 2.3 that use it." }, { "heading": "4 LEVERAGING THE CHANGE OF VARIABLES FORMULA", "text": "" }, { "heading": "4.1 UNIFORMIZATION", "text": "We start by showing that unambiguously defining outliers and inliers with any density-based approach becomes impossible when considering a particular type of invertible reparametrization of the problem, irrespective of dimensionality.

Under weak assumptions, one can map any distribution to a uniform distribution using an invertible transformation (Hyvärinen & Pajunen, 1999). This is in fact a common strategy for sampling from complicated one-dimensional distributions (Devroye, 1986).
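In one dimension this uniformization is just the probability integral transform. The sketch below applies it to an arbitrary bimodal mixture (an illustrative choice) and shows the pushforward density becoming flat, so density scoring can no longer separate former modes from former tails.

```python
import numpy as np
from scipy import stats

# 1D Knothe-Rosenblatt rearrangement = the CDF: CDF(X) is uniform on [0, 1].
rng = np.random.default_rng(0)

# A bimodal mixture standing in for p_X (arbitrary illustrative choice).
comp = rng.integers(0, 2, size=100_000)
x = np.where(comp == 0,
             rng.normal(-2, 0.5, 100_000),
             rng.normal(2, 0.5, 100_000))

cdf = lambda t: 0.5 * stats.norm.cdf(t, -2, 0.5) + 0.5 * stats.norm.cdf(t, 2, 0.5)
u = cdf(x)  # the reparametrized data

# The pushforward density is constant, so density scoring attributes the
# same regularity to every point.
hist, _ = np.histogram(u, bins=10, range=(0, 1), density=True)
print(hist.round(2))  # roughly all ones: a flat density
```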
Figure 4 shows an example of this, where a bimodal distribution (Figure 4a) is pushed through an invertible map (Figure 4b) to obtain a uniform distribution (Figure 4c).

Figure 4: Illustration of the one-dimensional version of a Knothe-Rosenblatt rearrangement, which is simply the application of the cumulative distribution function CDF_{p^*_X} to the variable x. (a) An example of a distribution density p^*_X; points x with high density p^*_X(x) are in blue and points with low density p^*_X(x) are in red. (b) The corresponding cumulative distribution function CDF_{p^*_X}. (c) The resulting density from applying CDF_{p^*_X} to X ∼ p^*_X is U([0, 1]), so all points are colored the same.

To construct this invertible uniformization function, we rely on the notion of a Knothe-Rosenblatt rearrangement (Rosenblatt, 1952; Knothe et al., 1957). A Knothe-Rosenblatt rearrangement (notably used in Hyvärinen & Pajunen, 1999) is defined, for a random variable X distributed according to a strictly positive density p^*_X with a convex support X, as a continuous invertible map f^{(KR)} from X onto [0, 1]^D such that f^{(KR)}(X) follows a uniform distribution in this hypercube. This rearrangement is constructed as follows: for all d ∈ {1, ..., D}, (f^{(KR)}(x))_d = CDF_{p^*_{X_d | X_{<d}}}(x_d | x_{<d}), where CDF_p is the cumulative distribution function corresponding to the density p.

In these new coordinates, neither the density scoring method nor the typicality test approach can discriminate between inliers and outliers in this uniform D-dimensional hypercube [0, 1]^D. Since the resulting density p^*_{f^{(KR)}(X)} = 1 is constant, the density scoring method attributes the same regularity to every point. Moreover, a typicality test on f^{(KR)}(X) will always succeed, as for all ε > 0, N ∈ N*, and (x^{(n)})_{n≤N},

|H(p^*_{f^{(KR)}(X)}) + (1/N) Σ_{n=1}^{N} log p^*_{f^{(KR)}(X)}(f^{(KR)}(x^{(n)}))| = |H(U([0, 1]^D)) + (1/N) Σ_{n=1}^{N} log(1)| = 0 ≤ ε.

However, these uniformly distributed points are merely a different representation of the same initial points. Therefore, if the identity of the outliers is ambiguous in this uniform distribution, then anomaly detection in general should be just as difficult." }, { "heading": "4.2 ARBITRARY SCORING", "text": "While a particular parametrization can prevent density-based outlier detection methods from separating outliers from inliers, we find that it is also possible to build a reparametrization of the problem that imposes an arbitrary density level on each point in the new representation. To illustrate this idea, consider some points from a distribution whose density is depicted in Figure 5a and a score function indicated in red in Figure 5b. In this example, high-density regions correspond to areas with low score value (and vice versa). We show that there exists a reparametrization (depicted in Figure 5c) such that the density in this new representation (Figure 5d) now matches the desired score, which can be designed to mislead density-based methods into a wrong classification of anomalies.

Figure 5: Illustration of how we can modify the space with an invertible function so that each point x follows a predefined score. (a) An example of a distribution density p^*_X. (b) The distribution p^*_X (in black) and the desired density score s (in red). (c) A continuous invertible reparametrization f^{(s)} such that p^*_{f^{(s)}(X)}(f^{(s)}(x)) = s(x). (d) The resulting density p^*_{f^{(s)}(X)} from applying f^{(s)} to X ∼ p^*_X, as a function of f^{(s)}(x). In panels (a) and (d), points x with high density p^*_X(x) are in blue and points with low density p^*_X(x) are in red.
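Since the construction that follows leans entirely on the change of variables formula from Section 3, it is worth checking that formula numerically first. The smooth bijection below is an arbitrary illustrative choice.

```python
import numpy as np

# Change of variables in 1D: p_{f(X)}(f(x)) = p_X(x) * |f'(x)|^{-1}.
p_x = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)  # X ~ N(0, 1)
f = lambda x: x + 0.5 * np.sin(x)        # a smooth bijection (f' > 0 everywhere)
f_prime = lambda x: 1 + 0.5 * np.cos(x)

x0 = 0.7
density_after = p_x(x0) / f_prime(x0)    # density of f(X) evaluated at f(x0)

# Monte Carlo check: estimate the density of f(X) around f(x0) by counting.
rng = np.random.default_rng(0)
samples = f(rng.normal(size=2_000_000))
h = 0.01
mc_estimate = np.mean(np.abs(samples - f(x0)) < h) / (2 * h)
print(density_after, mc_estimate)        # the two values should closely agree
```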
Proposition 1. For any variable X ∼ p^*_X with p^*_X continuous and strictly positive (with X convex), and any measurable continuous function s : X → R*_+ bounded below by a strictly positive number, there exists a continuous bijection f^{(s)} such that for any x ∈ X, p_{f^{(s)}(X)}(f^{(s)}(x)) = s(x) almost everywhere.

Proof. We write x to denote (x_1, . . . , x_{D−1}, x_D) and (x_{<D}, t) for (x_1, . . . , x_{D−1}, t). Let f^{(s)} : X → Z ⊂ R^D be a function such that

(f^{(s)}(x))_D = ∫_0^{x_D} p^*_X((x_{<D}, t)) / s((x_{<D}, t)) dt,   and   (f^{(s)}(x))_d = x_d for all d ∈ {1, ..., D−1}.

As s is bounded below, f^{(s)} is well defined and invertible. By the change of variables formula,

∀x ∈ X, p^*_{f^{(s)}(X)}(f^{(s)}(x)) = p^*_X(x) · |∂f^{(s)}/∂x^T (x)|^{−1} = p^*_X(x) · (p^*_X(x)/s(x))^{−1} = s(x).

If X_in and X_out are respectively the true sets of inliers and outliers, we can pick a ball A ⊂ X_in such that P^*_X(A) = α < 0.5, and choose s such that s(x) = 1 for any x ∈ (X \ A) and s(x) = 0.1 for any x ∈ A. With this choice of s (or a smooth approximation of it) and the function f^{(s)} defined above, both the density scoring method and the (one-sample) typical set method will consider the set of inliers to be (X \ A), while X_out ⊂ (X \ A), making their results completely wrong. While we can also reparametrize the problem so that these methods succeed, such a reparametrization requires knowledge of (p^*_X / s)(x). Without any constraints on the space considered, individual densities can be arbitrarily manipulated, which reveals how little these quantities say about the underlying outcome in general."
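The construction of f^{(s)} in the proof of Proposition 1 can be reproduced numerically in one dimension. The target score s below is an arbitrary illustrative choice, and the quadrature is a crude left Riemann sum.

```python
import numpy as np

# 1D version of Proposition 1: f_s(x) = integral_0^x p(t)/s(t) dt gives
# p_{f_s(X)}(f_s(x)) = s(x) by the change of variables formula.
p = lambda t: np.where((t >= 0) & (t <= 1), 1.0, 0.0)  # X ~ U([0, 1])
s = lambda t: 0.5 + t                                   # desired density profile

grid = np.linspace(0, 1, 10_001)
f_s = np.cumsum(p(grid) / s(grid)) * (grid[1] - grid[0])  # crude quadrature

# Empirical check: push uniform samples through f_s and histogram them.
rng = np.random.default_rng(0)
z = np.interp(rng.uniform(size=1_000_000), grid, f_s)
hist, edges = np.histogram(z, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Map each bin center back to x-space and compare the density against s(x).
x_back = np.interp(centers, f_s, grid)
print(np.round(hist / s(x_back), 2))  # ratios close to 1: density matches s
```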
 }, { "heading": "4.3 CANONICAL DISTRIBUTION", "text": "Since our analysis in Subsections 4.1 and 4.2 reveals that low density or low typicality is not a sufficient condition for an observation to be an anomaly, whatever its distribution or its dimension, we are now interested in investigating whether additional realistic assumptions can lead to some guarantees for anomaly detection. Motivated by several representation learning algorithms that attempt to learn a mapping to a predefined distribution (e.g., a standard Gaussian; see Chen & Gopinath, 2001; Kingma & Welling, 2014; Rezende et al., 2014; Dinh et al., 2014; Krusinga et al., 2019), we consider the more restricted setting of a fixed distribution of our choice, whose regular regions could for instance be known. Surprisingly, we find that it is possible to exchange the densities of an inlier and an outlier even within a canonical distribution.

Figure 6: Illustration of the norm-dependent rotation, a locally acting bijection that allows us to swap two points while preserving a uniform distribution (as a volume-preserving function). (a) Points x_in and x_out in a uniformly distributed subset; f^{(rot)} picks a two-dimensional plane and uses polar coordinates centered at the mean x_m of x_in and x_out. (b) Applying the bijection f^{(rot)} exchanges the points x_in and x_out; f^{(rot)} is a rotation whose angle depends on the distance from x_m within the selected two-dimensional plane.

Proposition 2. For any strictly positive density function p^*_X over a convex space X ⊆ R^D with D > 2, and for any x_in, x_out in the interior X° of X, there exists a continuous bijection f : X → X such that p^*_X = p^*_{f(X)}, p^*_{f(X)}(f(x^{(in)})) = p^*_X(x^{(out)}), and p^*_{f(X)}(f(x^{(out)})) = p^*_X(x^{(in)}).

We provide a sketch of the proof and put the details in Appendix A. We rely on the transformation depicted in Figure 6, which can swap two points while acting only in a very local area. If the distribution of points is uniform inside this local area, then the distribution will be unaffected by the transformation. In order to arrive at this situation, we use the uniformization method presented in Subsection 4.1, along with a linear function to fit this local area inside the support of the distribution (see Figure 7). Once the two points have been swapped, we can reverse the functions preceding the swap to recover the original distribution overall.

Since the resulting distribution p^*_{f(X)} is identical to the original p^*_X, their entropies are the same: H(p^*_{f(X)}) = H(p^*_X). Hence, when x_in and x_out are respectively an inlier and an outlier, whether in terms of density scoring or typicality, there exists a reparametrization of the problem conserving the overall distribution while still exchanging their status as inlier/outlier. We provide an example applied to a standard Gaussian distribution in Figure 8.

Figure 7: We illustrate how, given x_in and x_out in a uniformly distributed hypercube [0, 1]^D, one can modify the space such that the f^{(rot)} shown in Figure 6 can be applied without modifying the distribution. (a) When taking two points x_in and x_out inside the hypercube [0, 1]^D, there is sometimes no L2-ball centered at their mean x_m containing both x_in and x_out. (b) However, given x_in and x_out, one can apply an invertible linear transformation L such that there exists an L2-ball centered at the new mean L(x_m) containing both L(x_in) and L(x_out). If the distribution was uniform inside [0, 1]^D, then it is now also uniform inside L([0, 1]^D).

Figure 8: Application of a transformation using the bijection in Figure 6 to a standard Gaussian distribution N(0, I_2), leaving it overall invariant. (a) Points sampled from p^*_X = N(0, I_2). (b) Applying a bijection f that preserves the distribution, p^*_{f(X)} = N(0, I_2), to the points in Figure 8a. (c) The original distribution p^*_X with respect to the new coordinates f(x), i.e., p^*_X ∘ f^{−1}.

This result is important from a representation learning perspective and complements the general non-identifiability results in several representation learning approaches (Hyvärinen & Pajunen, 1999; Locatello et al., 2019). It means that learning a representation with a predefined, well-known distribution and knowing the true density p^*_X are not sufficient conditions to control the individual density of each point and to accurately distinguish outliers from inliers." }, { "heading": "5 DISCUSSION", "text": "Fundamentally, density-based methods for anomaly detection rely on the belief that density, as a quantity, conveys useful information for assessing whether an outcome is an outlier or not. For example, several density-based methods operate in practice on features learned independently from the anomaly detection task (Lee et al., 2018; Krusinga et al., 2019; Morningstar et al., 2020; Winkens et al., 2020) or on the original input features (Nalisnick et al., 2018; Hendrycks et al., 2018; Kirichenko et al., 2020; Rudolph et al., 2020; Nalisnick et al., 2019). In general, there is no evidence that the density in these representations carries any useful information for anomaly detection, bringing into question whether the performance of probabilistic models on this task (e.g., Du & Mordatch, 2019; Grathwohl et al., 2019; Kirichenko et al., 2020; Liu & Abbeel, 2020) reflects the goodness-of-fit of the density model. 
On the contrary, we have proven in this paper that density-based anomaly detection methods are inconsistent across a range of possible representations (alternatively, this can be seen as a change of the base distribution used to define a probability density as a Radon-Nikodym derivative), even under strong constraints on the distribution, which suggests that finding the right input representation for meaningful density-based anomaly detection requires privileged information, as discussed in Subsection 4.2. Moreover, several papers have pointed to existing problems in commonly used input representations; for example, the geometry of a bitmap representation does not follow our intuition of semantic distance (Theis et al., 2016), and images can come from photographic sensors tuned to specific populations (Roth, 2009; Buolamwini & Gebru, 2018). This shows how strong an otherwise understated assumption it is to suppose that the methods presented in Subsections 2.2 and 2.3 would work on such input representations. This is particularly problematic for applications as critical as autonomous vehicle navigation or medical decision-making.

While defining anomalies might be impossible without prior knowledge (Winkens et al., 2020), as out-of-distribution detection is an ill-posed problem (Choi et al., 2018; Nalisnick et al., 2019; Morningstar et al., 2020), several approaches make these assumptions more explicit. For instance, the density scoring method has also been interpreted in Bishop (1994) as a likelihood ratio method (Ren et al., 2019; Serrà et al., 2020; Schirrmeister et al., 2020), which, on the one hand, relies heavily on the definition of an arbitrary reference density as the denominator of this ratio but, on the other hand, is invariant to reparametrization. Inspired by the Bayesian approach of Choi et al. (2018), one can work on defining a prior distribution over possible reparametrizations over which to average (similarly to Jørgensen & Hauberg, 2020)." }, { "heading": "A PROOF OF PROPOSITION 2", "text": "Proposition 3 (restating Proposition 2). For any strictly positive density function p^*_X over a convex space X ⊆ R^D with D > 2, and for any x_in, x_out in the interior X° of X, there exists a continuous bijection f : X → X such that p^*_X = p^*_{f(X)}, p^*_{f(X)}(f(x^{(in)})) = p^*_X(x^{(out)}), and p^*_{f(X)}(f(x^{(out)})) = p^*_X(x^{(in)}).

Proof. Our proof relies on the following non-rigid rotation f^{(rot)}. We work in a hyperspherical coordinate system consisting of a radial coordinate r > 0 and (D − 1) angular coordinates (φ_i)_{i<D}:

x_d = r (∏_{i=1}^{d−1} sin(φ_i)) cos(φ_d) for all d < D,   and   x_D = r (∏_{i=1}^{D−2} sin(φ_i)) sin(φ_{D−1}),

where φ_i ∈ [0, π) for all i ∈ {1, 2, ..., D−2} and φ_{D−1} ∈ [0, 2π). Given r_max > r_0 > 0, we define the continuous mapping f^{(rot)} as

f^{(rot)}((r, φ_1, . . . , φ_{D−2}, φ_{D−1})) = (r, φ_1, . . . , φ_{D−2}, φ_{D−1} + π (r_max − r)_+ / (r_max − r_0) [mod 2π]),

where (·)_+ = max(·, 0). This mapping only affects points inside B_2(0, r_max), and it exchanges the two points corresponding to (r_0, φ_1, . . . , φ_{D−2}, φ_{D−1}) and (r_0, φ_1, . . . , φ_{D−2}, φ_{D−1} + π) in a continuous way (see Figure 6). Since the Jacobian determinant of the hyperspherical coordinate transformation is not a function of φ_{D−1}, f^{(rot)} is volume-preserving in Cartesian coordinates.

Let f^{(KR)} be a Knothe-Rosenblatt rearrangement of p^*_X, so that f^{(KR)}(X) is uniformly distributed in [0, 1]^D. Let z^{(in)} = f^{(KR)}(x^{(in)}) and z^{(out)} = f^{(KR)}(x^{(out)}). Since f^{(KR)} is continuous, z^{(in)} and z^{(out)} are in the interior (0, 1)^D. 
Therefore, there is an ε > 0 such that the L2-balls B_2(z^{(in)}, ε) and B_2(z^{(out)}, ε) are inside (0, 1)^D. Since (0, 1)^D is convex, so is their convex hull.

Let r_0 = (1/2) ‖z^{(in)} − z^{(out)}‖_2 and r_max = r_0 + ε. Given z ∈ (0, 1)^D, we write z_∥ and z_⊥ to denote its parallel and orthogonal components with respect to (z^{(in)} − z^{(out)}). We consider the linear bijection L defined by L(z) = z_∥ + ε^{−1} r_max z_⊥.

Let f^{(z)} = L ◦ f^{(KR)}. Since L is a linear function (i.e., with constant Jacobian), f^{(z)}(X) is uniformly distributed inside L([0, 1]^D). If z^{(m)} is the mean of z^{(in)} and z^{(out)}, then f^{(z)}(X) contains B_2(L(z^{(m)}), r_max) (see Figure 7). We can then apply the non-rigid rotation f^{(rot)} defined earlier, centered on L(z^{(m)}), to exchange L(z^{(in)}) and L(z^{(out)}) while maintaining this uniform distribution.

We can then apply the bijection (f^{(z)})^{−1} to obtain the invertible map f = (f^{(z)})^{−1} ◦ f^{(rot)} ◦ f^{(z)} such that p^*_{f(X)} = p^*_X, p^*_{f(X)}(f(x^{(in)})) = p^*_X(x^{(out)}), and p^*_{f(X)}(f(x^{(out)})) = p^*_X(x^{(in)})." } ]
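To make the norm-dependent rotation f^{(rot)} concrete, here is a small two-dimensional sketch (the proposition requires D > 2, but the 2D case, as visualized in Figure 8, already conveys the mechanism). The points and radii are arbitrary illustrative choices.

```python
import numpy as np

def f_rot(x, center, r0, rmax):
    """Norm-dependent rotation around `center` (2D version of f^(rot)).

    Points at radius r0 are rotated by pi (swapping antipodal pairs), the
    rotation angle decays to 0 at rmax, and everything outside the ball
    B(center, rmax) is left untouched. The map is volume-preserving.
    """
    d = x - center
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    theta = np.pi * np.clip(rmax - r, 0.0, None) / (rmax - r0)
    c, s = np.cos(theta), np.sin(theta)
    return center + np.concatenate(
        [c * d[..., :1] - s * d[..., 1:],
         s * d[..., :1] + c * d[..., 1:]], axis=-1
    )

x_in, x_out = np.array([0.1, 0.0]), np.array([0.5, 0.0])
center = 0.5 * (x_in + x_out)
r0 = 0.5 * np.linalg.norm(x_in - x_out)
out = f_rot(np.stack([x_in, x_out]), center, r0=r0, rmax=r0 + 0.1)
print(out)  # the two points have been exchanged
```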
2,020
null
SP:6b7e12310d7b29f8d66442933dd71b1b915805be
[ "The authors mainly concentrate on data sampling. To address the issue of optimizing high-dimensional sampling hyper-parameter in data sampling and release the requirement of prior knowledge from current methods, the authors introduce a searching-based method named AutoSampling. This method is comprised of exploration step and exploitation step which are conducted alternatively. The exploitation step train multi child models with current sampling strategy and save the best model for next iteration. while the exploration step estimates the sampling distribution according to the sampled data in exploitation step and rectifies it to sample all data possibly. The authors have conducted sufficient experiments to verify the superior of their method, especially for the effectiveness and generalizability. " ]
Data sampling plays a pivotal role in training deep learning models. However, an effective sampling schedule is difficult to learn due to its inherently high dimension as a hyper-parameter. In this paper, we propose the AutoSampling method to automatically learn sampling schedules for model training, which consists of a multi-exploitation step aiming for optimal local sampling schedules and an exploration step aiming for the ideal sampling distribution. More specifically, we achieve sampling schedule search with a shortened exploitation cycle to provide enough supervision. In addition, we periodically estimate the sampling distribution from the learned sampling schedules and perturb it to search in the distribution space. The combination of the two searches allows us to learn a robust sampling schedule. We apply our AutoSampling method to a variety of image classification tasks, illustrating the effectiveness of the proposed method.
[ { "affiliations": [], "name": "PLING SCHEDULES" } ]
[ { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "J. Mach. Learn. Res.,", "year": 2012 }, { "authors": [ "Jonathon Byrd", "Zachary C. Lipton" ], "title": "Weighted risk minimization & deep learning", "venue": "CoRR, abs/1812.03372,", "year": 2018 }, { "authors": [ "Ekin Dogus Cubuk", "Barret Zoph", "Dandelion Mané", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "CoRR, abs/1805.09501,", "year": 2018 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Chris Drummond", "Robert C Holte" ], "title": "C4. 5, class imbalance, and cost sensitivity: why undersampling beats over-sampling", "venue": "In Workshop on learning from imbalanced datasets II,", "year": 2003 }, { "authors": [ "Andrew Estabrooks", "Taeho Jo", "Nathalie Japkowicz" ], "title": "A multiple resampling method for learning from imbalanced data sets", "venue": "Computational intelligence,", "year": 2004 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Jiang Bian", "Tie-Yan Liu" ], "title": "Learning what data to learn", "venue": "CoRR, abs/1702.08635,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Daniel Ho", "Eric Liang", "Ion Stoica", "Pieter Abbeel", "Xi Chen" ], "title": "Population based augmentation: Efficient learning of augmentation policy schedules", "venue": "CoRR, abs/1905.05393,", "year": 2019 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M. 
Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Population based training of neural networks", "venue": "CoRR, abs/1711.09846,", "year": 2017 }, { "authors": [ "Andrew Jesson", "Nicolas Guizard", "Sina Hamidi Ghalehjegh", "Damien Goblot", "Florian Soudan", "Nicolas Chapados" ], "title": "Cased: Curriculum adaptive sampling for extreme data imbalance", "venue": "Medical Image Computing and Computer Assisted Intervention MICCAI", "year": 2017 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Regularizing very deep neural networks on corrupted labels", "venue": "CoRR, abs/1712.05055,", "year": 2017 }, { "authors": [ "Xiao Jin", "Baoyun Peng", "Yichao Wu", "Yu Liu", "Jiaheng Liu", "Ding Liang", "Junjie Yan", "Xiaolin Hu" ], "title": "Knowledge distillation via route constrained optimization", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Tyler B Johnson", "Carlos Guestrin" ], "title": "Training deep models faster with robust, approximate importance sampling", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "David Duvenaud Jonathan Lorraine", "Paul Vicol" ], "title": "Optimizing millions of hyperparameters by implicit differentiation", "venue": "proceedings of AISATS,", "year": 2019 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance", "venue": "sampling. CoRR,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Zhenmao Li", "Yichao Wu", "Ken Chen", "Yudong Wu", "Shunfeng Zhou", "Jiaheng Liu", "Junjie Yan" ], "title": "LAW: learning to auto weight", "venue": "CoRR, abs/1905.11058,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Online batch selection for faster training of neural networks", "venue": "CoRR, abs/1511.06343,", "year": 2015 }, { "authors": [ "Matthew MacKay", "Paul Vicol", "Jon Lorraine", "Roger Grosse David Duvenaud" ], "title": "Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions", "venue": "proceedings of ICLR,", "year": 2019 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "CoRR, abs/1803.09050,", "year": 2018 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Jasper Snoek", "Oren Rippel", "Kevin Swersky", "Ryan Kiros", "Nadathur Satish", "Narayanan Sundaram", "Mostofa Patwary", "Mr Prabhat", "Ryan Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Yiru Wang", "Weihao Gan", "Jie Yang", "Wei Wu", 
"Junjie Yan" ], "title": "Dynamic curriculum learning for imbalanced data classification", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Gary M Weiss", "Kate McCarthy", "Bibi Zabar" ], "title": "Cost-sensitive learning vs. sampling: Which is best for handling unbalanced classes with unequal error costs? Dmin", "venue": null, "year": 2007 }, { "authors": [ "Michael R. Zhang", "James Lucas", "Geoffrey E. Hinton", "Jimmy Ba" ], "title": "Lookahead optimizer: k steps forward, 1 step back", "venue": "CoRR, abs/1907.08610,", "year": 2019 }, { "authors": [ "Xinyu Zhang", "Qiang Wang", "Jian Zhang", "Zhao Zhong" ], "title": "Adversarial autoaugment", "venue": "In International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Data sampling policies can greatly influence the performance of model training in computer vision tasks, and therefore finding robust sampling policies can be important. Handcrafted rules, e.g. data resampling, reweighting, and importance sampling, promote better model performance by adjusting the training data frequency and order (Estabrooks et al., 2004; Weiss et al., 2007; Bengio et al., 2009; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Shrivastava et al., 2016; Jesson et al., 2017). Handcrafted rules heavily rely on the assumption over the dataset and cannot adapt well to datasets with their own characteristics. To handle this issue, learning-based methods (Li et al., 2019; Jiang et al., 2017; Fan et al., 2017) were designed to automatically reweight or select training data utilizing meta-learning techniques or a policy network.\nHowever existing learning-based sampling methods still rely on human priors as proxies to optimize sampling policies, which may fail in practice. Such priors often include assumptions on policy network design for data selection (Fan et al., 2017), or dataset conditions like noisiness (Li et al., 2019; Loshchilov & Hutter, 2015) or imbalance (Wang et al., 2019). These approaches take images features, losses, importance or their representations as inputs and use the policy network or other learning approaches with small amount of parameters for estimating the sampling probability. However, for example, images with similar visual features can be redundant in training, but their losses or features fed into the policy network are more likely to be close, causing the same probability to be sampled for redundant samples if we rely on aforementioned priors. Therefore, we propose to directly optimize the sampling schedule itself so that no prior knowledge is required for the dataset. Specifically, the sampling schedule refers to order by which data are selected for the entire training course. In this way, we only rely on data themselves to determine the optimal sampling schedule without any prior.\nDirectly optimizing a sampling schedule is challenging due to its inherent high dimension. For example, for the ImageNet classification dataset (Deng et al., 2009) with around one million samples, the dimension of parameters would be in the same order. While popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015), population-based training (Jaderberg et al., 2017) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters like augmentation schedules, their applications in directly finding good sampling schedules remain unexploited. For instance, the dimension of a data augmentation policy is generally only in dozens, and it needs thousands of training runs (Cubuk et al., 2018) to sample enough rewards to find an optimal augmentation\npolicy because high-quality rewards require many epochs of training to obtain. As such, optimizing a sampling schedule may require orders of magnitude more rewards than data augmentation to gather and hence training runs, which result in prohibitively slow convergence.\nTo overcome the aforementioned challenge, we propose a data sampling policy search framework, named AutoSampling, to sufficiently learn an optimal sampling schedule in a population-based training fashion (Jaderberg et al., 2017). 
Unlike previous methods, which focus on collecting long-term rewards and updating hyper-parameters or agents offline, our AutoSampling method collects rewards online, with a shortened collection cycle but without priors. Specifically, AutoSampling collects rewards within several training iterations, tens or hundreds of times shorter than the cycles in existing works (Ho et al., 2019; Cubuk et al., 2018). In this manner, we provide the search process with much more frequent feedback to ensure sufficient optimization of the sampling schedule. Each time a few training iterations pass, we collect the rewards from those iterations, accumulate them, and later update the sampling distribution using the rewards. Then we perturb the sampling distribution to search in the distribution space, and use it to generate new mini-batches for later iterations, which are recorded into the output sampling schedule. As illustrated in Sec. 4.1, shortened collection cycles with less interference can also better reflect the training value of each data point.

Our contributions are as follows:
• To our best knowledge, we are the first to propose directly learning a robust sampling schedule from the data themselves, without any human prior or condition on the dataset.
• We propose the AutoSampling method to handle the optimization difficulty caused by the high dimension of sampling schedules, and to efficiently learn a robust sampling schedule through a shortened reward collection cycle and online updates of the sampling schedule.

Comprehensive experiments on the CIFAR-10/100 and ImageNet datasets (Krizhevsky, 2009; Deng et al., 2009) with different networks show that AutoSampling can increase the top-1 accuracy by up to 2.85% on CIFAR-10, 2.19% on CIFAR-100, and 2.83% on ImageNet." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 RELATED WORK", "text": "Data sampling is of great significance to deep learning and has been extensively studied. Approaches with human-designed rules use pre-defined heuristics to modify the frequency and order in which training data are presented. In particular, one intuitive method is to resample or reweight data according to their frequencies, difficulties, or importance in training (Estabrooks et al., 2004; Weiss et al., 2007; Drummond & Holte, 2003; Bengio et al., 2009; Lin et al., 2017; Shrivastava et al., 2016; Loshchilov & Hutter, 2015; Wang et al., 2019; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Byrd & Lipton, 2018; Jesson et al., 2017). These methods have been widely used in imbalanced training or hard-example mining problems. However, they are often restricted to the tasks and datasets for which they were proposed, and their ability to generalize to a broader range of tasks with different data distributions may be limited. In other words, these methods often implicitly assume certain conditions on the dataset, such as cleanness or imbalance. In addition, learning-based methods have been proposed for finding suitable sampling schemes automatically. Methods using meta-learning or reinforcement learning have also been utilized to automatically select or reweight data during training (Li et al., 2019; Jiang et al., 2017; Ren et al., 2018; Fan et al., 2017), but they have only been tested on small-scale or noisy datasets. Whether or not they can generalize to tasks on other datasets remains untested. 
In this work, we directly study data sampling without any prior, and we also investigate its generalization ability across different datasets such as CIFAR-10, CIFAR-100 and ImageNet using several typical networks.
As for hyper-parameter tuning, popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters and proven to be effective. Nevertheless, they have not been adopted to find good sampling schedules due to their inherently high dimension. Some recent works tackle the challenge of optimizing high-dimensional hyper-parameters. MacKay et al. (2019) use structured best-response functions, and Jonathan Lorraine (2019) achieves this goal through a combination of the implicit function theorem and efficient inverse Hessian approximations. However, these methods have not been tested on the task of optimizing sampling schedules, which is the major focus of our work in this paper." }, { "heading": "2.2 POPULATION BASED TRAINING", "text": "The hyper-parameter tuning task can be framed as a bi-level optimization problem with the following objective function,

$$\min_{h \in H} L(\theta^*, h) \quad \text{subject to} \quad \theta^* = \operatorname*{argmax}_{\theta \in \Theta} \mathrm{eval}(\theta, h), \qquad (1)$$

where θ represents the model weights and h = (h_1, h_2, · · · , h_T) is the hyper-parameter schedule for T training intervals. Population based training (PBT) (Jaderberg et al., 2017) solves the bi-level optimization problem by training a population P of child models in parallel, initialized with different hyper-parameter schedules:

$$P = \{(\theta_i, h_i, t)\}_{i=1}^{N_p}, \qquad (2)$$

where θ_i and h_i respectively represent the weights of child model i and the corresponding hyper-parameter schedule for the training interval t on worker i, and N_p is the number of workers. PBT proceeds in intervals, each usually consisting of several epochs of training. During an interval, the population of models is trained in parallel to carry out the lower-level optimization of the weights θ_i.
Between intervals, an exploit-and-explore procedure is adopted to conduct the upper-level optimization of the hyper-parameter schedule. In particular, for interval t, to exploit we evaluate the child models on a held-out validation dataset:

$$h_t^*, \theta_t^* = \operatorname*{argmax}_{p_i=(\theta_i, h_i, t) \in P} \mathrm{eval}(\theta_i, h_i), \qquad \theta^* \to \theta_i, \; i = 1, \cdots, N_p. \qquad (3)$$

We record the best-performing hyper-parameter setting h*_t and broadcast the top-performing model θ*_t to all workers. To explore, we initialize new hyper-parameter schedules for interval t + 1 with different random seeds on all workers, which can be viewed as a search in the hyper-parameter space. The next exploit-and-explore cycle is then continued. In the end, the top-performing hyper-parameter schedule h* = (h*_1, h*_2, · · · , h*_T) is obtained.
PBT has been applied to tune low-dimensional hyper-parameters such as data augmentation schedules (Ho et al., 2019; Jaderberg et al., 2017). However, it cannot be directly used for finding sampling strategies due to their high dimension. Unlike PBT, our AutoSampling adopts a multi-exploitation-and-exploration structure, leading to much shorter reward collection cycles that contribute many more effective rewards for sufficient optimization within a practical computational budget." }, { "heading": "3 AUTOSAMPLING WITH SEARCHING", "text": "The overview of our AutoSampling is illustrated in Fig. 1. AutoSampling alternately runs a multi-exploitation step and an exploration step. 
In the exploration step, we 1) update the sampling distribution using the rewards collected from the multi-exploitation step (the sampling distribution is initially uniform); 2) perturb the updated sampling distribution for the child models so that different child models have different sampling distributions; and 3) use the corresponding perturbed sampling distribution for each child model to sample mini-batches of training data. In the multi-exploitation step, we 1) train multiple child models using the mini-batches sampled in the exploration step; and 2) collect short-term rewards from the child models. AutoSampling finishes with a recorded top-performing sampling schedule, which can be transferred to other models.

Algorithm 1: The Multi-Exploitation Step
Input: Training dataset D, population P = {(θ_i, h_i, t)}_{i=1}^{N_p}, number of workers N_p, number of exploitation intervals T, exploitation interval length N_s
Initialize H* ← ()
for t = 1 to T do
    for j = 1 to N_s do
        for (θ_i, h_{t,i}, t) ∈ P do
            θ_i ← ∇L(θ_i, h_{t,i})    ▷ update the weights of child model i
        end for
    end for
    h*_t, θ*_t = argmax_P eval(θ_i, h_i)    ▷ record the top sub-schedule and weights
    H* ← H* + h*_t
    for i = 1 to N_p do
        θ_i ← θ*_t    ▷ clone the optimal weights
    end for
end for
Return H*, P
" }, { "heading": "3.1 MULTI-EXPLOITATION BY SEARCHING IN THE DATA SPACE", "text": "In the multi-exploitation step, we aim to search locally in the data space by collecting short-term rewards and sub-schedules. Specifically, we wish to learn a sampling schedule over T exploitation intervals. In each interval, there is a population P of N_p child models. Denote by h_{t,i} the training data sub-schedule in the t-th interval for the i-th child model. When all of the T exploitation intervals for the i-th child model are considered, we have H_i = {h_{t,i} | t = 1, . . . , T} = {x_1, · · · , x_N}, where N is the number of training data points for the multi-exploitation step. Each interval consists of N_s training iterations, which is equivalent to N_s training mini-batches, where N_s is the length of the interval. AutoSampling is expected to produce a sequence of training samples, denoted by H*, such that a given model is optimally trained. The population {H_i} forms the local search space, from which we aim to search for an optimal sampling schedule H*.
Given the population P, we train the child models in parallel on N_p workers. Once an interval of data h_{t,i} containing N_s training batches has been used for training, we evaluate all child models and use the top evaluation performance as the reward. According to the reward, we record the top-performing weights and sub-schedule for the current interval t; in particular,

$$h_t^*, \theta_t^* = \operatorname*{argmax}_{p_i=(\theta_i, h_i, t) \in P} \mathrm{eval}(\theta_i, h_{t,i}). \qquad (4)$$

On the other hand, we update all child model weights in P by cloning the top-performing weights θ*_t into them, so that we can continue searching based on the more promising child. We continue the exploit steps throughout the whole training process, and output the recorded optimal sampling schedule H* = {h*_1, h*_2, · · · , h*_T}. By using exploitation intervals of mini-batches rather than the epochs or even entire training runs adopted by earlier methods, AutoSampling may yield a better and more robust sampling schedule. It should be pointed out that even though in AutoSampling rewards are collected within a much shorter interval, they remain effective. As we directly optimize the sampling schedule, we are concerned only with the data themselves. 
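To make the interval-level exploit concrete, below is a minimal Python sketch of the multi-exploitation step in Alg. 1. This is a simplified illustration only: it assumes PyTorch-style models, and helper names such as sgd_step and evaluate are placeholders rather than parts of any released implementation.

def multi_exploitation(models, schedules, evaluate, sgd_step):
    # models: list of N_p child models; schedules[t][i]: the sub-schedule
    # h_{t,i}, i.e. a list of N_s mini-batches for child i in interval t.
    best_schedule = []
    for t in range(len(schedules)):
        # lower-level optimization: one exploitation interval of N_s updates
        for i, model in enumerate(models):
            for batch in schedules[t][i]:
                sgd_step(model, batch)
        # short-cycle reward: evaluate after only N_s iterations, not epochs
        rewards = [evaluate(m) for m in models]
        best = max(range(len(models)), key=rewards.__getitem__)
        best_schedule.append(schedules[t][best])   # record h*_t
        best_weights = models[best].state_dict()   # record theta*_t
        for model in models:                       # exploit: clone theta*_t
            model.load_state_dict(best_weights)
    return best_schedule, models

In an actual distributed run, the loop over child models within each interval would execute in parallel on N_p workers, as described above.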
The short-term rewards reflect the training value of the data from the exploitation interval in which they are collected. But for global hyper-parameters such as augmentation schedules, short-term rewards may lead to inferior performance, as these hyper-parameters are concerned with the overall training outcome. We describe the multi-exploitation step in detail in Alg. 1.

Algorithm 2: Search-based AutoSampling
Input: Training dataset D, population size N_p
Initialize H* ← (), P(D) ← uniform(D), and initialize child models θ_1, · · · , θ_{N_p}
while not end of training do
    for i = 1 to N_p do
        Sample h_i from Mixture(log(P(D) + β), N_u × uniform(D))
    end for
    Initialize P = {(θ_i, h_i, t)}_{i=1}^{N_p}
    H*, P ← Alg. 1
    Estimate P(D) according to Equation (5)
    Update P(D) according to Equation (6)
    H* ← H* + H*    ▷ append the sub-schedule returned by Alg. 1 to the overall schedule
end while
Return H*, P(D)
" }, { "heading": "3.2 EXPLORATION BY SEARCHING IN SAMPLING DISTRIBUTION SPACE", "text": "In the exploration step, we search in the sampling distribution space by updating and perturbing the sampling distribution. We first estimate the underlying sampling distribution P(D) from the top sampling schedule H* produced in the multi-exploitation step, that is, for x ∈ D,

$$P(x) = \frac{\mathrm{count}(x \in H^*)}{\sum_{x \in D} \mathrm{count}(x \in H^*)}, \qquad (5)$$

where count(x ∈ H*) denotes the number of appearances of x in H*. We then perturb P(D) and generate the sampling schedules on each worker for the subsequent multi-exploitation step. We introduce perturbations into the generated schedules simply by sampling from the multinomial distribution P(D) using different random seeds. However, in our experiments, we observe that the distribution P(D) tends to be extremely skewed, and a majority of the data actually have zero frequencies. Such skewness causes highly imbalanced training mini-batches, and therefore destabilizes subsequent model training.
Distribution Smoothing. To tackle the above issue, we first smooth P(D) through the logarithmic function, and then apply a probability mixture with uniform distributions. In particular, for the dataset D,

$$P'(D) = \mathrm{Mixture}(\log(P(D) + \beta),\; N_u \times \mathrm{uniform}(D)), \qquad (6)$$

where β ≥ 1 is the smoothing factor and N_u × uniform(D) denotes N_u uniform multinomial distributions on the dataset D. The smoothing through the log function greatly reduces the skewness; however, log(P(D) + β) may still contain zero probabilities for some training data, resulting in unstable training. Therefore, we further smooth it through a probability mixture with N_u uniform distributions uniform(D) to ensure the presence of all data. This is equivalent to adding N_u epochs of training data to the training batches sampled from P(D), and shuffling the union. Once we have new diverse sampling schedules for the population, we proceed to the next multi-exploitation step.
We continue this alternation between multi-exploitation and exploration steps until the end of training. Note that to generate the sampling schedules for the first multi-exploitation run, we initialize P(D) to be a uniform multinomial distribution. In the end, we output a sequence of optimal sampling schedules H* = (H*_1, · · · , H*_n) for n alternations. The entire process is illustrated in detail in Alg. 2." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present comprehensive experiments on various datasets to illustrate the performance of AutoSampling, and also demonstrate the process of progressively learning better sampling distributions." 
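Before turning to the experiments, the exploration step of Sec. 3.2 can also be summarized in a short sketch. This is a minimal NumPy illustration under our reading of Eqs. (5)-(6); the exact mixture weighting and schedule generation in the actual implementation may differ.

import numpy as np

def explore(best_schedule, dataset_size, n_samples, beta=1.0, n_u=3, seed=0):
    # Eq. (5): empirical sampling distribution of the top schedule H*
    counts = np.zeros(dataset_size)
    for batch in best_schedule:
        for idx in batch:
            counts[idx] += 1
    p = counts / counts.sum()
    # Eq. (6): log-smoothing (beta >= 1 keeps log(p + beta) >= 0, but entries
    # with p = 0 stay at zero when beta = 1), then a mixture with n_u uniform
    # components so every training sample keeps a nonzero probability
    q = np.log(p + beta)
    q /= q.sum()
    uniform = np.full(dataset_size, 1.0 / dataset_size)
    mixture = (q + n_u * uniform) / (1.0 + n_u)
    # perturb: each worker draws its next schedule with its own seed
    rng = np.random.default_rng(seed)
    return rng.choice(dataset_size, size=n_samples, p=mixture)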
}, { "heading": "4.1 ABLATION STUDY", "text": "For this part, we gradually build up and test components of AutoSampling on CIFAR-100, and then examine their performances on CIFAR-10 and ImageNet datasets. The training implementation details and computational complexity can be found in Appendix A.1.\nTable 2: Experiments on CIFAR-10.\nNETWORK EXPLORATION TYPE TOP1(%)\nRESNET18 UNIFORM 93.01±0.009 RESNET18 RANDOM 95.86±0.003 RESNET18 MIXTURE 95.80±0.018 RESNET50 UNIFORM 93.60±0.004 RESNET50 RANDOM 96.10±0.002 RESNET50 MIXTURE 96.09±0.070\nTable 3: Experiments on ImageNet.\nNETWORK EXPLORATION TYPE TOP1(%)\nRESNET18 UNIFORM 70.38 RESNET18 RANDOM 72.07 RESNET18 MIXTURE 72.91 RESNET34 UNIFORM 74.09 RESNET34 RANDOM 76.11 RESNET34 MIXTURE 76.92\nAdding Workers To look into the influence of the worker numbers, we conduct experiments using worker numbers of 1, 20, 80 respectively with the same setting (Ns = 20 with random exploration). With the worker number of 1, the experiment is simply the normal model training using stochastic gradient descent. To show the competitiveness of our baselines, we also include state-of-the-art results on CIFAR-100 with ResNet-18 and ResNet-50 (Zhang et al., 2019; Jin et al., 2019). We notice significant performance gain using the worker number of 20 for ResNet-18, ResNet-50 and DenseNet-121 (He et al., 2015; Huang et al., 2017), as illustrated in Table 1. However, we note that increasing worker number from 20 to 80 only brings marginal performance gains across various model structures, as shown in Table 1. Therefore, we set the worker number to be 20 for the rest of the experiments.\nShortening Exploitation Intervals To study the effects of the shortened exploitation interval, we run experiments using different exploitation intervals of 20 and 80 batches(iterations) respectively. As shown in Table 1, models with the shorter exploitation interval of 20 batches(iterations) perform better than the one with the longer exploitation interval across all three network structures, conforming to our assumptions that the reward collected reflects value of each data used in the exploitation interval. This result adheres to our intuition that shorter exploitation interval can encourage the sampler to accumulate more rewards to learn better sampling schedules. For the rest of this section we keep the exploitation interval of 20.\nAdding Exploration Type We further add mixture as the exploration type to see the effects of learning the underlying sampling distribution, and completing the proposed method. As shown in Table 1, with ResNet-18 and ResNet-50 we push performance higher with the mixture exploration, and outperform the baseline method by about 1 and 1.8 percentage on CIFAR-100 respectively. However, we found that it is not true in the case of DenseNet-121 and this case may be attributed to the bigger capacity of DenseNet-121.\nGeneralization Over Datasets In addition, we experiment on other datasets. We report the results on CIFAR10 in Table 2 and the results of ResNet-18, ResNet-34 on ImageNet in Table 3. For CIFAR-10, we notice that the mixture and random exploration methods are comparable while both outperforming the uniform baseline, and we believe it is due to the simplicity of the dataset. In the more challenging\nImageNet, the mixture exploration outperforms the random exploration by a clear margin. We also compare our AutoSampling with some recent non-uniform sampling methods on CIFAR-100, which can be found in Appendix A.2." 
}, { "heading": "4.2 STATIC VS DYNAMIC SCHEDULES", "text": "We aim to see if the final sampling distribution estimated by our AutoSampling is sufficient to produce robust sampling schedules. In another word, we wish to know training with the AutoSampling is either a process of learning a robust sampling distribution, or a process of dynamically adjusting the sampling schedule for optimal training. To this end, we conduct training using different sampling schedules. First, we calculate the sampling distribution estimated throughout the learning steps of AutoSampling, and use it to generate the sampling schedule of a full training process, which we denote as STATIC. Moreover, we denote the sampling schedule learned using AutoSampling as DYNAMIC, since AutoSampling dynamically adjust the sampling schedule alongside the training process. Finally, we denote the baseline method as UNIFORM, which uses the sampling schedule generated from uniform distribution.\nWe report results on CIFAR-100 with ResNet-18 and ResNet-50 in Table 4. Model trained with STATIC sampling schedules exceeds the baseline UNIFORM significantly, indicating the superiority of the learned sampling distribution over the uniform distribution. It shows the ability of AutoSampling to learn good sampling distribution. Nonetheless, note that models trained with DYNAMIC sampling schedules outperform models trained with STATIC, by a margin bigger than the one between STATIC and UNIFORM. This result shows the fact that despite the AutoSampling’s capability of learning good sampling distribution, its flexibility during training matters even more. Moreover, this phenomenon also indicates that models at different stages of learning process may require different sampling distributions to achieve optimal training. One single sampling distribution, even gradually estimated using AutoSampling, seems incapable of covering the needs from different learning stages. We plot the histograms of data counts in training estimated from schedules of different learning stages with ResNet-18 on CIFAR-100 in Fig.2, showing the great differences between optimized sampling distributions from different epochs." }, { "heading": "4.3 ANALYZING SAMPLING SCHEDULES LEARNED BY AUTOSAMPLING", "text": "To further investigate the sampling schedule learned by AutoSampling, we review the images at the tail and head part of the sampling spectrum. In particular, given a sampling schedule learned we rank all images based on their appearances in training. Training images at the top and bottom of the order are extracted, corresponding to high and low probabilities of being sampled respectively. In Fig.3, we show 4 classes of exemplary images. The images of low probability tend to have clearer imagery\nfeatures enabling easy recognition, while the images of high probability tend to be more obscure, indicating that the sampling schedule may show hard samples mining effects. However, as shown in A.3 and Fig. 4, the loss values and probabilities of being sampled seem to be not highly correlated, which indicates more potential of AutoSampling beyond visually hard example mining. In addition, we notice the images of low probability also contain low quality images. For instance, in Fig.3 the leftmost image of CAMAL class contains only legs. This shows that AutoSampling may potentially rule out problematic training data for better training.\nFurthermore, we examine the transfer ability of sampling distributions learned by AutoSampling to other network structures. 
Specifically, we run training on ResNet-50 (He et al., 2015) using STATIC sampling schedules generated by three distributions learned by AutoSampling on 3 different models. As shown in Table 5, using sampling schedules learned by AutoSampling from other models, we demonstrate similar improvements over the UNIFORM baseline. This result, combined with the above observations on images of different sampling probabilities, indicates that there may exist a common optimal sampling schedule determined by the intrinsic properties of the data rather than by the model being optimized. Our AutoSampling is an effort to gradually converge to such an optimal schedule." }, { "heading": "4.4 DISCUSSIONS", "text": "The experimental results and observations from Sections 4.2 and 4.3 shed light on the possible existence of an optimal sampling schedule, which relies only on the intrinsic properties of the data and the learning stage of the model, regardless of the specific model structure or any prior knowledge. The learned sampling schedule may provide enough rewards in the search process, leading to sufficient convergence compared to other related works. Once obtained, the optimal sampling schedule may also generalize to other model structures for robust training. Although AutoSampling requires a relatively large amount of computing resources to find a robust sampler, we want to point out that the efficiency of our method can be improved through better training techniques. Moreover, the possibility of an optimal sampling schedule relying solely on the data themselves may point to more efficient sampling policy search algorithms, if one can quickly and effectively determine a data point's value based on its properties." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we introduce a new search-based AutoSampling scheme to overcome the issue of insufficient rewards for optimizing high-dimensional sampling hyper-parameters by utilizing a shorter period of reward collection. We use a shortened exploitation interval to search in the local data space and provide sufficient rewards. For the exploration step, we estimate the sampling distribution from the searched sampling schedule and perturb it to search in the distribution space. We test our method and it consistently outperforms the baseline methods across different benchmarks." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS

Experiments on CIFAR. We use the same training configuration for the CIFAR-100 and CIFAR-10 datasets, each of which consists of 50,000 training images. In particular, for model training we use a base learning rate of 0.1 and a step-decay learning rate schedule where the learning rate is divided by 10 every 60 epochs. We run the experiments for 240 epochs. In addition, we set the training batch size to 128 per worker, and each worker uses one Nvidia V100 GPU card.
We run the explore step every N_u + 1 epochs with N_u = 3, but note that we take the first explore step after the initial 20 epochs to better accumulate enough rewards. The experiments require 4800 epochs of training for 20 workers, and roughly 14 hours of training time.
Experiments on ImageNet. For ImageNet, which consists of 1.28 million training images, we adopted a base learning rate of 0.2 and a cosine-decay learning rate schedule. We run the experiments with 100 epochs of training. For each worker we utilize eight Nvidia V100 GPU cards and a total batch size of 512. 
Eight workers are used for all ImageNet experiments, and the rest of the setting adheres to that of the CIFAR experiments. In addition, we utilize FP16 computation to achieve faster training, which in practice causes almost no drop in accuracy. The experiments require 800 epochs of training for 8 workers, and roughly 4 days of training time.
A.2 COMPARISON WITH EXISTING SAMPLING METHODS

To better illustrate the effectiveness of our AutoSampling method, we conduct experiments in comparison with the recent non-uniform sampling methods DLIS (Johnson & Guestrin, 2018) and RAIS (Katharopoulos & Fleuret, 2018). DLIS (Johnson & Guestrin, 2018) achieves faster convergence by selecting data that reduce the gradient norm variance, while RAIS (Katharopoulos & Fleuret, 2018) does so by approximating the ideal sampling distribution using robust optimization. The comparison is recorded in Table 6.
First, we run AutoSampling using Wide ResNet-28-2 (Zagoruyko & Komodakis, 2016) on CIFAR-100 with the training setting aligned roughly with Katharopoulos & Fleuret (2018). AutoSampling achieves an improvement of roughly 3 percentage points (73.37±1.09% → 76.24±1.02%), while Katharopoulos & Fleuret show an improvement of 2 percentage points (66.0% → 68.0%). Second, we report the comparison between AutoSampling and DLIS on CIFAR-100. Johnson & Guestrin show no improvement (76.4% → 76.4%) in accuracy and a 0.027 (0.989 → 0.962) decrease in validation loss, while our method shows an improvement of 0.8 percentage points (78.6% → 79.4%) in accuracy and a 0.014 (0.886 → 0.872) decrease in validation loss. As such, our method demonstrates significant improvements over existing non-uniform sampling methods.
A.3 COMPARISON BETWEEN LEARNED SAMPLING SCHEDULES AND DATA LOSS VALUES

To further interpret the learned sampling schedules, we compare the sampling frequency of each training image with its loss values at different epochs during training on CIFAR-100 with ResNet-18. We draw the comparison for 500 randomly selected training images in Fig. 4 for epochs 80, 160, and 240. As shown in the figure, across different learning stages, the correlation between the loss values and the sampling frequencies of training data is not obvious. A high chance of being sampled by AutoSampling does not necessarily correspond to a high loss value, which demonstrates that AutoSampling is not merely over-sampling difficult samples as indicated by the loss. The resulting sampling schedule learned by AutoSampling can be significantly different from one guided by loss. Moreover, as training progresses, the loss values of the data are reduced, which is expected." } ]
2020
null
SP:fb5575d5c26f54fbccbc9de46440c174fe46abdf
[ "The paper introduces a framework to privatize sensitive attributes of data using adversarial representation learning. The proposed method consists of a “filter” that removes the sensitive attribute from the data representation, and a “generator” that replaces the removed sensitive attribute with a randomly sampled synthetic value. The authors argue that the second step done by the generator enhances privacy, and use experiments on real image data to verify their method and compare it with a baseline." ]
Data privacy is an increasingly important aspect of many real-world big data analytics tasks. Data sources that contain sensitive information may have immense potential which could be unlocked using privacy-enhancing transformations, but current methods often fail to produce convincing output. Furthermore, finding the right balance between privacy and utility is often a tricky trade-off. In this work, we propose a novel approach for data privatization, which involves two steps: in the first step, it removes the sensitive information, and in the second step, it replaces this information with an independent random sample. Our method builds on adversarial representation learning, which ensures strong privacy by training the model to fool an increasingly strong adversary. While previous methods only aim at obfuscating the sensitive information, we find that adding new random information in its place strengthens the provided privacy and provides better utility at any given level of privacy. The result is an approach that can provide stronger privatization of image data while preserving both the domain and the utility of the inputs, entirely independent of the downstream task.
[]
[ { "authors": [ "Rawan Alharbi", "Mariam Tolba", "Lucia C. Petito", "Josiah Hester", "Nabil Alshurafa" ], "title": "To mask or not to mask? balancing privacy with visual confirmation utility in activity-oriented wearable cameras", "venue": "Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.,", "year": 2019 }, { "authors": [ "Martin Bertran", "Natalia Martinez", "Afroditi Papadaki", "Qiang Qiu", "Miguel Rodrigues", "Galen Reeves", "Guillermo Sapiro" ], "title": "Adversarially learned representations for information obfuscation and inference", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alex Beutel", "Jilin Chen", "Zhe Zhao", "Ed H Chi" ], "title": "Data decisions and theoretical implications when adversarially learning fair representations", "venue": "arXiv preprint arXiv:1707.00075,", "year": 2017 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Harrison Edwards", "Amos J. Storkey" ], "title": "Censoring representations with an adversary", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "C. Huang", "P. Kairouz", "L. Sankar" ], "title": "Generative adversarial privacy: A data-driven approach to information-theoretic privacy", "venue": "In 2018 52nd Asilomar Conference on Signals, Systems, and Computers,", "year": 2018 }, { "authors": [ "Chong Huang", "Peter Kairouz", "Xiao Chen", "Lalitha Sankar", "Ram Rajagopal" ], "title": "Context-aware generative adversarial", "venue": "privacy. Entropy,", "year": 2017 }, { "authors": [ "Håkon Hukkelås", "Rudolf Mester", "Frank Lindseth" ], "title": "Deepprivacy: A generative adversarial network for face anonymization", "venue": "Advances in Visual Computing,", "year": 2019 }, { "authors": [ "P. Isola", "J. Zhu", "T. Zhou", "A.A. Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization, 2014. URL http: //arxiv.org/abs/1412.6980. 
cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations", "venue": "San Diego,", "year": 2015 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Seong Joon Oh", "Rodrigo Benenson", "Mario Fritz", "Bernt Schiele" ], "title": "Faceless person recognition: Privacy implications in social media", "venue": "Computer Vision – ECCV", "year": 2016 }, { "authors": [ "Seong Joon Oh", "Rodrigo Benenson", "Mario Fritz", "Bernt Schiele" ], "title": "Faceless person recognition: Privacy implications in social media", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Seong Joon Oh", "Mario Fritz", "Bernt Schiele" ], "title": "Adversarial image perturbation for privacy protection a game theory perspective", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Tribhuvanesh Orekondy", "Mario Fritz", "Bernt Schiele" ], "title": "Connecting pixels to privacy and utility: Automatic redaction of private information in images", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "S.A. Osia", "A. Taheri", "A.S. Shamsabadi", "K. Katevas", "H. Haddadi", "H.R. Rabiee" ], "title": "Deep privatefeature extraction", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "Nisarg Raval", "Ashwin Machanavajjhala", "Landon P Cox" ], "title": "Protecting visual secrets using adversarial nets", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2017 }, { "authors": [ "Zhongzheng Ren", "Yong Jae Lee", "Michael S. Ryoo" ], "title": "Learning to anonymize faces for privacy preserving action detection", "venue": "Computer Vision – ECCV", "year": 2018 }, { "authors": [ "O. Ronneberger", "P.Fischer", "T. Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 9351 of LNCS, pp. 234–241", "venue": null, "year": 2015 }, { "authors": [ "Proteek C. Roy", "Vishnu N. 
Boddeti" ], "title": "Mitigating information leakage in image representations: A maximum entropy approach", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Lingxiao Song", "Zhihe Lu", "Ran He", "Zhenan Sun", "Tieniu Tan" ], "title": "Geometry guided adversarial facial expression synthesis", "venue": "CoRR, abs/1712.03474,", "year": 2017 }, { "authors": [ "Hao Tang", "Dan Xu", "Gaowen Liu", "Wei Wang", "Nicu Sebe", "Yan Yan" ], "title": "Cycle in cycle generative adversarial networks for keypoint-guided image generation", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia, MM", "year": 2019 }, { "authors": [ "Luan Quoc Tran", "Xi Yin", "Xiaoming Liu" ], "title": "Representation learning by rotating your faces", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Haotao Wang", "Zhenyu Wu", "Zhangyang Wang", "Zhaowen Wang", "Hailin Jin" ], "title": "Privacy-preserving deep visual recognition: An adversarial learning framework and A new dataset", "venue": "URL http://arxiv.org/abs/1906.05675", "year": 1906 }, { "authors": [ "Zhenyu Wu", "Zhangyang Wang", "Zhaowen Wang", "Hailin Jin" ], "title": "Towards privacy-preserving visual recognition via adversarial training: A pilot study", "venue": "Computer Vision – ECCV", "year": 2018 }, { "authors": [ "Taihong Xiao", "Yi-Hsuan Tsai", "Kihyuk Sohn", "Manmohan Chandraker", "Ming-Hsuan Yang" ], "title": "Adversarial learning of privacy-preserving and task-oriented representations", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Yulun Du", "Eduard Hovy", "Graham Neubig" ], "title": "Controllable invariance through adversarial feature learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Brian Hu Zhang", "Blake Lemoine", "Margaret Mitchell" ], "title": "Mitigating unwanted biases with adversarial learning", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "UNet Ronneberger" ], "title": "An overview of the setup", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Increasing capacity and performance of modern machine learning models lead to increasing amounts of data required for training them (Goodfellow et al., 2016). However, collecting and using large datasets which may contain sensitive information about individuals is often impeded by increasingly strong privacy laws protecting individual rights, and the infeasibility of obtaining individual consent. Giving privacy guarantees on a dataset may let us share data, while protecting the rights of individuals, and thus unlocking the large benefits for individuals and for society that big datasets can provide.\nIn this work, we propose a technique for selective obfuscation of image datasets. The aim is to provide the original data as detailed as possible while making it hard for an adversary to detect specific sensitive attributes. The proposed solution is agnostic to the downstream task, with the objective to make the data as private as possible given a distortion constraint. This issue has previously been addressed using adversarial representation learning with some success: a filter model is trained to obfuscate sensitive information while an adversary model is trained to recover the information (Edwards & Storkey, 2016). In the current work, we demonstrate that it is easier to hide sensitive information if you replace it with something else: a sample which is independent from the input data.\nAside from the adversary module, our proposed solution includes two main components: one filter model that is trained to remove the sensitive attribute, and one generator model that inserts a synthetically generated new value for the sensitive attribute. The generated sensitive attribute is entirely independent from the sensitive attribute in the original input image. Following a body of work in privacy-related adversarial learning we evaluate the proposed model on faces from the CelebA dataset (Liu et al., 2015), and consider, for example, the smile or gender of a person to be the sensitive attribute. The smile is an attribute that carries interesting aspects in the transformations of a human face. The obvious change reside close to the mouth when a person smiles, but also other subtle changes occur: eyelids tighten, dimples show and the skin wrinkles. The current work includes a thorough analysis of the dataset, including correlations of such features. These correlations make the task interesting and challenging, reflecting the real difficulty that may occur when anonymizing data. What is the right trade-off between preserving the utility as defined by allowing information about other attributes to remain, and removing the sensitive information?\nIn our setup, the adversary can make an arbitrary number of queries to the model. For each query another sample will be produced from the distribution of the sensitive data, while keeping as much as possible of the non-sensitive information about the requested data point." }, { "heading": "2 RELATED WORK", "text": "Privacy-preserving machine learning has been studied from a number of different angles. Some work assumes access to a privacy-preserving mechanism, such as bounding boxes for faces, and studies how to hide people’s identity by blurring (Oh et al., 2016a), removing (Orekondy et al., 2018) or generating the face of other people (Hukkelås et al., 2019) in their place. 
Other work assumes access to the utility-preserving mechanism and proposes to obfuscate everything except what should be retained (Alharbi et al., 2019). This raises the question: how do we find the pixels in an image that need to be modified to preserve privacy with respect to some attribute?
Furthermore, Oh et al. (2016b) showed that blurring or removing the head of a person has a limited effect on privacy. The finding is crucial; we cannot rely on modifications of an image such as blurring or overpainting to achieve privacy. An adversarial set-up instead captures the signals that the adversary uses, and can attain stronger privacy. Adversarial learning is the process of training a model to fool an adversary (Goodfellow et al., 2014). Both models are trained simultaneously, and become increasingly good at their respective tasks during training. This approach has been successfully used to learn image-to-image transformations (Isola et al., 2017; Choi et al., 2018), and to synthesize properties such as facial expressions (Song et al., 2017; Tang et al., 2019). Privacy-preserving adversarial representation learning utilizes this paradigm to learn representations of data that hide sensitive information (Edwards & Storkey, 2016; Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017).
Bertran et al. (2019) minimize the mutual information between the utility variable and the input image data conditioned on the learned representation. Roy & Boddeti (2019) maximize the entropy of the discriminator output rather than minimizing the log-likelihood, which is beneficial for stronger privacy. Osia et al. (2020) approached the problem using an information bottleneck. Wu et al. (2018), Ren et al. (2018), and Wang et al. (2019) learn transformations of video that respect a privacy budget while maintaining performance on a downstream task. Tran et al. (2018) proposed an approach for pose-invariant face recognition. Similar to our work, their approach used adversarial learning to disentangle specific attributes in the data. Oh et al. (2017) trained a model to add a small amount of noise to the input to hide the identity of a person. Xiao et al. (2020) learn a representation from which it is hard to reconstruct the original input, but from which it is possible to predict a predefined task. The method provides control over which attributes are preserved, but no control over which attributes are censored. That is, it puts more emphasis on preserving utility than on privacy, which is not always desired.
All of these, with the exception of Edwards & Storkey (2016) (see below), depend on knowing the downstream task labels. Our work has no such dependency: the data produced by our method is designed to be usable regardless of the downstream task.
In Edwards & Storkey (2016), a limited experiment is included which does not depend on the downstream task. In this experiment, they remove sensitive text which was overlaid on images, a task which is much simpler than the real-world problem considered in the current work. The overlaid text is independent of the underlying image, and therefore the solution does not require a trade-off between utility and privacy, which is the case in most real settings. Furthermore, we also replace the sensitive information with synthetic information, which we show further strengthens the privacy.
Like in the current work, Huang et al.
(2017; 2018) use adversarial learning to minimize the mutual information between the private attribute and the censored image under a distortion constraint. Our solution extends and improves upon these ideas with a modular design consisting of a filter that is trained to obfuscate the data, and a generator that further enhances the privacy by adding new, independently sampled synthetic information for the sensitive attributes." }, { "heading": "3 PRIVACY-PRESERVING ADVERSARIAL REPRESENTATION LEARNING", "text": "In the current work, we propose a novel solution for utility-preserving, privacy-enhancing transformations of data: we use privacy-preserving representation learning to obfuscate information in the input data, and output results that retain the information and structure of the input." }, { "heading": "3.1 PROBLEM SETTING", "text": "Generative adversarial privacy (GAP) (Huang et al., 2018) was proposed as a method to provide privacy in images under a distortion constraint, and will be used as the baseline in the current work. In GAP, one assumes a joint distribution P(X, S) of public data points X and sensitive private attributes S, where S is typically correlated with X. The authors define a privacy mechanism X' = f(X, Z_1), where Z_1 is the source of noise or randomness in f. Let h_f(X') be an adversary's prediction of the sensitive attribute S from the privatized data X' according to a decision rule h_f. The performance of the adversary is thus measured by a loss function ℓ_f(h_f(f(x, z_1)), s), and the expected loss of the adversary with respect to X, S and Z_1 is

$$L_f(h_f, f) = \mathbb{E}_{x,s \sim p(x,s),\, z_1 \sim p(z_1)}\big[\ell_f(h_f(f(x, z_1)), s)\big], \qquad (1)$$

where p(z_1) is the source of noise.
The privacy mechanism f will be trained to be both privacy-preserving and utility-preserving. That is, it should be hard for an analyst to infer S from X', but X' should be minimally distorted with respect to X. Huang et al. (2018) formulate this as a constrained minimax problem

$$\min_f \max_{h_f} -L_f(f, h_f) \quad \text{s.t.} \quad \mathbb{E}_{x,s \sim p(x,s),\, z_1 \sim p(z_1)}\big[d(f(x, z_1), x)\big] \le \epsilon_1, \qquad (2)$$

where the constant ε_1 ≥ 0 defines the allowed distortion for the privatizer and d(·, ·) is some distortion measure.
In the current work, f will be referred to as the filter, since its purpose is to filter the sensitive information from x. A potential limitation of this formulation is that it only obfuscates the sensitive information in x, which may make it obvious to the adversary that x' is a censored version of x. Instead, we propose to replace the sensitive information with a new independent value s'." }, { "heading": "3.2 OUR CONTRIBUTION", "text": "We extend the filter with a generator module g, defined as X'' = g(f(X, Z_1), S', Z_2), where S' denotes the random variable of the new synthetic value for the sensitive attribute. Z_1 and Z_2 denote the sources of randomness in f and g, respectively. The discriminator h_g is trained to predict s when the input is a real image, and to predict the “fake” class when the input comes from g, as in the learning setup of Salimans et al. (2016). The objective of the generator g(x', s', z_2) is to generate a new synthetic (independent) sensitive attribute s' in x' that will fool the discriminator h_g. 
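Before formalizing the generator loss, the filter half of the system (Eqs. (1)-(2)) can be sketched as one alternating update. This is a PyTorch-style sketch under our assumptions: it uses the entropy variant adopted in Sec. 3.3 below and a quadratic penalty in place of the hard distortion constraint, and all names are illustrative rather than taken from the released code.

import torch
import torch.nn.functional as F

def filter_update(filter_net, adv_net, opt_f, opt_h, x, s, eps1, lam):
    z1 = torch.randn(x.size(0), 1024, device=x.device)   # noise source Z_1
    x_cens = filter_net(x, z1)                           # X' = f(X, Z_1)
    # adversary h_f: minimize cross-entropy on the true sensitive attribute
    opt_h.zero_grad()
    F.cross_entropy(adv_net(x_cens.detach()), s).backward()
    opt_h.step()
    # filter f: minimize the negative entropy of h_f's prediction (confuse
    # the adversary) while penalizing distortion beyond the budget eps1
    logits = adv_net(x_cens)
    neg_entropy = (F.softmax(logits, dim=1)
                   * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    distortion = F.mse_loss(x_cens, x)
    penalty = lam * torch.clamp(distortion - eps1, min=0.0) ** 2
    opt_f.zero_grad()
    (neg_entropy + penalty).backward()
    opt_f.step()

The generator g and its discriminator h_g, defined next, are trained analogously.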
We define the loss of the discriminator h_g as

$$L_g(h_g, g) = \mathbb{E}_{x,s \sim p(x,s),\, s' \sim p(s'),\, z_1,z_2 \sim p(z_1,z_2)}\big[\ell_g(h_g(g(f(x, z_1), s', z_2)), \text{fake})\big] + \mathbb{E}_{x,s \sim p(x,s)}\big[\ell_g(h_g(x), s)\big], \qquad (3)$$

where p(z_1, z_2) is the source of noise, p(s') is the assumed distribution of the synthetic sensitive attributes s', fake is the fake class, and ℓ_g is the loss function. We formulate this as a constrained minimax problem

$$\min_g \max_{h_g} -L_g(g, h_g) \quad \text{s.t.} \quad \mathbb{E}_{x,s \sim p(x,s),\, s' \sim p(s'),\, z_1,z_2 \sim p(z_1,z_2)}\big[d(g(f(x, z_1), s', z_2), x)\big] \le \epsilon_2, \qquad (4)$$

where the constant ε_2 ≥ 0 defines the allowed distortion for the generator.
In Figure 1 we show the difference between (a) minimizing the log-likelihood of the adversary, (b) maximizing the entropy of the adversary, and (c) maximizing the entropy of the adversary while also synthetically replacing the sensitive attribute with a random sample." }, { "heading": "3.3 IMPLEMENTATION", "text": "Let D = {(x_i, s_i)}_{i=1}^n be a dataset of samples (x_i, s_i) which are assumed to be independently and identically distributed according to some unknown joint distribution P(X, S). We assume that the sensitive attribute is binary and takes values in {0, 1}. However, the proposed approach can easily be extended to categorical sensitive attributes. We model the filter f(X, Z_1; θ_f) and the generator g(X', S', Z_2; θ_g) using convolutional neural networks of the UNet (Ronneberger et al., 2015) architecture, parameterized by θ_f and θ_g, respectively. (See Appendix A.1 for details.)
The discriminators h_f(X'; φ_f) and h_g(X''; φ_g) are modeled using ResNet-18 (He et al., 2016) and a modified version which we refer to as ResNet-10¹, respectively. The last fully connected layer has been replaced with a two-class and a three-class output layer for each model, respectively.
As suggested by Roy & Boddeti (2019), we choose the filter discriminator loss ℓ_f to be the negative entropy. Intuitively, this leads to f learning to make the adversary h_f confused about the sensitive attribute rather than making it certain of the wrong value. For completeness, we also include experiments where ℓ_f is the categorical cross-entropy, as are ℓ_{h_f} and ℓ_g. The distortion measure d is defined as the L2-norm, and p(s') is assumed to be the uniform distribution U{0, 1}. The hyperparameters consist of the learning rate lr, the quadratic penalty term coefficient λ, the distortion constraint ε, and the (β_1, β_2) parameters of Adam (Kingma & Ba, 2014). Details of the training setup can be found in Appendix A.3, and the full code is published on GitHub².
¹ ResNet-10 has the same setup as ResNet-18, but each of the “conv2_x”, “conv3_x”, “conv4_x”, and “conv5_x” layers consists of only one block instead of two.
² https://github.com/anonymous/anonymous-repo-name" }, { "heading": "4 EXPERIMENTS", "text": "In this section we describe our experiments, the datasets used, and the evaluation metrics.
Synthetic data. We introduce and apply the method on a synthetic dataset to illustrate the difference between optimizing the filter for log-likelihood and entropy, and why adding the generator is important. The synthetic data consists of D_train = {(x_i, s_i)}_{i=1}^n, x ∈ R², s ∈ {0, 1}, drawn from two different normal distributions N(µ_1, σ_1) and N(µ_2, σ_2), where µ_1 = [−1, 1], σ_1 = [[0.7, 0], [0, 0.7]], µ_2 = [1, −1], and σ_2 = [[0.7, 0], [0, 0.7]]. Similarly, we construct a test set D_test = {(x_i, s_i)}_{i=1}^m, and we let n = 400,000 and m = 2,560. The points are classified according to which normal distribution they belong to, and we consider this the secret attribute, i.e., s = 0 if x ∼ N(µ_1, σ_1) and s = 1 if x ∼ N(µ_2, σ_2). 
We sample s' ∼ U({0, 1}). We run the experiment with distortion constraints ε ∈ {0.1, 0.5, 1.0, 1.5, 2.0} and run each experiment five times.
CelebA. The CelebA dataset³ (Liu et al., 2015) consists of 202,599 face images of size 218x178 pixels with 40 binary attribute annotations per image, such as age (old or young), gender, whether the image is blurry, whether the person is bald, etc. The dataset has an official split into a training set of 162,770 images, a validation set of 19,867 images and a test set of 19,962 images. We resize all images to 64x64 pixels for the quantitative experiments, and to 128x128 pixels for the qualitative experiments, and normalize all pixel values to the range [0, 1]. We use the higher resolution for the qualitative results to make subtle visual changes more apparent.
Filtering and replacement of sensitive data. Let D_train = {(x_i, s_i)}_{i=1}^n be a set of training data, where x_i denotes facial image i and s_i ∈ {0, 1} denotes the sensitive attribute. Further, let D_test = {(x_i, s_i)}_{i=1}^m be the held-out test data. For the purpose of evaluation, we assume access to a number of utility attributes u^{(j)} ∈ {0, 1} for each x and j ∈ U. The following attributes provided with the dataset will be used in the CelebA experiments: gender, lipstick, age, high cheekbones, mouth slightly open, heavy makeup, smiling. In each experiment, one of these will be selected as the sensitive attribute. The rest will be considered the utility attributes, not used for training, but allowing for a utility score in the evaluation. This score shows how well non-sensitive attributes are preserved in the transformation. We compute an average over fixed classifiers, trained to predict each respective utility attribute on the original training data, when evaluated on the censored data.
Hyperparameters. We train the models using D_train with lr = 0.0005, λ = 10⁵, ε ∈ {0.03, 0.02, 0.01, 0.005, 0.001}, and (β_1, β_2) = (0.9, 0.999). Let the training data censored by the filter be D'_train = {(x'_i, s_i)}_{i=1}^n, where x'_i = f(x_i, z^{(1)}_i; θ_f), and by both the filter and the generator be D''_train = {(x''_i, s_i)}, where x''_i = g(x'_i, s'_i, z^{(2)}_i; θ_g), z^{(1)}_i, z^{(2)}_i ∼ N(0, 1), and s'_i ∼ U{0, 1}. We apply the same transformations to the test data and denote the results D'_test and D''_test, respectively.
Computational requirements. Each experiment was performed on a Tesla V100 SXM2 32 GB in a DGX-1 machine. The training was restricted to 100 epochs, which takes about 13 hours. We run each experiment with the negative entropy loss three times and present the average over these runs, and we run each experiment with the log-likelihood loss five times and present the average over these.
Evaluation. To evaluate the privacy loss for our method, we train an adversarial classifier to predict the ground-truth sensitive attribute given a training set of images censored by each privatization method, and then we evaluate each adversary on a test set of censored images. If an adversary can predict the ground-truth sensitive attribute of censored images that it has not seen before with high accuracy, then the privacy loss is high. Let adv(s|h(x)) denote an adversary trained on the censored training set to predict the ground-truth attribute s given the censored image h(x); then the privacy loss is defined as

$$\text{Privacy loss} = \frac{1}{|D_{test}|} \sum_{(x,s) \in D_{test}} \mathbb{1}[\mathrm{adv}(s|h(x)) = s]. \qquad (5)$$

To evaluate the utility score on the CelebA data, we use fixed classifiers that have been pre-trained on the original data to predict a set of utility attributes that are considered non-sensitive. 
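In code, the privacy loss of Eq. (5) is simply the adversary's test accuracy on censored images. A minimal sketch, assuming PyTorch, a trained adversary adv, and a privatization mechanism h:

import torch

def privacy_loss(adv, h, test_loader, device="cpu"):
    # Eq. (5): fraction of censored test images for which the adversary
    # recovers the true sensitive attribute s (higher means less privacy)
    adv.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, s in test_loader:
            x, s = x.to(device), s.to(device)
            pred = adv(h(x)).argmax(dim=1)
            correct += (pred == s).sum().item()
            total += s.numel()
    return correct / total

The utility score, defined next, is computed analogously, with the fixed attribute classifiers in place of the adversary.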
If these utility attributes are predictable with high accuracy after privatization of the image, then we consider the method to have a high utility score. Let fix(u|x'') denote a fixed classifier trained on the original training set to predict the ground-truth attribute u; then the utility score is defined as

$$\text{(CelebA) Utility score} = \frac{1}{|U|} \sum_{j \in U} \frac{1}{|D_{test}|} \sum_{(x, u^{(j)}) \in D_{test}} \mathbb{1}[\mathrm{fix}(u^{(j)}|h(x)) = u^{(j)}]. \qquad (6)$$

To evaluate the utility score on the synthetic data, we compute

$$\text{(Synthetic) Utility score} = 1 - \frac{1}{d_{max}\,|D_{test}|} \sum_{(x,s) \in D_{test}} |x - h(x)|, \qquad (7)$$

where we normalize with d_max = 2 to map distances into the range [0, 1], and convert the result to a similarity by subtracting the average over these distances from one, for consistency in the trade-off curves. Here we let h(x) = f(x, z_1; θ_f) to evaluate the baseline, and h(x) = g(f(x, z_1; θ_f), s', z_2; θ_g) to evaluate our method.
³ http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
We then plot the utility score against the privacy loss for different distortion budgets and obtain a trade-off curve. Similar evaluation methods are used in Roy & Boddeti (2019) and Bertran et al. (2019). In particular, Roy & Boddeti (2019) plot the accuracy of an adversary predicting the sensitive attribute against the accuracy of a classifier predicting the target attribute." }, { "heading": "5 RESULTS", "text": "In this section we present quantitative and qualitative results on the synthetic and facial image experiments.
Synthetic data. Figure 2 shows the trade-off between privacy loss and utility score for the experiment on the synthetic dataset. Our method consistently outperforms the baseline for all given distortion budgets. Figure 3 shows the original data (orange for s = 0, and blue for s = 1) together with the censored data for the three different transforms (red for the baseline with likelihood, green for the baseline with entropy, and purple for the proposed method). When minimizing the log-likelihood we see clear clusters on opposite sides of the decision boundary; when maximizing entropy we see that all points approach the decision boundary; and when adding the generator we see that the points are mapped, at random, into the cluster of orange points or blue points depending on the sampled synthetic sensitive attribute.
CelebA. Figure 4 shows the trade-off between privacy loss and utility score when evaluated on censored images. Our method consistently has a higher utility at any given level of privacy compared to the baseline. Remember: these are strong adversaries that are allowed to run labeled training data through the privacy mechanism in order to train. Additional results can be found in Appendix A.2.
To further show that explicitly optimizing for privacy in the privatization mechanism is necessary, we have conducted a similar experiment using StarGAN (Choi et al., 2018) to randomly change the sensitive attribute in the image (results in suppl.). We then evaluate the censored images, which look very convincing to a human, using adversarial classifiers. The adversaries can successfully detect the sensitive attributes with an accuracy of roughly 90%. For the weight λ_rec of the cycle consistency loss we explored the values {0, 5, 10, 50}. 
We obtain similar scores when we exclude the filter part of our method and use only the generator part to censor the images (see Appendix A.2).
In Table 1 we present the results of evaluating the accuracy of fix(s|·) on the dataset {(x''_i, s'_i)}_{i=1}^m, where x''_i = g(x'_i, s'_i, z^{(2)}_i; θ_g) is the image censored with our method and s'_i is the new synthetic attribute uniformly sampled from {0, 1}. That is, we measure how often the classifier predicts the new synthetic attribute s'_i when applied to x''_i. We can see that with ε = 0.001 the method is on average able to fool the classifier 82.4% of the time for the smiling attribute, and this increases with a larger distortion budget to a success rate of 91.2% on average with ε = 0.05. The results are similar when the images have been censored with respect to the attributes gender, lipstick, and age, but these require a larger distortion budget.

Table 1: The success rate of our method in fooling a fixed classifier for the sensitive attributes smiling, gender, lipstick, and age. This was measured as the accuracy of fix(s|·) with respect to the synthetic attribute for each censored attribute in the censored data {(x''_i, s'_i)}_{i=1}^m. Higher is better. Average and standard deviation over five runs.
Dist. ε | Smiling  | Gender   | Lipstick | Young
0.001   | 82.4±2.1 | 72.9±1.5 | 62.2±3.1 | 59.2±1.9
0.005   | 86.4±3.7 | 79.4±0.6 | 71.7±2.4 | 62.1±2.5
0.01    | 87.4±2.1 | 85.6±1.4 | 77.6±2.4 | 59.6±1.6
0.05    | 91.2±2.9 | 90.3±4.3 | 90.6±4.2 | 67.7±2.1

Figure 5: Qualitative results for the sensitive attribute smile. In the first four columns: ε = 0.001; in the last four columns: ε = 0.01. From top to bottom row: input image (x), censored image (x'), censored image with synthetic non-smile (x'', s' = 0), censored image with synthetic smile (x'', s' = 1). The model is able to generate a synthetic smiling attribute while maintaining much of the structure in the image. These images were generated from a model trained using 128x128 pixels.

Figure 5 shows, from the top row to the bottom row, the input image x, the censored image x', the censored image x'' with the synthetic attribute s' = 0 (non-smiling), and the censored image x'' with the synthetic attribute s' = 1 (smiling). A value of ε = 0.001 is used in the first four columns, and ε = 0.01 in the last four columns. The images censored by our method look sharper and it is less obvious that they are censored. We can see that the method convincingly generates non-smiling and smiling faces while most of the other parts of the image are intact. These images are sampled from models trained on images of 128x128 pixels resolution. See Figure 7 in Appendix A.2 for corresponding samples on the same input images, but using gender as the sensitive attribute." }, { "heading": "6 DISCUSSION", "text": "In Figure 1, we show how a privatization mechanism is affected by an adversarial setup and its training criterion. Minimizing the log-likelihood of the adversary can be interpreted as attempting to make the adversary certain of the wrong value of the sensitive attribute, and may lead to data with a certain sensitive value being transformed into similar outputs. Following the gradient of the entropy loss instead leads to the adversary becoming less certain, and thus to more privacy. Using this approach to remove sensitive information, and then adding random information in its place, yields even stronger privacy, as we take another step adding randomness to the output. This intuition
Our approach successfully transforms the datapoints into a distribution which is nearly indistinguishable from the input distribution, and where datapoints with different sensitive values are completely mixed.\nOur method consistently outperforms the baseline, ensuring a higher level of privacy while maintaining more of the useful information (see Figure 4). For all sensitive attributes that we consider, and at nearly all given distortion budgets (ε ∈ {0.02, 0.01, 0.005, 0.001}), we observe both a higher privacy and a higher utility for our method. For some of the attributes (gender, lipstick), at extreme distortions (ε = 0.03) our method has a utility score which is not better than the baseline, but still provides considerably better privacy. Furthermore, we can observe that using the entropy loss function for the filter benefits both the baseline and our method.\nThis shows that our method makes it more difficult for the adversary to see through the privatization step at each given distortion budget. To show the effect of the filter we have conducted experiments where we only use the generator to privatize the images (see Appendix A.2), which does not seem to provide any privacy.\nA generator without the filter may learn to either pass the original image without modification (when s′ = s) or to transform the image into a random other value (s′ ≠ s). If the transformed image is indistinguishable from a real image this is not a problem, but otherwise we can easily reverse the privatization by detecting whether the image is real or not. The filter step mitigates this by always removing the sensitive data in the image, forcing the generator to synthesize new data. Since the censored image is now guaranteed to be synthetic, we are no longer susceptible to the simple real/fake attack.\nSimilarly, we conduct experiments where we use StarGAN to censor the images, but observe no privacy when using this standard attribute manipulation method. A reason could be that the cyclic consistency loss term actually encourages image transformations that are easily invertible given the original attribute, which is not desirable in a privacy setting. This motivates our approach, which is explicit about the privacy objective.\nIn Table 1, we see that the fixed smile classifier is fooled by our privatization mechanism on 82.4% to 91.2% of the data points in the test set (depending on the distortion budget ε). These results indicate that it may be harder for an adversarially trained classifier to predict the sensitive attribute when it has been replaced with something else, compared to when it has simply been removed. We assume that this is due to the added variability in the data. Intuitively, it is easier to “blend in” with other images that have similar demonstrations of smiles.\nThe fact that many important attributes in facial images correlate leads to the reflection that disentangling the underlying factors of variation is not entirely possible. For example, in CelebA, lipstick is highly correlated with female gender. This means that if we want to hide all information about the presence of lipstick we also need to hide the gender (and other correlating attributes). This problem is further analysed in Table 3 in Appendix A.2.\nA strength of our method is that it is domain-preserving; this allows a utility provider to use the censored image in existing algorithms without modifications. Since the method also preserves utility, it may also be possible to stack the privatization mechanism to censor multiple attributes in an image."
}, { "heading": "7 CONCLUSIONS", "text": "In this work we have presented a strong privacy-preserving transformation mechanism for image data which is learned using an adversarial setup. While previous work on adversarial representation learning has focused on removing information from a representation, our approach extends on this and can also generate new information in its place that looks realistic and gives further privacy properties compared to the baseline. We evaluate our method using adversarially trained classifiers, and our results show that not only do we provide stronger privacy with regards to sensitive attributes, but we also preserve non-sensitive attributes of the image at a higher rate. The results show that the synthetically added attribute gives stronger privacy properties and helps fooling the adversary in the challenging setting where the adversary is allowed to be trained using the output of the privacy mechanism." }, { "heading": "A APPENDIX", "text": "A.1 ADDITIONAL DETAILS\nAn overview of the setup can be seen in Figure 6. We use the UNet Ronneberger et al. (2015), illustrated on the right in Figure 6, architecture for both the filter and the generator. The orange blocks are convolution blocks each of which, except for the last block, consist of a convolution layer, a batch normalization layer and a rectified linear activation unit, repeated twice in that order. The number of output channels of the convolution layers in each block has been noted in the figure. The last convolution block with a 3 channel output (the RGB image) consists of only a single convolutional layer followed by a sigmoid activation. The green blocks denote either a max pooling layer with a kernel size of two and a stride of two if marked with “/2” or a nearest neighbor upsampling by a factor of two if marked with “2x”. The blue block denotes an embedding layer, which takes as input the categorical value of the sensitive attribute and outputs a dense embedding of 128 dimensions. It is then followed by a linear projection and a reshaping to match the spatial dimensions of the output of the convolution block to which it is concatenated, but with a single channel. The same type of linear projection is applied on the 1024 dimensional noise vector input, but this projection and reshaping matches both the spatial and channel dimensions of the output of the convolutional block to which it is concatenated. Concatenation is in both cases done along the channel dimension.\nA.2 ADDITIONAL RESULTS\nThe results in Table 2 show that using the filter together with the generator (g ◦ f ) give a stronger privatization guarantee than using only the generator (without the filter) or only the baseline (the filter without the generator).\nIn Figure 7 we show additional qualitative results when the attribtue gender is considered sensitive.\nTable 3 shows the correlations between classifier predictions on a pair of attributes when one attribute has been synthetically replaced.\nFor example, in the CelebA dataset lipstick is highly correlated with female gender. This means that if we want to hide all information about whether or not the person is wearing lipstick we also need to hide its gender (and other correlating attributes). This problem can be seen in Table 3 where changing whether or not a person is wearing lipstick correlates with changes of gender.\nThe question is: if we censor an attribute in an image, how does that correlate with changes of other attributes in the image? 
In the lipstick column of Table 3 we have censored the attribute lipstick. We then make predictions on whether or not the person in the censored image is wearing lipstick, and compute the correlation between these predictions and the predictions for the attribute in each row. We can see that changes in lipstick correlate negatively with changes in gender and positively with makeup. This highlights the problem of disentangling these underlying factors of variation.\nTable 4 shows the Pearson correlations between the smiling attribute and 37 other attributes in the CelebA dataset.\nA.3 TRAINING SETUP\nAlgorithm 1 demonstrates the training setup for our method. Note that when ℓf is negative entropy it does not depend on the 1 − si argument, but is simply computed from the output of the discriminator hf. However, when ℓf is cross entropy we optimize the filter such that the discriminator hf is fooled into predicting the complement class 1 − s. This means that we need to assume s to be a binary attribute when using this loss. When using the negative entropy we do not need to make this assumption, which is a strength of the proposed method.\nAlgorithm 1\ninput: D, lr, λ, ε, β1, β2\nrepeat\n  Draw m samples uniformly at random from the dataset: (x1, s1), . . . , (xm, sm) ∼ D\n  Draw m samples from the noise distribution: (z1^(1), z1^(2)), . . . , (zm^(1), zm^(2)) ∼ p(z^(1), z^(2))\n  Draw m samples from the synthetic distribution: s′1, . . . , s′m ∼ p(s′)\n  Compute censored and synthetic data:\n    x′i = f_{θf}(xi, zi^(1)), i = 1, . . . , m\n    x′′i = g_{θg}(x′i, s′i, zi^(2)), i = 1, . . . , m\n  Compute filter and generator losses:\n    Θf(θf) = (1/m) Σ_{i=1}^m ℓf(hf(x′i; φf), 1 − si) + λ max((1/m) Σ_{i=1}^m d(x′i, xi) − ε, 0)^2\n    Θg(θg) = (1/m) Σ_{i=1}^m ℓg(hg(x′′i; φg), s′i) + λ max((1/m) Σ_{i=1}^m d(x′′i, xi) − ε, 0)^2\n  Update filter and generator parameters:\n    θf ← Adam(Θf(θf); lr, β1, β2)\n    θg ← Adam(Θg(θg); lr, β1, β2)\n  Compute discriminator losses:\n    Φf(φf) = (1/m) Σ_{i=1}^m ℓhf(hf(x′i; φf), si)\n    Φg(φg) = (1/m) Σ_{i=1}^m ℓg(hg(x′′i; φg), fake) + (1/m) Σ_{i=1}^m ℓg(hg(xi; φg), si)\n  Update discriminator parameters:\n    φf ← Adam(Φf(φf); lr, β1, β2)\n    φg ← Adam(Φg(φg); lr, β1, β2)\nuntil stopping criterion\nreturn θf, θg, φf, φg" } ]
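To make the optimization in Algorithm 1 concrete, below is a minimal PyTorch-style sketch of the per-batch filter and generator objectives; the function and tensor names are our illustrative assumptions, not the released code.

import torch
import torch.nn.functional as F

def distortion_penalty(x_out, x_in, eps, lam):
    # Soft distortion constraint from Algorithm 1: lambda * max(mean d(x_out, x_in) - eps, 0)^2.
    d = (x_out - x_in).abs().mean()
    return lam * torch.clamp(d - eps, min=0.0) ** 2

def filter_objective(hf_logits, s, x_cens, x, eps, lam, use_entropy=True):
    # Theta_f: fool the filter discriminator h_f while respecting the distortion budget.
    if use_entropy:
        # Negative entropy of h_f's prediction; does not assume a binary attribute.
        p = F.softmax(hf_logits, dim=-1)
        adv = (p * torch.log(p + 1e-8)).sum(dim=-1).mean()
    else:
        # Cross entropy towards the complement class 1 - s (binary s assumed).
        adv = F.cross_entropy(hf_logits, 1 - s)
    return adv + distortion_penalty(x_cens, x, eps, lam)

def generator_objective(hg_logits, s_synth, x_synth, x, eps, lam):
    # Theta_g: make the generator discriminator h_g predict the synthetic attribute s'.
    return F.cross_entropy(hg_logits, s_synth) + distortion_penalty(x_synth, x, eps, lam)

In the full loop, these objectives would be minimized with Adam updates for (θf, θg), alternated with the corresponding discriminator updates Φf and Φg, as in Algorithm 1.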
2020
null
SP:0cd88bb9be953a5db3d9ac0208848b26a6f4e1bd
[ "This work studies the offline RL problem and proposes MBOP for the same. The proposed method learns ensembles of dynamics models, behavioral policies, and value functions using the offline dataset. Subsequently, the approach uses online MPC with a learned terminal value function. The paper demonstrates experimental results on standard benchmark tasks (RLU and D4RL) as well as zero-shot adaptation results." ]
Offline learning is a key part of making reinforcement learning (RL) usable in real systems. Offline RL looks at scenarios where there is data from a system’s operation, but no direct access to the system when learning a policy. Recent work on training RL policies from offline data has shown results both with model-free policies learned directly from the data and with planning on top of learnt models of the data. Model-free policies tend to be more performant, but are more opaque, harder to command externally, and less easy to integrate into larger systems. We propose an offline learner that generates a model that can be used to control the system directly through planning. This allows us to have easily controllable policies directly from data, without ever interacting with the system. We show the performance of our algorithm, Model-Based Offline Planning (MBOP), on a series of robotics-inspired tasks, and demonstrate its ability to leverage planning to respect environmental constraints. We are able to find near-optimal policies for certain simulated systems from as little as 50 seconds of real-time system interaction, and create zero-shot goal-conditioned policies on a series of environments.
[ { "affiliations": [], "name": "OFFLINE PLANNING" }, { "affiliations": [], "name": "Arthur Argenson" }, { "affiliations": [], "name": "Gabriel Dulac-Arnold" } ]
[ { "authors": [ "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Yuval Tassa", "Remi Munos", "Nicolas Heess", "Martin Riedmiller" ], "title": "Maximum a posteriori policy optimisation", "venue": "arXiv preprint arXiv:1806.06920,", "year": 2018 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Gabriel Dulac-Arnold", "Nir Levine", "Daniel J Mankowitz", "Jerry Li", "Cosmin Paduraru", "Sven Gowal", "Todd Hester" ], "title": "An empirical investigation of the challenges of real-world reinforcement learning", "venue": null, "year": 2003 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Sudeep Dasari", "Annie Xie", "Alex Lee", "Sergey Levine" ], "title": "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control", "venue": "arXiv preprint arXiv:1812.00568,", "year": 2018 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4rl: Datasets for deep data-driven reinforcement learning", "venue": "arXiv preprint arXiv:2004.07219,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Caglar Gulcehre", "Ziyu Wang", "Alexander Novikov", "Tom Le Paine", "Sergio Gómez Colmenarejo", "Konrad Zolna", "Rishabh Agarwal", "Josh Merel", "Daniel Mankowitz", "Cosmin Paduraru" ], "title": "Rl unplugged: Benchmarks for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.13888,", "year": 2020 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gregory Kahn", "Pieter Abbeel", "Sergey Levine" ], "title": "Badgr: An autonomous self-supervised learningbased navigation system", "venue": "arXiv preprint arXiv:2002.05700,", "year": 2020 }, { "authors": [ "Rahul Kidambi", "Aravind Rajeswaran", "Praneeth Netrapalli", "Thorsten Joachims" ], "title": "Morel: Modelbased offline reinforcement learning", "venue": "arXiv preprint arXiv:2005.05951,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy 
search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Kendall Lowrey", "Aravind Rajeswaran", "Sham Kakade", "Emanuel Todorov", "Igor Mordatch" ], "title": "Plan online, learn offline: Efficient learning and exploration via model-based control", "venue": "arXiv preprint arXiv:1811.01848,", "year": 2018 }, { "authors": [ "Kevin Lu", "Igor Mordatch", "Pieter Abbeel" ], "title": "Adaptive online planning for continual lifelong learning", "venue": "arXiv preprint arXiv:1912.01188,", "year": 2019 }, { "authors": [ "Tatsuya Matsushima", "Hiroki Furuta", "Yutaka Matsuo", "Ofir Nachum", "Shixiang Gu" ], "title": "Deploymentefficient reinforcement learning via model-based offline optimization", "venue": "arXiv preprint arXiv:2006.03647,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Anusha Nagabandi", "Kurt Konolige", "Sergey Levine", "Vikash Kumar" ], "title": "Deep dynamics models for learning dexterous manipulation", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Tom Le Paine", "Cosmin Paduraru", "Andrea Michi", "Caglar Gulcehre", "Konrad Zolna", "Alexander Novikov", "Ziyu Wang", "Nando de Freitas" ], "title": "Hyperparameter selection for offline reinforcement learning", "venue": null, "year": 2007 }, { "authors": [ "Xue Bin Peng", "Aviral Kumar", "Grace Zhang", "Sergey Levine" ], "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series, pp", "year": 2000 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations", "venue": "arXiv preprint arXiv:1709.10087,", "year": 2017 }, { "authors": [ "Aravind Rajeswaran", "Igor Mordatch", "Vikash Kumar" ], "title": "A game theoretic framework for model based reinforcement learning", "venue": "arXiv preprint arXiv:2004.07804,", "year": 2020 }, { "authors": [ "J Rault", "A Richalet", "JL Testud", "J Papon" ], "title": "Model predictive heuristic control: application to industrial processes", "venue": null, "year": 1978 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], 
"title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Nolan Wagener", "Ching-An Cheng", "Jacob Sacks", "Byron Boots" ], "title": "An online learning approach to model predictive control", "venue": "arXiv preprint arXiv:1902.08967,", "year": 2019 }, { "authors": [ "Tingwu Wang", "Jimmy Ba" ], "title": "Exploring model-based planning with policy networks", "venue": "arXiv preprint arXiv:1906.08649,", "year": 2019 }, { "authors": [ "Ziyu Wang", "Alexander Novikov", "Konrad Zolna", "Josh S Merel", "Jost Tobias Springenberg", "Scott E Reed", "Bobak Shahriari", "Noah Siegel", "Caglar Gulcehre", "Nicolas Heess" ], "title": "Critic regularized regression", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos Theodorou" ], "title": "Model predictive path integral control using covariance variable importance sampling", "venue": "arXiv preprint arXiv:1509.01149,", "year": 2015 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos A Theodorou" ], "title": "Model predictive path integral control: From theory to parallel computation", "venue": "Journal of Guidance, Control, and Dynamics,", "year": 2017 }, { "authors": [ "Grady Williams", "Nolan Wagener", "Brian Goldfain", "Paul Drews", "James M Rehg", "Byron Boots", "Evangelos A Theodorou" ], "title": "Information theoretic mpc for model-based reinforcement learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "CoRR, abs/1911.11361,", "year": 2019 }, { "authors": [ "Yuxiang Yang", "Ken Caluwaerts", "Atil Iscen", "Tingnan Zhang", "Jie Tan", "Vikas Sindhwani" ], "title": "Data efficient reinforcement learning for legged robots", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "Mopo: Model-based offline policy optimization", "venue": "arXiv preprint arXiv:2005.13239,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learnt policies for robotic and industrial systems have the potential to both increase existing systems’ efficiency & robustness, as well as open possibilities for systems previously considered too complex to control. Learnt policies also afford the possibility for non-experts to program controllers for systems that would currently require weeks of specialized work. Currently, however, most approaches for learning controllers require significant interactive time with a system to be able to converge to a performant policy. This is often either undesirable or impossible due to operating cost, safety issues, or system availability. Fortunately, many systems are designed to log sufficient data about their state and control choices to create a dataset of operator commands and resulting system states. In these cases, controllers could be learned offline, using algorithms that produce a good controller using only these logs, without ever interacting with the system. In this paper we propose such an algorithm, which we call Model-Based Offline Planning (MBOP), which is able to learn policies directly from logs of a semi-performant controller without interacting with the corresponding environment. It is able to leverage these logs to generate a more performant policy than the one used to generate the logs, which can subsequently be goal-conditioned or constrained dynamically during system operation.\nLearning from logs of a system is often called ‘Offline Reinforcement Learning’ (Wu et al., 2019; Peng et al., 2019; Fujimoto et al., 2019; Wang et al., 2020) and both model-free (Wu et al., 2019; Wang et al., 2020; Fujimoto et al., 2019; Peng et al., 2019) and model-based (Yu et al., 2020; Kidambi et al., 2020) approaches have been proposed to learn policies in this setting. Current modelbased approaches, MOPO (Yu et al., 2020) and MoREL (Kidambi et al., 2020), learn a model to train a model-free policy in a Dyna-like (Sutton & Barto, 2018) manner. Our proposed approach, MBOP, is a model-based approach that leverages Model-Predictive Control (MPC) (Rault et al., 1978) and extends the MPPI (Williams et al., 2017b) trajectory optimizer to provide a goal or reward-conditioned policy using real-time planning. It combines three main elements: a learnt world model, a learnt behavior-cloning policy, and a learnt fixed-horizon value-function.\nMBOP’s key advantages are its data-efficiency and adaptability. MBOP is able to learn policies that perform better than the demonstration data from as little as 100 seconds of simulated system time (equivalent to 5000 steps). A single trained MBOP policy can be conditioned with a reward function,\na goal state, as well as state-based constraints, all of which can be non-stationary, allowing for easy control by a human operator or a hierarchical system. Given these two key advantages, we believe it to be a good candidate for real-world use in control systems with offline data.\nWe contextualize MBOP relative to existing work in Section 2, and describe MBOP in Section 3. In Section 4.2, we demonstrate MBOP’s performance on standard benchmark performance tasks for offline RL, and in Section 4.3 we demonstrate MBOP’s performance in zero-shot adaptation to varying task goals and constraints. In Section 4.4 we perform an ablation analysis and consider combined contributions of MBOP’s various elements." }, { "heading": "2 RELATED WORKS", "text": "Model-Based approaches with neural networks have shown promising results in recent years. 
Guided Policy Search (Levine & Koltun, 2013) leverages differential dynamic programming as a trajectory optimizer on locally linear models, and caches the resulting piece-wise policy in a neural network. Williams et al. (2017b) show that a simple model-based controller can quickly learn to drive a vehicle on a dirt track; the BADGR robot (Kahn et al., 2020) also uses Model-Predictive Path Integral (MPPI) (Williams et al., 2017a) with a learned model to learn to navigate to novel locations; Yang et al. (2020) show good results learning legged locomotion policies using MPC with learned models; and Ebert et al. (2018) demonstrate flexible robot arm controllers leveraging learned models with image-based goals. Silver et al. (2016) have shown the power of additional explicit planning in various board games including Go. More recently, planning-based algorithms such as PlaNet (Hafner et al., 2019b) have shown strong results in pixel-based continuous control tasks by leveraging latent variational RNNs. Simpler approaches such as PDDM (Nagabandi et al., 2020) or PETS (Chua et al., 2018) have shown good results using full state information both in simulation and on real robots. MBOP is strongly influenced by PDDM (Nagabandi et al., 2020) (itself an extension of PETS (Chua et al., 2018)), in particular with the use of ensembles and how they are leveraged during planning. PDDM was not designed for offline use, and MBOP adds a value function composition as well as a policy prior during planning to increase data efficiency and strengthen the set of priors for offline learning. It leverages the same trajectory re-weighting approach used in PDDM and takes advantage of its beta-mixture of the T trajectory buffer.\nBoth MoREL (Kidambi et al., 2020) and MOPO (Yu et al., 2020) leverage model-based approaches for offline learning. This is similar to approaches used in MBPO (Janner et al., 2019) and DREAMER (Hafner et al., 2019a), both of which leverage a learnt model to learn a model-free controller. MoREL and MOPO, however, due to their offline nature, train their model-free learner by using a surrogate MDP which penalizes for underlying model uncertainty. They do not use the models for direct planning on the problem, thus making the final policy task-specific. MOPO demonstrates the ability of their algorithm to alter the reward function and re-train a new policy according to this reward, but cannot leverage the final policy to dynamically adapt to an arbitrary goal or constrained objective. Matsushima et al. (2020) use a model-based policy for deployment-efficient RL. Their use case is a mix between offline and online RL, where they consider that there is a limited number of deployments. They share a similarity in the sense that they also use a behavior-cloning policy πβ to guide trajectories in a learned ensemble model, but perform policy improvement steps on a parametrized policy initialized from πβ using a behavior-regularized objective function. Similarly to MoREL and MOPO, their approach learns a parameterized policy for acting in the real system.\nThe use of a value function to extend the planning horizon of a planning-based policy has been previously proposed by Lowrey et al. (2018) with the POLO algorithm. POLO uses a ground-truth model (e.g. physics simulator) with MPPI/MPC for trajectory optimization. POLO additionally learns an approximate value-function through interaction with the environment which is then appended to optimized trajectories to improve return estimation.
Aside from the fact that MBOP uses an entirely approximate & learned model, it uses a similar idea but with a fixed-horizon value function to avoid bootstrapping, and separate heads of the ensemble during trajectory optimization. BC-trained policies as sampling priors have been looked at by POPLIN (Wang & Ba, 2019). POPLIN does not use value bootstrapping, and re-samples an ensemble head at each timestep during rollouts, which likely provides less consistent variations in simulated plans. They show strong results relative to a series of model-based and model-free approaches, but do not manage to perform on the Gym Walker environment. Additionally, they are overall much less data-efficient than MBOP and do not demonstrate performance in the offline setting.\nTask-time adaptation using model-based approaches has been considered previously in the model-based literature. Lu et al. (2019) look at mixing model-free and model-based approaches using notions of uncertainty to allow for adaptive controllers for non-stationary problems. Rajeswaran et al. (2020) use a game-theoretic framework to describe two adaptive learners that are both more sample-efficient than common MBRL algorithms and more robust to non-stationary goals and system dynamics. MBOP is able to perform zero-shot adaptation to non-stationary goals and constraints, but does not provide a mechanism for dealing with non-stationary dynamics. If brought into the online setting, however, approaches from these algorithms, such as concentrating on recent data, could be leveraged to allow for this.\nPrevious approaches all look at various elements present in MBOP, but none consider the full combination of a BC prior on the trajectory optimizer with a value-function initialization, especially in the case of full offline learning. Along with this high-level design, many implementation details, such as consistent ensemble sampling during rollouts or averaging returns over ensemble heads, appear in our experience to be important for a stable controller.
Generally, reinforcement learning and control aim to provide an optimal policy function π : S → A that provides an action a_t in state s_t leading to the highest long-term return: π∗(s_t) = argmax_{a∈A} Σ_{t=1}^∞ γ^t r(s_t, π∗(s_t)), where γ is a time-wise discounting factor that we fix to γ = 1, and therefore only consider finite-horizon returns." }, { "heading": "3.2 PLANNING WITH LEARNED MODELS", "text": "A large body of the contemporary work with MDPs involves Reinforcement Learning (RL) Sutton & Barto (2018) with model-free policies Mnih et al. (2015); Lillicrap et al. (2015); Schulman et al. (2017); Abdolmaleki et al. (2018). These approaches learn some form of policy network which provides its approximation of the best action a_t for a given state s_t, often as a single forward pass of the network. MBOP and other model-based approaches Deisenroth & Rasmussen (2011); Chua et al. (2018); Williams et al. (2017b); Hafner et al. (2019b); Lowrey et al. (2018); Nagabandi et al. (2020) are very different. They learn an approximate model of their environment and then use a planning algorithm to find a high-return trajectory through this model, which is then applied to the environment¹. This is interesting because the final policy can be more easily adapted to new tasks, be made to respect constraints, or offer some level of explainability. When bringing learned controllers to industrial systems, many of these aspects are highly desirable, even at the expense of raw performance.\n¹This approach is often called Model-Based Reinforcement Learning (MBRL) in the literature, but we chose to talk more generally about planning with learned models, as the presence of a reward is not fundamentally necessary and the notion of reinforcement is much less present." }, { "heading": "3.3 OFFLINE LEARNING", "text": "Most previous work in both reinforcement learning and planning with learned models has assumed repeated interactions with the target environment. This assumption allows the system to gather increased data along trajectories that are more likely, and more importantly to provide counterfactuals, able to contradict prediction errors in the learned policy, which is fundamental to policy improvement. In the case of offline learning, we consider that the environment is not available during the learning phase, but rather that we are given a dataset D of interactions with the environment, representing a series of timestep tuples (s_t, a_t, r_t, s_{t+1}). The goal is to provide a performant policy π given this particular dataset D. Existing RL algorithms do not easily port over to the offline learning setup, for a varied set of reasons well covered in Levine et al. (2020). In our work, we use the real environment to benchmark the performance of the produced policy. It is important to point out that oftentimes there is nevertheless a need to evaluate the performance of a given policy π without providing access to the final system, which is the concern of Off-Policy Evaluation (OPE) Precup (2000); Nachum et al. (2019) and Offline Hyperparameter Selection (OHS) Paine et al. (2020), which are outside the scope of our contribution." }, { "heading": "3.4 LEARNING DYNAMICS, ACTION PRIORS, AND VALUES", "text": "MBOP uses three parameterized function approximators for its planning algorithm. These are:\n1. fm : S × A → S × R, a single-timestep model of environment dynamics such that (r̂_t, ŝ_{t+1}) = fm(s_t, a_t). This is the model used by the planning algorithm to roll out potential action trajectories.
We will use fm(s_t, a_t)_s to denote the state prediction and fm(s_t, a_t)_r for the reward prediction.\n2. fb : S × A → A, a behavior-cloned policy network which produces a_t = fb(s_t, a_{t−1}), and is used by the planning algorithm as a prior to guide trajectory sampling.\n3. fR : S × A → R is a truncated value function, which provides the expected return over a fixed horizon R_H of taking a specific action a in a state s, as R̂_H = fR(s_t, a_{t−1}).\nEach one is a bootstrap ensemble (Lakshminarayanan et al., 2017) of K feed-forward neural networks; thus fm is composed of f_m^i for all i ∈ [1, K], where each f_m^i is trained with a different weight initialization but from the same dataset D. This approach has been shown to work well empirically to stabilize planning (Nagabandi et al., 2020; Chua et al., 2018). Each of the ensemble member networks is optimized to minimize the L2 loss on the predicted values in the dataset D in a standard supervised manner.\n3.5 MBOP-POLICY\nMBOP uses Model-Predictive Control (Rault et al., 1978) to provide actions for each new state as a_t = π(s_t). MPC works by running a fixed-horizon planning algorithm at every timestep, which returns a trajectory T of length H. MPC selects the first action from this trajectory and returns it as a_t. This fixed-horizon planning algorithm is effectively a black box to MPC, although in our case we have the MPC loop carry around a global trajectory buffer T. A high-level view of the policy loop using MPC is provided in Algorithm 1.\nThe MBOP-Policy loop is straightforward, and only needs to keep T around at each timestep. MPC is well known to be a surprisingly simple yet effective method for planning-based control. Finding a good trajectory is however more complicated, as we will see in the next section.\n3.6 MBOP-TRAJOPT\nMBOP-Trajopt extends ideas used by PDDM (Nagabandi et al., 2020) by adding a policy prior (provided by fb) and value prediction (provided by fR). The full algorithm is described in Algorithm 2.\nAlgorithm 1 High-Level MBOP-Policy\n1: Let D be a dataset of E episodes\n2: Train fm, fb, fR on D\n3: Initialize planned trajectory: T^0 = [0_0, · · · , 0_{H−1}].\n4: for t = 1..∞ do\n5: Observe s_t\n6: T^t = MBOP-Trajopt(T^{t−1}, s_t, fm, fb, fR) ▷ Update planned trajectory T^t starting at T_0.\n7: a_t = T^t_0 ▷ Use first action T_0 as π(s_t)\n8: end for\nAlgorithm 2 MBOP-Trajopt\n1: procedure MBOP-TRAJOPT(s, T, fm, fb, fR, H, N, σ^2, β, κ)\n2: Set R_N = 0_N ▷ This holds our N trajectory returns.\n3: Set A_{N,H} = 0_{N,H} ▷ This holds our N action trajectories of length H.\n4:\n5: for n = 1..N do ▷ Sample N trajectories over horizon H.\n6: l = n mod K ▷ Use consistent ensemble head throughout trajectory.\n7: s_1 = s, a_0 = T_0, R = 0\n8: for t = 1..H do\n9: ε ∼ N(0, σ^2)\n10: a_t = f_b^l(s_t, a_{t−1}) + ε ▷ Sample current action using BC policy.\n11: A_{n,t} = (1 − β) a_t + β T_{min(t,H−1)} ▷ Beta-mixture with previous trajectory T.\n12: s_{t+1} = f_m^l(s_t, A_{n,t})_s ▷ Sample next state from environment model.\n13: R = R + (1/K) Σ_{i=1}^K f_m^i(s_t, A_{n,t})_r ▷ Take average reward over all ensemble members.\n14: end for\n15: R_n = R + (1/K) Σ_{i=1}^K f_R^i(s_{H+1}, A_{n,H}) ▷ Append predicted return and store.\n16: end for\n17: T′_t = (Σ_{n=1}^N e^{κR_n} A_{n,t+1}) / (Σ_{n=1}^N e^{κR_n}), ∀t ∈ [0, H−1] ▷ Generate return-weighted average trajectory.\n18: return T′\n19: end procedure\nIn essence, MBOP-Trajopt is an iterative guided-shooting trajectory optimizer with refinement. MBOP-Trajopt rolls out N trajectories of length H using fm as an environment model. As fm is actually an ensemble with K members, we denote the l-th ensemble member as f_m^l.
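As a complement to the pseudocode, here is a compact numpy sketch of Algorithm 2; the interfaces assumed for the ensemble members (f_m[l](s, a) -> (next_state, reward), f_b[l](s, a_prev) -> action, f_R[l](s, a) -> fixed-horizon return) are our assumptions about how the learned functions would be exposed, not the paper's code.

import numpy as np

def mbop_trajopt(s, T, f_m, f_b, f_R, H, N, K, sigma, beta, kappa):
    # T is the previous plan, shape (H, action_dim); sigma is the noise std dev.
    action_dim = T.shape[1]
    A = np.zeros((N, H, action_dim))   # candidate action trajectories
    R = np.zeros(N)                    # accumulated returns per trajectory
    for n in range(N):
        l = n % K                      # consistent ensemble head for the whole rollout (line 6)
        s_t, a_prev = s, T[0]
        for t in range(H):
            noise = np.random.normal(0.0, sigma, size=action_dim)
            a_t = f_b[l](s_t, a_prev) + noise                           # BC policy prior + exploration
            A[n, t] = (1.0 - beta) * a_t + beta * T[min(t + 1, H - 1)]  # beta-mix with shifted old plan
            R[n] += np.mean([f_m[i](s_t, A[n, t])[1] for i in range(K)])  # reward averaged over heads
            s_t, a_prev = f_m[l](s_t, A[n, t])[0], A[n, t]              # step the l-th dynamics model
        R[n] += np.mean([f_R[i](s_t, A[n, H - 1]) for i in range(K)])   # terminal value estimate
    w = np.exp(kappa * (R - R.max()))  # exponentiated returns; max subtracted for numerical stability
    return np.einsum('n,nhd->hd', w, A) / w.sum()   # return-weighted average plan (line 17)

The MPC loop of Algorithm 1 would call this at every timestep, execute the returned plan's first action, and pass the plan back in as T at the next step.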
Line 6 of Alg. 2 allows the n-th trajectory to always use the same l-th ensemble member for both the BC policy and model steps. This use of consistent ensemble members for trajectory rollouts is inspired by PDDM. We point out that the model fm returns both the state transition and the reward, and so we denote the state component as fm(s_t, a_t)_s and the reward component as fm(s_t, a_t)_r.\nThe policy prior f_b^l is used to sample an action which is then averaged with the corresponding action from the previous trajectory generated by MBOP-Trajopt. By maintaining T from one MPC step to another we maintain a trajectory prior that allows us to amortize trajectory optimization over time. The β parameter can be interpreted as a form of learning rate defining how quickly the current optimal trajectory should change with new rollout information (Wagener et al., 2019). We did not find any empirical advantage to the time-correlated noise in Nagabandi et al. (2020), instead opting for i.i.d. noise.\nIn contrast to the BC policy and environment-model steps, the reward is computed as the average over all ensemble members when calculating the expected return R_n for trajectory n. At the end of a trajectory, we append the predicted return for the final state and action by averaging over all members of fR. The decision to take an average of returns rather than using the separate ensemble heads was also inspired by the approach used in Nagabandi et al. (2020).\nOnce we have a set of trajectories and their associated returns, we generate an average action for timestep t by re-weighting the actions of each trajectory according to their exponentiated return, as in Nagabandi et al. (2020) and Williams et al. (2017b) (Alg. 2, Line 17).\nSection 4 demonstrates how the combination of these elements makes our planning algorithm capable of generating improved trajectories over the behavior trajectories from D, especially in low-data regimes. In higher-data regimes, variants of MBOP without the BC prior can also be used for goal & constraint-based control. Further work will consider the addition of goal-conditioned fb and fR to allow for more data-efficient goal and constraint-based control." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We look at two operating scenarios to demonstrate MBOP's performance and flexibility. First we consider the standard offline setting, where the evaluation environment and task are identical to the behavior policy's. We show that MBOP is able to perform well with very little data. We then look at MBOP's ability to provide controllers that can naturally transfer to novel tasks with the same system dynamics. We use both goal-conditioned tasks (that ignore the original reward function) and constrained tasks (that require optimising for the original reward under some state constraint) to demonstrate MBOP's transfer abilities. Accompanying videos are available here: https://youtu.be/nxGGHdZOFts." }, { "heading": "4.1 METHODOLOGY", "text": "We use standard datasets from the RL Unplugged (RLU) (Gulcehre et al., 2020) and D4RL (Fu et al., 2020) papers. For both RLU and D4RL, policies are trained from offline datasets and then evaluated on the corresponding environment. For datasets with high variance in performance, we discard episodes that are below a certain threshold for the training of fb and fR. This is only done on the Quadruped and Walker tasks from RLU, and only provides a slight performance boost – performance on unfiltered data for these two tasks can be found in Appendix Sec. 5.6.
The unfiltered data is always used for training fm. We perform a grid search to find optimal parameters for each dataset, but for most tasks these parameters are mostly uniform. The full set of parameters for each experiment can be found in the Appendix, Sec. 5.2. For experiments on RLU, we generated additional smaller datasets to increase the difficulty of the problem. On all plots we also report the performance of the behavior policy used to generate the data (directly from the episode returns in the datasets) and label it as the DATA policy. All non-standard datasets will be made available publicly.\nFor RLU, the datasets are generated using a 70%-performant MPO (Abdolmaleki et al., 2018) policy on the original task, and smaller versions of the datasets are a fixed set of randomly sampled contiguous episodes (Dulac-Arnold et al., 2020; Gulcehre et al., 2020). D4RL has 4 behavior policies, ranging from random behavior to expert demonstrations, which are fully described in Fu et al. (2020). On all datasets, training is performed on 90% of the data and 10% is used for validation." }, { "heading": "4.2 PERFORMANCE ON RL-UNPLUGGED & D4RL", "text": "For experiments on RLU we consider the unperturbed RWRL cartpole-swingup, walker and quadruped tasks (Tassa et al., 2018; Dulac-Arnold et al., 2020). For D4RL we consider the halfcheetah, hopper, walker2d and Adroit tasks (Brockman et al., 2016; Rajeswaran et al., 2017). Results for the RLU tasks as well as Adroit are presented in Figure 1. On the remaining D4RL tasks, results are compared to those presented by MOPO Yu et al. (2020) in Table 1 for four different data regimes (medium, medium-expert, medium-replay, random). For all experiments we report MBOP performance as well as the performance of a behavior cloning (BC) policy. The BC policy is simply the policy prior fb, with the control action taken as the average ensemble output. We use this baseline to demonstrate the advantages brought about by planning beyond simple cloning.\nFor the RLU datasets (Fig. 1), we observe that MBOP is able to find a near-optimal policy on most dataset sizes in Cartpole and Quadruped with as little as 5000 steps, which corresponds to 5 episodes, or approximately 50 seconds on Cartpole and 100 seconds on Quadruped. On the Walker datasets MBOP requires 23 episodes (approx. 10 minutes) before it finds a reasonable policy, and with sufficient data converges to a score of 900, which is near optimal. On most tasks, MBOP is able to generate a policy significantly better than the behavior data as well as the BC prior.\nFor the Adroit task, we show that MBOP is able to outperform the behavior policy after training on a dataset of 50k data points generated by an expert policy (Fig. 1d). For other D4RL datasets, we compare to the performance of MOPO (Yu et al., 2020). We show that on the medium and medium-expert data regimes MBOP outperforms MOPO, sometimes significantly. However, on higher-variance datasets such as random and mixed, MBOP is not as performant. This is likely due to the reliance on policy-conditioned priors, which we hope to render more flexible in future work (for instance using multi-modal stochastic models). There are nevertheless many tasks where a human operator is running a system in a relatively consistent yet sub-optimal manner, and one may want to either replicate or improve upon the operator's control policy. In such scenarios, MBOP would likely be able to not only replicate but improve upon the operator's control strategy."
}, { "heading": "4.3 ZERO-SHOT TASK ADAPTATION", "text": "One of the main advantages of using planning-based methods in the offline scenario is that they are easy to adapt to new objective functions. In the case of MBOP these would be novel objectives different from those optimized by the behavior policy that generates the offline data. We can easily take these new objectives into account by computing a secondary objective return as follows: R′n =∑ t fobj(st) where fobj is a user-provided function that computes a scalar objective reward given a state. We can then adapt the trajectory update rule to take into account the secondary objective:\nTt =\n∑N n=1 e\nκRn+κobjR ′ nAn,t∑N\nn=1 e κRn+κobjR′n\n,∀t ∈ [1, H].\nTo demonstrate this, we run MBOP on two types of modified objectives: goal-conditioned control, and constrained control. In goal-conditioned control, we ignore the original reward function (κ = 0) and define a new goal (such as a velocity vector) and optimize trajectories relative to that goal. In constrained operation, we add a state-based constraint which we penalize during planning, while maintaining the original objective and find a reasonable combination of κ and κobj.\nWe define three tasks: position-constrained Cartpole, where we penalize the cart’s position to encourage it to stay either on the right or left side of the track; heading-conditioned Quadruped, where we provide a target heading to the policy (Forward, Backwards, Right & Left); and finally height-constrained Walker, where we penalize the policy for bringing the torso height above a certain threshold. Results on Cartpole & Quadruped are presented in Figure 2.\nWe show that MBOP successfully integrates constraints that were not initially in the dataset and is able to perform well on objectives that are different from the objective of the behavior policy.\nWalker performs similarly, obtaining nearly 80% constraint satisfaction while maintaining a reward of 730. More analysis is available in the Appendix Sec. 5.5." }, { "heading": "4.4 ALGORITHMIC INVESTIGATIONS", "text": "Ablations To better understand the benefits of MBOP’s various elements, we perform three ablations: MBOP-NOPP which replaces fb with a Gaussian prior, MBOP-NOVF which removes fR’s estimated returns, and PDDM which removes both, thus recovering the PDDM controller. We show performance of these four ablations on the Walker dataset in Fig. 3a. A full set of ablations is available in the appendix Figures 4 & 5. Overall we see that the full combination of BC prior, value function and environment model are important for optimal performance. We also see that the PDDM approach is generally below either of the MBOP-NOPP and MBOP-NOVF ablations. Finally, we note that the BC prior when used alone can perform well on certain environments, but on others it stagnates at behavior policy’s performance.\n10000 14000 23000 41000 50000 250000 1000000 2000000 5000000 Data Points\n0\n200\n400\n600\n800\n1000\nEp iso\nde R\net ur\nn\nRLU/Walker/Walk Performance\nMBOP MBOP-NOPP MBOP-NOVF PDDM CLONING DATA\n(a) MBOP ablations’ performance on RLU Walker Dataset. 
We observe that MBOP is consistently more performant than its ablations.\n[Figure 3b: heatmaps of MBOP return over κ ∈ {0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0} and horizon H ∈ {4, 8, 16, 32, 64, 128, 256} on Cartpole, Quadruped, and Walker.]\n(b) MBOP sensitivity to Kappa (κ) and Horizon (H).\nExecution Speed. A frequent concern with planning-based methods is that their slower response times prohibit practical use. We calculate the average control frequency of MBOP on the RLU Walker task using a single Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz core and an Nvidia 1080Ti, and find that MBOP can operate at frequencies ranging from 106 Hz for h = 4 down to 40 Hz for h = 40, with BC operating at 362 Hz. Additional values are presented in Appendix Sec. 5.4.\nHyperparameter Stability. We perform a grid sweep over κ (trajectory re-weighting) and H (planning horizon) on the three RLU environments and visualize the effects on return in Fig. 3b. We observe that overall MBOP maintains consistent performance scores for wide ranges of hyperparameter values, only really degrading near extreme values. Additional analysis is present in Appendix Sec. 5.5." }, { "heading": "5 CONCLUSION", "text": "Planning-based methods provide significantly more flexibility for external systems to interact with the learned controller. Bringing them into the offline data regime opens the door to their use on more real-world systems for which online training is not an option. MBOP provides an easy-to-implement, data-efficient, stable, and flexible algorithm for policy generation. It is easy to implement because the learning components are simple supervised learners; it is data-efficient thanks to its use of multiple complementary estimators; and it is flexible due to its use of on-line planning, which allows it to dynamically react to changing goals, costs and environmental constraints. We show that MBOP can perform competitively in various data regimes, and can provide easily adaptable policies for more complex goal-conditioned or constrained tasks, even if the original data does not provide prior experience. Although MBOP's performance is degraded when offline data is multi-modal or downright random, we believe there are a large number of scenarios where the current operating policy (be it human or automated) is reasonably consistent, but could benefit from being automated and improved upon. In these scenarios we believe that MBOP could be readily applicable. Future work intends to ameliorate performance by investigating the use of goal-conditioned policy priors and value estimates, as well as looking at effective ways to perform offline model selection and evaluation. We sincerely hope that MBOP can be useful as an out-of-the-box algorithm for learning stable and configurable control policies for real systems."
}, { "heading": "500000 PDDM 32 1000 0.9 1.6 0.2 965.0 12.0", "text": "" }, { "heading": "500000 MBOP-NOVF 16 1000 1.9 0.8 0.2 984.2 2.2", "text": "" }, { "heading": "500000 MBOP-NOPP 16 1000 1.9 1.6 0.2 994.0 3.5", "text": "" }, { "heading": "500000 MBOP 8 1000 3.8 0.8 0.2 994.8 0.4", "text": "" }, { "heading": "500000 CLONING - - - - - 973.1 5.6", "text": "" }, { "heading": "200000 PDDM 32 1000 0.9 1.6 0.2 946.4 29.7", "text": "" }, { "heading": "200000 MBOP-NOVF 16 1000 1.9 0.8 0.2 986.6 1.3", "text": "" }, { "heading": "200000 MBOP-NOPP 16 1000 1.9 1.6 0.2 984.5 12.1", "text": "" }, { "heading": "200000 MBOP 8 1000 3.8 0.8 0.2 993.3 1.3", "text": "" }, { "heading": "200000 CLONING - - - - - 972.9 8.6", "text": "" }, { "heading": "100000 PDDM 32 1000 0.9 1.6 0.2 967.1 12.5", "text": "" }, { "heading": "100000 MBOP-NOVF 16 1000 1.9 0.8 0.2 983.8 2.2", "text": "" }, { "heading": "100000 MBOP-NOPP 16 1000 1.9 1.6 0.2 935.3 35.2", "text": "" }, { "heading": "100000 MBOP 8 1000 3.8 0.8 0.2 989.4 3.2", "text": "" }, { "heading": "100000 CLONING - - - - - 966.5 15.1", "text": "" }, { "heading": "5000000 PDDM 32 100 2.8 1.6 0.2 620.1 98.7", "text": "" }, { "heading": "5000000 MBOP-NOVF 16 100 9.4 0.2 0.2 833.5 100.7", "text": "" }, { "heading": "5000000 MBOP-NOPP 4 100 22.5 1.6 0.2 784.0 140.8", "text": "" }, { "heading": "5000000 MBOP 4 100 7.5 0.4 0.2 908.8 54.3", "text": "" }, { "heading": "5000000 CLONING - - - - - 759.9 48.2", "text": "" }, { "heading": "2000000 PDDM 32 100 2.8 1.6 0.2 460.3 117.6", "text": "" }, { "heading": "2000000 MBOP-NOVF 16 100 9.4 0.2 0.2 807.8 44.2", "text": "" }, { "heading": "2000000 MBOP-NOPP 4 100 22.5 1.6 0.2 823.8 35.5", "text": "" }, { "heading": "2000000 MBOP 4 100 7.5 0.4 0.2 872.4 70.2", "text": "" }, { "heading": "2000000 CLONING - - - - - 743.6 85.6", "text": "" }, { "heading": "1000000 PDDM 32 100 2.8 1.6 0.2 308.2 140.3", "text": "" }, { "heading": "1000000 MBOP-NOVF 16 100 9.4 0.2 0.2 814.3 88.4", "text": "" }, { "heading": "1000000 MBOP-NOPP 4 100 22.5 1.6 0.2 411.2 183.5", "text": "" }, { "heading": "1000000 MBOP 4 100 7.5 0.4 0.2 797.3 229.5", "text": "" }, { "heading": "1000000 CLONING - - - - - 701.5 190.9", "text": "" }, { "heading": "250000 PDDM 32 100 2.8 1.6 0.2 231.3 112.2", "text": "" }, { "heading": "250000 MBOP-NOVF 16 100 9.4 0.2 0.2 770.4 115.1", "text": "" }, { "heading": "250000 MBOP-NOPP 4 100 22.5 1.6 0.2 269.1 155.9", "text": "" }, { "heading": "250000 MBOP 4 100 7.5 0.4 0.2 844.7 48.4", "text": "" }, { "heading": "250000 CLONING - - - - - 686.8 205.4", "text": "" }, { "heading": "1000000 PDDM 16 100 0.01 0.4 0.2 -54.6 1.7", "text": "" }, { "heading": "1000000 MBOP-NOVF 32 200 0.3 0.1 0 3015.6 241.6", "text": "" }, { "heading": "1000000 MBOP-NOPP 64 1000 0.01 0.4 0 -60.7 2.0", "text": "" }, { "heading": "1000000 MBOP 16 100 0.1 0.2 0 2910.2 579.6", "text": "" }, { "heading": "1000000 CLONING - - - - - 3004.3 142.3", "text": "" }, { "heading": "400000 PDDM 4 1000 0.01 0.1 0.2 -52.9 0.6", "text": "" }, { "heading": "400000 MBOP-NOVF 16 200 0.3 0.1 0 3019.5 128.9", "text": "" }, { "heading": "400000 MBOP-NOPP 64 100 0.03 0.4 0 -61.4 2.1", "text": "" }, { "heading": "400000 MBOP 16 100 0.3 0.1 0 3000.3 388.4", "text": "" }, { "heading": "400000 CLONING - - - - - 3025.1 21.0", "text": "" }, { "heading": "200000 PDDM 64 500 0.3 0.2 0.2 -59.7 2.8", "text": "" }, { "heading": "200000 MBOP-NOVF 32 500 0.3 0.1 0 3024.2 198.6", "text": "" }, { "heading": "200000 MBOP-NOPP 4 500 0.01 0.05 0 -52.9 0.8", "text": "" }, { "heading": "200000 MBOP 
{ "heading": "5.3 MBOP ABLATIONS", "text": "Full results for the various ablations of MBOP are visualized in Figures 4 and 5." }, { "heading": "5.4 EXECUTION SPEED", "text": "Execution speeds on the RLU Walker task are presented in Table 9. [Table 9 excerpt: BC, horizon N/A, 362 Hz.] We see that MBOP can easily serve control loops running at tens of Hz, but cannot currently attain 100 Hz with longer horizons. For lower-level control policies for which high frequency is important, we would suggest distilling the controller into a task-specific policy similar to MoREL (Kidambi et al., 2020) or MOPO (Yu et al., 2020)." }, { "heading": "5.5 MBOP PARAMETERS", "text": "All parameters were set as follows, except for the D4RL Walker task where we use 15 ensemble networks.\n• # FC Layers: 2\n• Size FC Layers: 500\n• # Ensemble Networks: 3\n• Learning Rate: 0.001\n• Batch Size: 512\n• # Epochs: 40\nCONTINUED ANALYSIS OF CONSTRAINED TASKS\nWe can see the height-constrained Walker performance in Figure 6a. MBOP is able to satisfy the height constraint 80% of the episode while maintaining reasonable performance. Over the various ablations we have found that MBOP is better able to maintain base-task performance for similar constraint satisfaction rates.\nHYPERPARAMETER STABILITY\nFigure 7 shows the sensitivity of MBOP and associated ablations to the Beta and Horizon parameters. Figure 8 shows the effects of Sigma on MBOP and ablations on the RLU datasets. Figure 6b shows sensitivity to Horizon and Kappa jointly." }, { "heading": "5.6 IMPACT OF FILTERING POOR EPISODES", "text": "As mentioned above, for RLU/Quadruped and RLU/Walker we exclude the episodes with lowest returns before training the behavior cloning and value function models. In this section we report the performances on these environments with various filtering thresholds.\nFor each of these two environments, and each of the dataset sizes, we keep a subset of the initial dataset by filtering on the top episodes. We experiment with filters varying from the top-1% to the top-100% (i.e. the entire raw dataset).\n[Figure: MBOP sensitivity to Horizon × Kappa on RLU – 100%.]" } ]
2,021
null
SP:06ca143a8bf0570ccfba6abcb37a22ab23a9c3dd
[ "Some generative models have been proposed for causal effect estimation but they often do not have a competitive performance. Recent work suggested that a combination of generative and discriminative model may improve treatment estimation with observational data, and further suggests a generic latent variable model for factorizing selection bias, as well as outcome. The author(s) build on this work and propose a set of deep generative models, with a hybrid objective function (generative + discriminative), that outperforms current approaches for ATE. " ]
This paper provides a generative approach for causal inference using data from observational studies. Inspired by the work of Kingma et al. (2014), we propose a sequence of three architectures (namely Series, Parallel, and Hybrid) that each incorporate their M1 and M2 models as building blocks. Each architecture is an improvement over the previous one in terms of estimating causal effects, culminating in the Hybrid model. The Hybrid model is designed to encourage decomposing the underlying factors of any observational dataset; this, in turn, helps to accurately estimate all treatment outcomes. Our empirical results demonstrate the superiority of all three proposed architectures compared to both state-of-the-art discriminative as well as other generative approaches in the literature.
[]
[ { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "MINE: Mutual information neural estimation", "venue": null, "year": 2018 }, { "authors": [ "Christopher Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": "arXiv preprint:1804.03599,", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Jonathan Gordon", "José Miguel Hernández-Lobato" ], "title": "Combining deep generative and discriminative models for bayesian semi-supervised learning", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "JMLR, 13(March),", "year": 2012 }, { "authors": [ "Ruocheng Guo", "Lu Cheng", "Jundong Li", "P. Richard Hahn", "Huan Liu" ], "title": "A survey of learning causality with data: Problems and methods", "venue": "arXiv preprint:1809.09337,", "year": 2018 }, { "authors": [ "Negar Hassanpour", "Russell Greiner" ], "title": "Counterfactual regression with importance sampling weights", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Negar Hassanpour", "Russell Greiner" ], "title": "Learning disentangled representations for counterfactual regression", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "Jennifer L Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "Matt Hoffman", "Carlos Riquelme", "Matthew Johnson" ], "title": "The β-VAE’s implicit prior. 
2017", "venue": "URL http://bayesiandeeplearning.org/2017/papers/66.pdf", "year": 2017 }, { "authors": [ "Paul W Holland" ], "title": "Statistics and causal inference", "venue": "Journal of the American statistical Association,", "year": 1986 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Kun Kuang", "Peng Cui", "Bo Li", "Meng Jiang", "Shiqiang Yang", "Fei Wang" ], "title": "Treatment effect estimation with data-driven variable decomposition", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled", "venue": null, "year": 2019 }, { "authors": [ "Romain Lopez", "Jeffrey Regier", "Michael I Jordan", "Nir Yosef" ], "title": "Information constraints on autoencoding variational bayes", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Christos Louizos", "Kevin Swersky", "Yujia Li", "Max Welling", "Richard Zemel" ], "title": "The variational fair autoencoder", "venue": "arXiv preprint:1511.00830,", "year": 2015 }, { "authors": [ "Christos Louizos", "Uri Shalit", "Joris M Mooij", "David Sontag", "Richard Zemel", "Max Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "NeurIPS", "year": 2017 }, { "authors": [ "Marian F MacDorman", "Jonnae O Atkinson" ], "title": "Infant mortality statistics from the 1996 period linked birth/infant death", "venue": "dataset. Monthly Vital Statistics Report,", "year": 1998 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "arXiv preprint:0902.3430,", "year": 2009 }, { "authors": [ "Andrew McCallum", "Chris Pal", "Gregory Druck", "Xuerui Wang" ], "title": "Multi-conditional learning: Generative/discriminative training for clustering and classification", "venue": "In AAAI,", "year": 2006 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan" ], "title": "On discriminative vs. 
generative classifiers: A comparison of logistic regression and naive Bayes", "venue": "In NeurIPS,", "year": 2002 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT Press,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": null, "year": 2014 }, { "authors": [ "Paul R Rosenbaum", "Donald B Rubin" ], "title": "The central role of the propensity score in observational studies for causal effects", "venue": null, "year": 1983 }, { "authors": [ "Donald B Rubin" ], "title": "Estimating causal effects of treatments in randomized and nonrandomized studies", "venue": "Journal of Educational Psychology,", "year": 1974 }, { "authors": [ "Uri Shalit", "Fredrik D. Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: Generalization bounds and algorithms", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Claudia Shi", "David Blei", "Victor Veitch" ], "title": "Adapting neural networks for the estimation of treatment effects", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Liuyi Yao", "Sheng Li", "Yaliang Li", "Mengdi Huai", "Jing Gao", "Aidong Zhang" ], "title": "Representation learning for treatment effect estimation from observational data", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "GANITE: Estimation of individualized treatment effects using generative adversarial nets", "venue": "In ICLR,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "As one of the main tasks in studying causality (Peters et al., 2017; Guo et al., 2018), the goal of Causal Inference is to figure out how much the value of a certain variable would change (i.e., the effect) had another certain variable (i.e., the cause) changed its value. A prominent example is the counterfactual question (Rubin, 1974; Pearl, 2009) “Would this patient have lived longer [and by how much], had she received an alternative treatment?”. Such question is often asked in the context of precision medicine, which attempts to identify which medical procedure t ∈ T will benefit a certain patient x the most, in terms of the treatment outcome y ∈ R (e.g., survival time). A fundamental problem in causal inference is the unobservablity of the counterfactual outcomes (Holland, 1986). That is, for each subject i, any real-world dataset can only contain the outcome of the administered treatment (aka the observed outcome: yi), but not the outcome(s) of the alternative treatment(s) (aka the counterfactual outcome(s) ) — i.e., yti for t ∈ T \\ {ti}. In other words, the causal effect is never observed (i.e., missing in any training data) and cannot be used to train predictive models, nor can it be used to evaluated a proposed model. This makes estimating causal effects a more difficult problem than that of generalization in the supervised learning paradigm.\nIn general, we can categorize most machine learning algorithms into two general approaches, which differ in how the input features x and their target values y are modeled (Ng & Jordan, 2002):\nDiscriminative methods focus solely on modeling the conditional distribution p(y|x) with the goal of direct prediction of y for each instance x. For prediction tasks, discriminative approaches are often more accurate since they use the model parameters more efficiently than generative approaches. Most of the current causal inference methods are discriminative, including the Balancing Neural Network (BNN) (Johansson et al., 2016), CounterFactual Regression Network (CFR-Net) (Shalit et al., 2017), and CFR-Net’s extensions — cf., (Yao et al., 2018; Hassanpour & Greiner, 2019; 2020) — as well as Dragon-Net (Shi et al., 2019).\nGenerative methods, on the other hand, describe the relationship between x and y by their joint probability distribution p(x, y). This, in turn, would allow the generative model to answer arbitrary queries, including coping with missing features x using the marginal distribution p(x) or [similar to discriminative models] predicting the unknown target values y via p(y|x). A promising direction forward for causal inference is developing generative models, using either Generative Adverserial Network (GAN) (Goodfellow et al., 2014) or Variational Auto-Encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). This has led to two generative approaches for causal inference: GANs for inference of Individualised Treatment Effects (GANITE) (Yoon et al., 2018) and Causal Effect\nVAE (CEVAE) Louizos et al. (2017). However, neither of the two achieve competitive performance in terms of treatment effect estimation compared to the discriminative approaches.\nAlthough discriminative models have excellent predictive performance, they suffer from two drawbacks: (i) overfitting, and (ii) making highly-confident predictions, even for instances that are “far” from the observed training data. 
Generative models based on Bayesian inference, on the other hand, can handle both of these drawbacks: issue (i) can be minimized by taking an average over the posterior distribution of model parameters; and issue (ii) can be addressed by explicitly providing model uncertainty via the posterior (Gordon & Hernández-Lobato, 2020). Although exact inference is often intractable, efficient approximations to the parameter posterior distribution are possible through variational methods. Here, we use the Variational Auto-Encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) for the Bayesian inference component of our causal inference method.

Contribution: In this paper, we propose three interrelated Bayesian model architectures (namely Series, Parallel, and Hybrid) that employ the VAE framework to address the task of causal inference for binary treatments. We find that the best performing architecture is the Hybrid model, which is [partially] successful in decomposing the underlying factors of any observational dataset. This is a valuable property, as it means the model can accurately estimate all treatment outcomes. We demonstrate that these models significantly outperform the state-of-the-art in terms of treatment effect estimation performance on two publicly available benchmarks, as well as on a fully synthetic dataset that allows for detailed performance analyses." }, { "heading": "2 RELATED WORKS", "text": "CFR-Net Shalit et al. (2017) considered the binary treatment task and attempted to learn a representation space Φ that reduces selection bias by making Pr( Φ(x) | t=0 ) and Pr( Φ(x) | t=1 ) as close to each other as possible, provided that Φ(x) retains enough information that the learned regressors {h^t(Φ(·)) : t ∈ {0, 1}} can generalize well on the observed outcomes. Their objective function includes L[ yi, h^{ti}(Φ(xi)) ], which is the loss of predicting the observed outcome for sample i (described as xi), weighted by ωi = ti/(2u) + (1−ti)/(2(1−u)), where u = Pr( t=1 ). This is effectively setting ωi = 1/(2 Pr(ti)), where Pr( ti ) is the probability of selecting treatment ti over the entire population. (A short sketch of these weights is given below.)
DR-CFR Hassanpour & Greiner (2020) argued against the standard implicit assumption that all of the covariates X are confounders (i.e., contribute to both treatment assignment and outcome determination). Instead, they proposed a graphical model similar to that in Figure 1 and designed a discriminative causal inference approach accordingly — built on top of the CFR-Net. Specifically, their model, named Disentangled Representations for CFR (DR-CFR), includes three representation networks, each trained with constraints to ensure that each component corresponds to its respective underlying factor. While the idea behind DR-CFR provides an interesting intuition, it is known that only generative models (and not discriminative ones) can truly identify the underlying data generating mechanism. This paper is a step in this direction.

Dragon-Net Shi et al. (2019)’s main objective was to estimate the Average Treatment Effect (ATE), which they explain requires a two-stage procedure: (i) fit models that predict the outcomes for both treatments; and (ii) find a downstream estimator of the effect. Their method is based on a classic result from strong ignorability — i.e., Theorem 3 in (Rosenbaum & Rubin, 1983) — that states:

$(y^1, y^0) \perp t \mid x \;\&\; \Pr(t=1 \mid x) \in (0,1) \;\Longrightarrow\; (y^1, y^0) \perp t \mid b(x) \;\&\; \Pr(t=1 \mid b(x)) \in (0,1)$

where b(x) is a balancing score[1]. They consider the propensity score as a balancing score and argue that only the parts of X relevant for predicting T are required for the estimation of the causal effect[2]. This theorem only provides a way to match treated and control instances though — i.e., it helps finding potential counterfactuals from the alternative group to calculate ATE. Shi et al. (2019), however, used this theorem to derive minimal representations on which to regress to estimate the outcomes.

[1] That is, X ⊥ T | b(X) (Rosenbaum & Rubin, 1983). [2] The authors acknowledge that this would hurt the predictive performance for individual outcomes. As a result, this yields inaccurate estimation of Individual Treatment Effects (ITEs).

GANITE Yoon et al. (2018) proposed the counterfactual GAN, whose generator G, given {x, t, y^t}, estimates the counterfactual outcomes (ŷ^¬t); and whose discriminator D tries to identify which of {[x, 0, y^0], [x, 1, y^1]} is the factual outcome. It is, however, unclear why this setup should force G to produce samples that are indistinguishable from the factual outcomes, especially as D can just learn the treatment selection mechanism instead of distinguishing the factual outcomes from counterfactuals. Although this work is among the few generative approaches for causal inference, our empirical results (in Section 4) show that it does not effectively estimate counterfactual outcomes.

CEVAE Louizos et al. (2017) used a VAE to extract latent confounders from their observed proxies in X. While this is an interesting step in the right direction, empirical results show that it does not always accurately estimate treatment effects (see Section 4). The authors note that this may be because CEVAE is not able to address the problem of selection bias. Another reason that we think contributes to CEVAE’s sub-optimal performance is its assumed graphical model of the underlying data generating mechanism (depicted in Figure 2). This model assumes that there is only one latent variable Z (confounding T and Y) that generates the entire observational data; however, we know from (Kuang et al., 2017) and (Hassanpour & Greiner, 2020) that there must be more (see Figure 1).

M1 and M2 VAEs In an attempt to enhance conventional representation learning with VAEs — referred to as the M1 model (Kingma & Welling, 2014; Rezende et al., 2014) — in a semi-supervised manner, Kingma et al. (2014) proposed the M2 VAE. While the M1 model helps learning latent representations from the covariate matrix X alone, the M2 model also allows the target information to guide the representation learning process. In our work, the target information includes the treatment bit T as well as the observed outcome Y. This additional information helps learning more expressive representations, which was not possible with the unsupervised M1 model. Appendix A.1 presents a more detailed overview of the M1 and M2 VAEs." }, { "heading": "3 METHOD", "text": "Following (Hassanpour & Greiner, 2020) and without loss of generality, we assume that the random variable X follows an unknown joint probability distribution Pr(X | Γ, ∆, Υ, Ξ), where Γ, ∆, Υ, and Ξ are non-overlapping independent factors. Moreover, we assume that treatment T follows Pr(T | Γ, ∆) (i.e., Γ and ∆ are the factors responsible for selection bias) and outcome Y^T follows Pr_T(Y^T | ∆, Υ); see Figure 1. (A small synthetic sketch of this factorization is given below.) 
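The following is a minimal synthetic sketch of this factorization, loosely following the generation procedure detailed in Appendix A.2; the dimensions, the logistic selection rule, and the outcome functions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_gamma, m_delta, m_upsilon = 1000, 8, 8, 8

# Non-overlapping latent factors: Gamma drives T only,
# Delta confounds T and Y, Upsilon drives Y only.
Gamma = rng.normal(size=(n, m_gamma))
Delta = rng.normal(size=(n, m_delta))
Upsilon = rng.normal(size=(n, m_upsilon))
X = np.hstack([Gamma, Delta, Upsilon])          # observed covariates

# Treatment depends on (Gamma, Delta) through a logistic policy.
theta = rng.normal(size=m_gamma + m_delta)
logits = np.hstack([Gamma, Delta]) @ theta
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Outcomes depend on (Delta, Upsilon) only.
Phi = np.hstack([Delta, Upsilon])
y0 = Phi @ rng.normal(size=Phi.shape[1]) + rng.normal(0, 0.1, n)
y1 = (Phi ** 2) @ rng.normal(size=Phi.shape[1]) + rng.normal(0, 0.1, n)
y_obs = np.where(t == 1, y1, y0)   # only the factual outcome is observed
```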
Observe that the factor Γ (resp., Υ) partially determines only T (resp., Y), but not Y (resp., T); and ∆ includes the confounding factors between T and Y.

Our goal is to design generative model architectures that encourage learning disentangled representations of these four underlying latent factors (see Figure 1). In other words, it is an attempt to decompose and separately learn the underlying factors that are responsible for determining T and Y. To achieve this, we propose three architectures (as illustrated in Figures 3(a), 3(b), and 3(c)), each employing a VAE (Kingma & Welling, 2014; Rezende et al., 2014) that includes a decoder (generative model) and an encoder (variational posterior). Specifically, we use the M1 and M2 models from (Kingma et al., 2014) as our building blocks, leading to a Series architecture, a Parallel architecture, and a Hybrid one. Each component is parametrized as a deep neural network." }, { "heading": "3.1 THE VARIATIONAL AUTO-ENCODER COMPONENT", "text": "" }, { "heading": "3.1.1 THE SERIES ARCHITECTURE", "text": "The architecture of the Series model is illustrated in Figure 3(a). Louizos et al. (2015) proposed a similar architecture to address fairness in machine learning, but using a binary sensitive variable S (e.g., gender, race, etc.) rather than the treatment T. Here, we employ this architecture for causal inference and explain why it should work. We hypothesize that this structure functions as a distillation tower: the bottom M2 VAE attempts to decompose Γ (guided by T) from ∆ and Υ (captured by Z1); and the top M2 VAE attempts to learn ∆ and Υ (guided by Y). Decoder and encoder components of the Series model (parametrized by θs and φs respectively) involve the following distributions:

Priors: pθs(z2); pθs(z1|y, z2)
Likelihood: pθs(x|z1, t)
Posteriors: qφs(z1|x, t); qφs(y|z1); qφs(z2|y, z1)

The goal is to maximize the conditional log-likelihood of the observed data (the left-hand side of the following inequality) by maximizing the Evidence Lower BOund (ELBO; the right-hand side) — i.e.,

$$\sum_{i=1}^{N} \log p(x_i \mid t_i, y_i) \;\geq\; \sum_{i=1}^{N} \mathbb{E}_{q_{\phi_s}(z_1 \mid x, t)}\big[\log p_{\theta_s}(x_i \mid z_{1i}, t_i)\big] \quad (1)$$
$$- \mathrm{KL}\big(q_{\phi_s}(z_1 \mid x, t)\,\|\,p_{\theta_s}(z_1 \mid y, z_2)\big) - \mathrm{KL}\big(q_{\phi_s}(z_2 \mid y, z_1)\,\|\,p_{\theta_s}(z_2)\big) \quad (2)$$

where KL denotes the Kullback-Leibler divergence, pθs(z2) is the unit multivariate Gaussian (i.e., N(0, I)), and the other distributions are parameterized as deep neural networks. (A small numerical sketch of these Gaussian KL terms is given below.)" },
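Since these distributions are typically diagonal Gaussians predicted by small networks, every KL term in the bound has a closed form; a minimal sketch of that computation (our own helper, not from any released code):

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ),
    summed over latent dimensions and averaged over the batch."""
    kl = 0.5 * (
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )
    return kl.sum(dim=1).mean()

# E.g., the first KL term of Eq. (2) compares the encoder q(z1|x,t)
# against the conditional prior p(z1|y,z2), both predicted by MLPs.
```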
{ "heading": "3.1.2 THE PARALLEL ARCHITECTURE", "text": "The Series model is composed of two stacked M2 models. However, Kingma et al. (2014) showed that an M1+M2 stacked architecture learns better representations than an M2 model alone for a downstream prediction task. This motivated us to design a double M1+M2 Parallel model, where one arm allows the outcome to guide the representation learning via Z1 and the other allows the treatment to guide it via Z3. This architecture is illustrated in Figure 3(b). We hypothesize that Z1 would learn ∆ and Υ, and Z3 would learn Γ (and perhaps partially ∆). Decoder and encoder components of the Parallel model (parametrized by θp and φp respectively) involve the following distributions:

Priors: pθp(z2); pθp(z4); pθp(z1|y, z2); pθp(z3|t, z4)
Likelihood: pθp(x|z1, z3)
Posteriors: qφp(z1|x, t); qφp(z3|x, y); qφp(y|z1); qφp(t|z3); qφp(z2|y, z1); qφp(z4|t, z3)

Here, the conditional log-likelihood can be lower bounded by:

$$\sum_{i=1}^{N} \log p(x_i \mid t_i, y_i) \;\geq\; \sum_{i=1}^{N} \mathbb{E}_{q_{\phi_p}(z_1, z_3 \mid x, t, y)}\big[\log p_{\theta_p}(x_i \mid z_{1i}, z_{3i})\big] \quad (3)$$
$$- \mathrm{KL}\big(q_{\phi_p}(z_1 \mid x, t)\,\|\,p_{\theta_p}(z_1 \mid y, z_2)\big) - \mathrm{KL}\big(q_{\phi_p}(z_2 \mid y, z_1)\,\|\,p_{\theta_p}(z_2)\big) \quad (4)$$
$$- \mathrm{KL}\big(q_{\phi_p}(z_3 \mid x, y)\,\|\,p_{\theta_p}(z_3 \mid t, z_4)\big) - \mathrm{KL}\big(q_{\phi_p}(z_4 \mid t, z_3)\,\|\,p_{\theta_p}(z_4)\big) \quad (5)$$" }, { "heading": "3.1.3 THE HYBRID ARCHITECTURE", "text": "The final architecture, Hybrid, attempts to combine the best capabilities of the previous two architectures. The backbone of the Hybrid model has a Series architecture that separates Γ (the factors related to the treatment T; captured by the right module with Z3 as its head) from ∆ and Υ (the factors related to the outcome Y; captured by the left module with Z7 as its head). The left module itself consists of a Parallel model that attempts to go one step further and decompose ∆ from Υ. This is done with the help of a discrepancy penalty (see Section 3.3). Figure 3(c) illustrates our designed architecture for the Hybrid model. Decoder and encoder components of the Hybrid model (parametrized by θh and φh respectively) involve the following distributions:

Priors: pθh(z2); pθh(z4); pθh(z6); pθh(z1|y, z2); pθh(z3|t, z4); pθh(z5|y, z6); pθh(z7|z1, z5)
Likelihood: pθh(x|z3, z7)
Posteriors: qφh(z7|x, t); qφh(z1|z7); qφh(z5|z7); qφh(z3|x, y); qφh(y|z1, z5); qφh(t|z3); qφh(z2|y, z1); qφh(z6|y, z5); qφh(z4|t, z3)

Here, the conditional log-likelihood can be lower bounded by:

$$\sum_{i=1}^{N} \log p(x_i \mid t_i, y_i) \;\geq\; \sum_{i=1}^{N} \mathbb{E}_{q_{\phi_h}(z_3, z_7 \mid x, t, y)}\big[\log p_{\theta_h}(x_i \mid z_{3i}, z_{7i})\big] \quad (6)$$
$$- \mathrm{KL}\big(q_{\phi_h}(z_1 \mid z_7)\,\|\,p_{\theta_h}(z_1 \mid y, z_2)\big) - \mathrm{KL}\big(q_{\phi_h}(z_2 \mid y, z_1)\,\|\,p_{\theta_h}(z_2)\big) \quad (7)$$
$$- \mathrm{KL}\big(q_{\phi_h}(z_3 \mid x, y)\,\|\,p_{\theta_h}(z_3 \mid t, z_4)\big) - \mathrm{KL}\big(q_{\phi_h}(z_4 \mid t, z_3)\,\|\,p_{\theta_h}(z_4)\big) \quad (8)$$
$$- \mathrm{KL}\big(q_{\phi_h}(z_5 \mid z_7)\,\|\,p_{\theta_h}(z_5 \mid y, z_6)\big) - \mathrm{KL}\big(q_{\phi_h}(z_6 \mid y, z_5)\,\|\,p_{\theta_h}(z_6)\big) \quad (9)$$
$$- \mathrm{KL}\big(q_{\phi_h}(z_7 \mid x, t)\,\|\,p_{\theta_h}(z_7 \mid z_1, z_5)\big) \quad (10)$$

The first term in the ELBO (i.e., the right-hand side of Equations (1), (3), or (6)) is called the Reconstruction Loss (RecL), and the remaining term(s) (i.e., Equation (2), the sum of Equations (4) and (5), or the sum of Equations (7), (8), (9), and (10)) are referred to as the KL Divergence (KLD). Concisely, the ELBO can be written as RecL − KLD, which is to be maximized.

3.2 DISENTANGLEMENT WITH β-VAE

As mentioned earlier, we want the learned latent variables to be disentangled, to match our assumption of non-overlapping factors Γ, ∆, and Υ. To ensure this, we employ the β-VAE (Higgins et al., 2017), which adds a hyperparameter β as a multiplier of the KLD part of the ELBO. This adjustable hyperparameter facilitates a trade-off that helps balance the latent channel capacity and independence constraints (handled by the KL terms) with the reconstruction accuracy — i.e., including the β hyperparameter grants better control over the level of disentanglement in the learned representations (Burgess et al., 2018). Therefore, the generative objective to be minimized becomes:

$$\mathcal{L}_{\mathrm{VAE}} = -\mathrm{RecL} + \beta \cdot \mathrm{KLD} \quad (11)$$

Although Higgins et al. (2017) suggest that β be set greater than 1 in most applications, Hoffman et al. (2017) show that having a β < 1 weight on the KL term can be interpreted as optimizing the ELBO under an alternative prior, which functions as a regularization term to prevent degeneracy. (A sketch of this β-weighted objective is given below.)" 
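A minimal sketch of assembling the β-weighted objective of Eq. (11); the KL terms would come from closed-form Gaussian KLs like the helper sketched earlier, and the names here are illustrative:

```python
def vae_loss(recon_log_lik, kl_terms, beta=0.5):
    """Eq. (11): L_VAE = -RecL + beta * KLD.

    recon_log_lik: E_q[log p(x | z, ...)] for the batch (scalar tensor).
    kl_terms: list of the KL divergences in Eq. (2), (4)-(5), or (7)-(10),
              depending on the architecture (Series, Parallel, or Hybrid).
    """
    kld = sum(kl_terms)
    return -recon_log_lik + beta * kld
```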
}, { "heading": "3.3 DISCREPANCY", "text": "Although all the three proposed graphical models suggest that T and Z1 are statistically independent (see, for example, the collider structure (at X): T → X ← Z1 in Figure 3(a)), an information leak is quite possible due to the correlation between the outcome y and treatment t in the data. We therefore require an extra regularization term on qφ(z1|t) in order to penalize the discrepancy (denoted by disc) between the conditional distributions of z1 given t= 0 versus given t= 1. To achieve this regularization, we calculate the disc using an Integral Probability Metric (IPM) (Mansour et al., 2009) c1 that measures the distance between the two above-mentioned distributions:\nLdisc = IPM ( {z1}i:ti=0, {z1}i:ti=1 ) (12)\nc1In this work, we use the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012)." }, { "heading": "3.4 PREDICTIVE LOSS", "text": "Note, however, that neither the VAE nor the disc losses contribute to training a predictive model for outcomes. To remedy this, we extend the objective function to include a discriminative term for the regression loss of predicting y: c1\nLpred = 1\nN N∑ i=1 ωi · L [ yi, ŷi ] (13)\nwhere the predicted outcome ŷi is derived as the mean of the qtiφ (yi|z1i) posterior trained for the respective treatment ti; L [ yi, ŷi ] is the factual loss (i.e., L2 loss for real-valued outcomes and log loss for binary-valued outcomes); and ωi represent the weights R1: that attempt to account for selection bias. We consider two approaches in the literature to derive the weights: (i) the Population-Based (PB) weights as proposed in CFR-Net (Shalit et al., 2017); and (ii) the Context-Aware (CA) weights as proposed by Hassanpour & Greiner (2019). Note that disentangling ∆ from Υ is only beneficial when using the CA weights, since we need just the ∆ factors to derive them (Hassanpour & Greiner, 2020)." }, { "heading": "3.5 FINAL MODEL(S)", "text": "Putting everything together, the overall objective function to be minimized is then:\nJ = Lpred + α · Ldisc + γ · LVAE + λ ·Reg (14) where Reg penalizes the model complexity.\nThis objective function is motivated by the work of McCallum et al. (2006), which suggested optimizing a convex combination of discriminative and generative losses would indeed improve predictive performance. As an empirical verification, note that for γ = 0, the Series and Parallel models effectively reduce to CFR-Net. However, our empirical results (cf., Section 4) suggest that the generative term in the objective function helps learning representations that embed more relevant information for estimating outcomes than that of Φ in CFR-Net.\nWe refer to the family of our proposed methods as VAE-CI (Variational Auto-Encoder for Causal Inference); specifically: {S, P, H}-VAE-CI, for Series, Parallel, and Hybrid respectively." }, { "heading": "4 EXPERIMENTS, RESULTS, AND DISCUSSION", "text": "" }, { "heading": "4.1 BENCHMARKS", "text": "Infant Health and Development Program (IHDP) The original IHDP randomized controlled trial was designed to evaluate the effect of specialist home visits on future cognitive test scores of premature infants. Hill (2011) induced selection bias by removing a non-random subset of the treated population. The dataset contains 747 instances (608 control and 139 treated) with 25 covariates. 
{ "heading": "4 EXPERIMENTS, RESULTS, AND DISCUSSION", "text": "" }, { "heading": "4.1 BENCHMARKS", "text": "Infant Health and Development Program (IHDP) The original IHDP randomized controlled trial was designed to evaluate the effect of specialist home visits on the future cognitive test scores of premature infants. Hill (2011) induced selection bias by removing a non-random subset of the treated population. The dataset contains 747 instances (608 control and 139 treated) with 25 covariates. We use the same benchmark (with 100 realizations of outcomes) provided by and used in (Johansson et al., 2016) and (Shalit et al., 2017).

Atlantic Causal Inference Conference 2018 (ACIC’18) ACIC’18 is a collection of binary-treatment datasets released for a data challenge. Following (Shi et al., 2019), we use a subset of the datasets with instances N ∈ {1, 5, 10} × 10^3 (four datasets in each category). The covariates matrix for each dataset involves 177 features and is sub-sampled from a table of medical measurements taken from the Linked Birth and Infant Death Data (LBIDD) (MacDorman & Atkinson, 1998), which contains information corresponding to 100,000 subjects.

Fully Synthetic Datasets We generated a set of synthetic datasets according to the procedure described in (Hassanpour & Greiner, 2020); see Section A.2 for an overview. We considered all the viable datasets in a mesh generated by various sets of variables, of sizes mΓ, m∆, mΥ ∈ {0, 4, 8} and mΞ = 1. This creates 24 scenarios that consider all possible relative sizes of the factors Γ, ∆, and Υ (there are 3^3 = 27 combinations in total; however, we removed the three combinations that generate pure-noise outcomes — i.e., ∆=Υ=∅: (0, 0, 0), (4, 0, 0), and (8, 0, 0)). For each scenario, we synthesized multiple datasets with various initial random seeds in order to allow for statistical significance testing of the performance comparisons between various methods.

Table 1: IHDP (100 realizations) benchmark

METHOD | PEHE | ATE
CFR | 0.75 (0.57) | 0.08 (0.10)
DR-CFR | 0.65 (0.37) | 0.03 (0.04)
DRAGON | NA | 0.14 (0.15)
GANITE | 2.81 (2.30) | 0.24 (0.46)
CEVAE | 2.50 (3.47) | 0.18 (0.25)
S-VAE-CI | 0.51 (0.37) | 0.00 (0.02)
P-VAE-CI | 0.52 (0.36) | 0.01 (0.03)
H-VAE-CI (PB) | 0.49 (0.36) | 0.01 (0.02)
H-VAE-CI (CA) | 0.48 (0.35) | 0.01 (0.01)

Table 2: ACIC’18 (N≤10K) benchmark

METHOD | PEHE | ATE
CFR | 5.13 (5.59) | 1.21 (1.81)
DR-CFR | 3.86 (3.39) | 0.80 (1.41)
DRAGON | NA | 0.48 (0.77)
GANITE | 3.55 (2.27) | 0.69 (0.65)
CEVAE | 5.30 (5.52) | 3.29 (3.50)
S-VAE-CI | 2.73 (2.39) | 0.51 (0.82)
P-VAE-CI | 2.62 (2.26) | 0.37 (0.75)
H-VAE-CI (PB) | 1.78 (1.27) | 0.44 (0.77)
H-VAE-CI (CA) | 1.66 (1.30) | 0.39 (0.75)

PEHE and ATE measures (lower is better) are represented in the form of “mean (standard deviation)”." }, { "heading": "4.2 EVALUATING IDENTIFICATION OF THE UNDERLYING FACTORS", "text": "To evaluate the identification performance of the underlying factors, we use a fully synthetic dataset with mΓ = m∆ = mΥ = 8 and mΞ = 1. We set x to be one of the four dummy vectors V1..4 and input it to each trained representation network Zj. Three of these vectors had “1” in the 8 positions associated with Γ, ∆, and Υ respectively, and the remaining 17 positions of each vector were filled with “0”. The fourth vector was all “1” except for the last position (the noise), which was “0”. This helps measure the maximum amount of information that is passed to the final layer of each representation network.

We let Oi,j be the elu output (here, ∈ R^200) of the encoder network Zj when x = Vi. The average of the 200 values of Oi,j (Avg(Oi,j)) represents the power of the signal that is produced by the Zj channel on the input Vi. The values shown in Figure 4’s tables are the ratios of Avg(O1,j), Avg(O2,j), and Avg(O3,j) divided by Avg(O4,j) for each of the learned representation networks. Note that a larger ratio indicates that the respective representation network Zj has allowed more of the input signal Vi to pass through. (A small sketch of this probing step is given below.) 
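A minimal sketch of this probing procedure; the encoder objects, dimensions, and names are illustrative:

```python
import numpy as np

def probe_ratios(encoders, m=8, m_xi=1):
    """Feed block-indicator dummy vectors through each trained encoder Z_j
    and report Avg(O_{i,j}) / Avg(O_{4,j}) for i in {Gamma, Delta, Upsilon}."""
    d = 3 * m + m_xi
    V = np.zeros((4, d))
    V[0, :m] = 1.0               # ones on the Gamma block
    V[1, m:2 * m] = 1.0          # ones on the Delta block
    V[2, 2 * m:3 * m] = 1.0      # ones on the Upsilon block
    V[3, :-1] = 1.0              # all ones except the noise position
    ratios = {}
    for name, enc in encoders.items():   # enc: callable, V -> (4, 200) outputs
        O = enc(V)
        avg = O.mean(axis=1)
        ratios[name] = avg[:3] / avg[3]
    return ratios
```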
Section A.3 includes more details on this procedure. (Unlike the evaluation strategy presented in (Hassanpour & Greiner, 2020), which only looked at the first layer’s weights of each representation network, we propagate the values through the entire network and check how much of each factor is exhibited in the final layer of every representation network. Still, the proposed procedure only crudely evaluates the quality of disentanglement of the underlying factors. We did explore using Mutual Information (Belghazi et al., 2018) for this task (not shown here); however, it appears that it does not work for high-dimensional data such as ours. All in all, more research is needed to address this task.)

As expected, Z3 and Z4 capture Γ (e.g., the Z3 ratios for Γ in the {P, H}-VAE-CI tables are largest), and Z1, Z2, Z5, Z6, and Z7 capture ∆ and Υ. Note that decomposition of ∆ from Υ has not been achieved by any of the methods except for H-VAE-CI, which captures Υ by Z1 and ∆ by Z5 (note the ratios are largest for Z1 and Z5). This decomposition is vital for deriving context-aware importance sampling weights because they must be calculated from ∆ only (Hassanpour & Greiner, 2020). Also observe that both {P, H}-VAE-CI are able to separate Γ from ∆. However, DR-CFR, which tried to disentangle all factors, failed not only to disentangle ∆ from Υ, but also Γ from ∆." }, { "heading": "4.3 EVALUATING TREATMENT EFFECT ESTIMATION", "text": "Evaluation of treatment effect estimation is often done with semi- or fully-synthetic datasets that include both factual and counterfactual outcomes. There are two categories of performance measures:

Individual-based: the “Precision in Estimation of Heterogeneous Effect”,

$$\epsilon_{\mathrm{PEHE}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{e}_i - e_i)^2}$$

which uses ê_i = ŷ^1_i − ŷ^0_i as the estimated effect and e_i = y^1_i − y^0_i as the true effect (Hill, 2011); and Population-based: the “Bias of the Average Treatment Effect”,

$$\epsilon_{\mathrm{ATE}} = \big|\mathrm{ATE} - \widehat{\mathrm{ATE}}\big|, \quad \text{where } \mathrm{ATE} = \frac{1}{N}\sum_{i=1}^{N} y^1_i - \frac{1}{N}\sum_{j=1}^{N} y^0_j$$

and ÂTE is calculated based on the estimated outcomes. (The sketch below computes both metrics.) 
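A minimal sketch of the two metrics just defined; the function names are ours:

```python
import numpy as np

def pehe(y0, y1, y0_hat, y1_hat):
    """Precision in Estimation of Heterogeneous Effect (lower is better)."""
    true_effect = y1 - y0
    est_effect = y1_hat - y0_hat
    return np.sqrt(np.mean((est_effect - true_effect) ** 2))

def ate_bias(y0, y1, y0_hat, y1_hat):
    """Absolute bias of the Average Treatment Effect (lower is better)."""
    return abs(np.mean(y1 - y0) - np.mean(y1_hat - y0_hat))
```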
In this paper, we compare the performance of our proposed methods {S, P, H}-VAE-CI against the following treatment effect estimation methods: CFR-Net (Shalit et al., 2017), DR-CFR (Hassanpour & Greiner, 2020), Dragon-Net (Shi et al., 2019), GANITE (Yoon et al., 2018), and CEVAE (Louizos et al., 2017). The basic search grid for the hyperparameters of the CFR-Net based algorithms (including our proposed methods) is available in Section A.4. For the other algorithms, we searched around their default hyperparameter settings. We ran the experiments for the contender methods using their publicly available code-bases; note the following points:

• Since Dragon-Net is designed to estimate ATE only, we did not report its performance results for the PEHE measure (which, as expected, were significantly inaccurate).

• The original GANITE code-base was implemented for binary outcomes only. We modified the code (losses, etc.) such that it could also process real-valued outcomes.

• We were surprised that CEVAE diverged when running on the ACIC’18 datasets. To avoid this, we had to run the ACIC’18 experiments on the binary covariates only.

Tables 1, 2, and 3 summarize the mean and standard deviation of the PEHE and ATE measures (lower is better) on the IHDP, ACIC’18, and Synthetic benchmarks respectively. VAE-CI achieves the best performance among the contending methods. These results are statistically significant (in bold; based on Welch’s unpaired t-test with α = 0.05) for the IHDP and Synthetic benchmarks. Although VAE-CI also achieves the best performance on the ACIC’18 benchmark, the results there are not statistically significant due to the high standard deviation of the contending methods’ performances.

Figure 5 visualizes the PEHE measures on the entire set of synthetic datasets with a sample size of N=10,000. We observe that both plots corresponding to the H-VAE-CI method (PB as well as CA) are inscribed by the plots of all other methods, showcasing H-VAE-CI’s superior performance under every possible selection bias scenario. Note that for scenarios where m∆ = 0 (i.e., the ones of form mΓ_0_mΥ on the perimeter of the radar chart in Figure 5), the performances of H-VAE-CI (PB) and H-VAE-CI (CA) are almost identical. This is expected, since for these scenarios the learned representation for ∆ would be degenerate, and therefore the context-aware weights would reduce to population-based ones. On the other hand, for scenarios where m∆ ≠ 0, H-VAE-CI (CA) often performs better than H-VAE-CI (PB). This can be attributed to the fact that H-VAE-CI has correctly disentangled ∆ from Υ. This facilitates learning good context-aware weights that better account for selection bias, which in turn results in better causal effect estimation performance.

We also performed hyperparameter sensitivity analyses in terms of PEHE (see Figure 6). We discuss the results in the following:

• For the α hyperparameter (i.e., the coefficient of the discrepancy penalty), Figure 6(a) suggests that the DR-CFR and H-VAE-CI methods have the most robust performance across the various values of α. This is expected because, unlike CFR and {S, P}-VAE-CI, DR-CFR and H-VAE-CI possess an independent node for representing ∆. This helps them still capture ∆ as α grows, since for them α only affects learning a representation of Υ.

Comparing H-VAE-CI (PB) with (CA), we observe that for all α > 0.01, H-VAE-CI (CA) outperforms H-VAE-CI (PB). This is because the discrepancy penalty would force Z1 to only capture Υ and Z5 to only capture ∆. This results in deriving better CA weights (which should be learned from ∆; here, from its learned representation Z5). H-VAE-CI (PB), on the other hand, cannot take advantage of this disentanglement, which explains its sub-optimal performance.

• According to Figure 6(b), various β values (i.e., the coefficient of the KL divergence penalty) do not make much difference for H-VAE-CI (except for β ≥ 1, since such a large value means the learned representations will be close to Gaussian noise). We initially thought using β-VAE might help further disentangle the underlying factors. However, Figure 6(b) suggests that close-to-zero or even zero βs also work effectively. We now think that the H-VAE-CI architecture itself sufficiently decomposes the Γ, ∆, and Υ factors, without needing the help of a KLD penalty. Appendix A.5 includes more evidence and a detailed discussion on why this interpretation should hold.

Figure 5: Radar graphs of PEHE (on the radii; lower is better) for the entire synthetic benchmark (24 × 3 datasets with N=10,000; each vertex denotes the respective dataset). 
Figure is best viewed in color.

• For the γ hyperparameter (i.e., the coefficient of the generative loss penalty), H-VAE-CI achieves the most stable performance compared to the {S, P}-VAE-CI models — see Figure 6(c). Of particular interest is the superior performance of H-VAE-CI for γ ≤ 0.01 compared to that of {S, P}-VAE-CI. This means that having the generative loss term (i.e., L_VAE) is more important for {S, P}-VAE-CI than for H-VAE-CI to perform well — note that an extreme case happens at γ = 0, where the latter performs significantly better than the former. We hypothesize that this is because H-VAE-CI already learns expressive representations Z3 and Z7, meaning the optimization no longer really requires the L_VAE term to impose that. This is in contrast to Z1 in S-VAE-CI and Z1 and Z3 in P-VAE-CI.

Table 3: PEHE and ATE measures (lower is better) represented in the form of “mean (standard deviation)” on the entire synthetic benchmark (average performance over 24×3 datasets, each with a sample size of 10,000).

METHOD | PEHE | ATE
CFR | 0.39 (0.08) | 0.027 (0.020)
DR-CFR | 0.26 (0.07) | 0.007 (0.004)
DRAGON | NA | 0.007 (0.005)
GANITE | 1.28 (0.43) | 0.036 (0.015)
CEVAE | 1.39 (0.32) | 0.287 (0.217)
S-VAE-CI | 0.28 (0.05) | 0.004 (0.003)
P-VAE-CI | 0.28 (0.05) | 0.004 (0.003)
H-VAE-CI (PB) | 0.20 (0.03) | 0.003 (0.002)
H-VAE-CI (CA) | 0.18 (0.02) | 0.003 (0.002)" }, { "heading": "5 FUTURE WORKS AND CONCLUSION", "text": "Despite the success of the proposed methods, especially the Hybrid model, in addressing causal inference, no known algorithms can yet learn to perfectly disentangle the factors ∆ and Υ. This goal is important because we know isolating ∆, and learning context-aware weights from it, does enhance the quality of the causal effect estimation performance — note the superior performance of H-VAE-CI (CA). The results of our ablation study (in Figure 6(b)), however, revealed that the currently used β-VAE does not help with disentanglement of the underlying factors. This shows that we can attribute all the decomposition we get to the proposed architectures and objective function. A future direction of this research is to explore the use of better disentangling constraints — e.g., the works of Chen et al. (2018) and Lopez et al. (2018) — to see if they would yield sharper results.

The goal of this paper was to estimate treatment effects (either for individuals or the entire population) from observational data. We designed three VAE-based (Kingma & Welling, 2014; Rezende et al., 2014) architectures (namely Series, Parallel, and Hybrid) that employ (Kingma et al., 2014)’s M1 and M2 models. The Hybrid model, as the best performing architecture, partially succeeded at decomposing the underlying factors Γ, ∆, and Υ, which helped in the accurate estimation of treatment outcomes. Our empirical results demonstrated the superiority of the proposed methods, compared to both state-of-the-art discriminative as well as generative approaches in the literature." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 M1 AND M2 VARIATIONAL AUTO-ENCODERS", "text": "As the first proposed model, the M1 VAE is the conventional model that is used to learn representations of data (Kingma & Welling, 2014; Rezende et al., 2014). These features are learned from the covariate matrix X only. Figure 7(a) illustrates the encoder and decoder of the M1 VAE. Note that the graphical model on the left depicts the encoder, and the one on the right depicts the decoder, which has arrows going in the other direction.

Proposed by Kingma et al. 
(2014), the M2 model was an attempt to incorporate the information in the target Y into the representation learning procedure. This results in learning representations that separate the specifications of individual targets from general properties shared between various targets. In the case of digit generation, this translates into separating the specifications that distinguish each digit from writing style or lighting condition. Figure 7(b) illustrates the encoder and decoder of the M2 VAE.

We can stack the M1 and M2 models as shown in Figure 7(c) to get the best results. This way, we can first learn a representation Z1 from the raw covariates, then find a second representation Z2, now learning from Z1 instead of the raw data.

A.2 PROCEDURE OF GENERATING THE SYNTHETIC DATASETS

Given as input: the sample size N; dimensionalities [mΓ, m∆, mΥ] ∈ Z+^3; for each factor L ∈ {Γ, ∆, Υ}, the means and covariance matrices (µ_L, Σ_L); and a scalar ζ that determines the slope of the logistic curve.

• For each latent factor L ∈ {Γ, ∆, Υ}, form L by drawing N instances (each of size m_L) from N(µ_L, Σ_L). The covariates matrix X is the result of concatenating Γ, ∆, and Υ. Refer to the concatenation of Γ and ∆ as Ψ and that of ∆ and Υ as Φ (for later use).

• For treatment T, sample an (mΓ+m∆)-tuple of coefficients θ from N(0, 1)^{mΓ+m∆}. Define the logging policy as π0(t=1 | z) = 1/(1 + exp(−ζz)), where z = Ψ · θ. For each instance xi, sample treatment ti from the Bernoulli distribution with parameter π0(t=1 | zi).

• For outcomes Y^0 and Y^1, sample (m∆+mΥ)-tuples of coefficients ϑ0 and ϑ1 from N(0, 1)^{m∆+mΥ}. Define y^0 = (Φ ◦ Φ ◦ Φ + 0.5) · ϑ0/(m∆+mΥ) + ε and y^1 = (Φ ◦ Φ) · ϑ1/(m∆+mΥ) + ε, where ε is white noise sampled from N(0, 0.1) and ◦ is the symbol for the element-wise product.

A.3 EVALUATING IDENTIFICATION OF THE UNDERLYING FACTORS

Here, we elaborate on the procedure we followed to evaluate the identification performance of the underlying factors. We produced four dummy vectors Vi ∈ R^{mΓ+m∆+mΥ+mΞ} as depicted on the left side of Figure 8. The first to third vectors had ones (constant) in the positions associated with Γ, ∆, and Υ respectively, and the remainder of their positions were filled with zeroes. The fourth vector was all ones, so we can measure the maximum amount of information that is passed to the final layer of each representation network.

In the next step, each vector Vi is fed to each trained network Zj, and the output Oi is recorded (see the right side of Figure 8). The average of Oi represents the power of the signal that was communicated from Vi and passed through the Zj channel. The values reported in the tables illustrated in Figure 4 are the ratios of {average of O1, O2, O3} divided by {average of O4} for all the learned representation networks.

A.4 HYPERPARAMETERS

For all CFR, DR-CFR, and VAE-CI methods, we trained the neural networks with 3 layers (each consisting of 200 hidden neurons), a non-linear elu activation function, a regularization coefficient of λ=1E-4, the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1E-3, a batch size of 300, and a maximum number of iterations of 10,000. (In addition to this basic configuration, we also perform our grid search with an updated number of layers and/or number of neurons in each layer. This makes sure that all methods enjoy a similar model complexity.) See Table 4 for our hyperparameter search space.

A.5 A DETAILED ANALYSIS OF THE EFFECT OF β

Our initial hypothesis in using β-VAE was that it might help further disentangle the underlying factors, in addition to the other constraints already in place (i.e., the architecture as well as the discrepancy penalty). However, Figure 6(b) suggests that close-to-zero or even zero βs also work effectively. 
To further explore this hypothesis, we examined the decomposition tables (similar to Figure 4) of H-VAE-CI for extreme configurations with β = 0 and observed that they were all effective at decomposing the underlying factors Γ, ∆, and Υ (similar to the performance reported in the green table in Figure 4). Figure 9 shows several of these tables. Our interpretation of this observation is that the H-VAE-CI architecture already takes care of decomposing the Γ, ∆, and Υ factors, without needing the help of a KLD penalty. This means either of the following is happening: (i) β-VAE is not the best performing disentangling method, and other disentangling constraints should be used instead — e.g., the works of Chen et al. (2018) and Lopez et al. (2018); or (ii) it is theoretically impossible to achieve disentanglement without some supervision (Locatello et al., 2019), which might not be possible to provide in this task. Exploring these options is out of the scope of this paper and is left to future work." } ]
2,020
null
SP:1efc842f413903e41727a6b79b9d3ea89011a85b
[ "The authors consider the problem of efficient modeling of epistemic uncertainty, separated from aleatoric uncertainty, for neural networks. They propose a novel methodology, involving automatically constructing a epistemic uncertainty support data set used to extend a given NN with an epistemic uncertainty output. The method is compared with previous, less efficient, approaches and is applied to the important problem of data-efficient online learning of a controller for real-time use with convincing results." ]
Safety-critical decisions based on machine learning models require a clear understanding of the involved uncertainties to avoid hazardous or risky situations. While aleatoric uncertainty can be explicitly modeled given a parametric description, epistemic uncertainty rather describes the presence or absence of training data. This paper proposes a novel generic method for modeling epistemic uncertainty and shows its advantages over existing approaches for neural networks on various data sets. It can be directly combined with aleatoric uncertainty estimates and allows for prediction in real-time as the inference is sample-free. We exploit this property in a model-based quadcopter control setting and demonstrate how the controller benefits from a differentiation between aleatoric and epistemic uncertainty in online learning of thermal disturbances.
[]
[ { "authors": [ "REFERENCES Thomas Beckers", "Dana Kulić", "Sandra Hirche" ], "title": "Stable Gaussian process based tracking control of Euler-Lagrange systems", "venue": null, "year": 2019 }, { "authors": [ "Felix Berkenkamp", "Riccardo Moriconi", "Angela Schoellig", "Andreas Krause" ], "title": "Safe learning of regions of attraction for uncertain, nonlinear systems with Gaussian processes", "venue": "arXiv preprint arXiv:1603.04915,", "year": 2016 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Girish Chowdhary", "Hassan A. Kingravi", "Jonathan. P. How", "Patricio A. Vela" ], "title": "Bayesian nonparametric adaptive control using Gaussian processes", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2015 }, { "authors": [ "Thomas H Cormen", "Charles E Leiserson", "Ronald L Rivest", "Clifford Stein" ], "title": "Introduction to algorithms", "venue": "MIT press,", "year": 2009 }, { "authors": [ "Marc Deisenroth", "Jun Wei Ng" ], "title": "Distributed Gaussian processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Stefan Depeweg" ], "title": "Modeling Epistemic and Aleatoric Uncertainty with Bayesian Neural Networks and Latent Variables", "venue": "PhD thesis, Technische Universität München,", "year": 2019 }, { "authors": [ "Stefan Depeweg", "José Miguel Hernández-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Learning and policy search in stochastic dynamical systems with Bayesian neural networks", "venue": "arXiv preprint arXiv:1605.07127,", "year": 2016 }, { "authors": [ "Stefan Depeweg", "José Miguel Hernández-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Uncertainty decomposition in Bayesian neural networks with latent variables", "venue": "arXiv preprint arXiv:1706.08495,", "year": 2017 }, { "authors": [ "Stefan Depeweg", "Jose-Miguel Hernandez-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Decomposition of uncertainty in bayesian deep learning for efficient and risk-sensitive learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Armen Der Kiureghian", "Ove Ditlevsen" ], "title": "Aleatory or epistemic? 
does it matter", "venue": "Structural safety,", "year": 2009 }, { "authors": [ "Yunis Fanger", "Jonas Umlauft", "Sandra Hirche" ], "title": "Gaussian processes for dynamic movement primitives with application in knowledge-based cooperation", "venue": "In International Conference on Intelligent Robots and Systems (IROS),", "year": 2016 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in Deep Learning", "venue": "PhD thesis,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Sorin Grigorescu", "Bogdan Trasnea", "Tiberiu Cocias", "Gigel Macesanu" ], "title": "A survey of deep learning techniques for autonomous driving", "venue": "Journal of Field Robotics,", "year": 2020 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in Bayesian deep learning for computer vision", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Yongchan Kwon", "Joong-Ho Won", "Beom Joon Kim", "Myunghee Cho Paik" ], "title": "Uncertainty quantification using bayesian neural networks in classification: Application to biomedical image segmentation", "venue": "Computational Statistics and Data Analysis,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Miguel Lazaro-Gredilla", "Michalis Titsias" ], "title": "Variational heteroscedastic gaussian process regression", "venue": "In International Conference on Machine Learning (ICML),", "year": 2011 }, { "authors": [ "Anqi Liu", "Guanya Shi", "Soon-Jo Chung", "Anima Anandkumar", "Yisong Yue" ], "title": "Robust regression for safe exploration in control", "venue": "Conference on Learning for Dynamics and Control (L4DC),", "year": 2020 }, { "authors": [ "Jose R. Medina", "Dominik Sieber", "Sandra Hirche" ], "title": "Risk-sensitive interaction control in uncertain manipulation tasks", "venue": "In International Conference on Robotics and Automation (ICRA),", "year": 2013 }, { "authors": [ "Duy Nguyen-Tuong", "Jan R. Peters", "Matthias Seeger" ], "title": "Local Gaussian process regression for real time online model learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2009 }, { "authors": [ "Duy Nguyen-Tuong", "Matthias Seeger", "Jan Peters" ], "title": "Model learning with local Gaussian process regression", "venue": "Advanced Robotics,", "year": 2009 }, { "authors": [ "Janis Postels", "Francesco Ferroni", "Huseyin Coskun", "Nassir Navab", "Federico Tombari" ], "title": "Samplingfree epistemic uncertainty estimation using approximated variance propagation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Joaquin Quinonero-Candela", "Carl E. Rasmussen" ], "title": "A unifying view of sparse approximate Gaussian process regression", "venue": "The Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Carl E. Rasmussen", "Christopher K.I. 
Williams" ], "title": "Gaussian Processes for Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Murray Rosenblatt" ], "title": "Remarks on some nonparametric estimates of a density function", "venue": "Ann. Math. Statist., 27(3):832–837,", "year": 1956 }, { "authors": [ "G. Shi", "X. Shi", "M. O’Connell", "R. Yu", "K. Azizzadenesheli", "A. Anandkumar", "Y. Yue", "S. Chung" ], "title": "Neural lander: Stable drone landing control using learned dynamics", "venue": "In International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Emanuel Todorov", "Weiwei Li" ], "title": "A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems", "venue": "In American Control Conference (ACC),", "year": 2005 }, { "authors": [ "Jonas Umlauft", "Sandra Hirche" ], "title": "Feedback linearization based on Gaussian processes with eventtriggered online learning", "venue": "IEEE Transactions on Automatic Control (TAC), pp", "year": 2020 }, { "authors": [ "Jonas Umlauft", "Lukas Pöhler", "Sandra Hirche" ], "title": "An uncertainty-based control Lyapunov approach for control-affine systems modeled by Gaussian process", "venue": "IEEE Control Systems Letters,", "year": 2018 }, { "authors": [ "Björn Wittenmark" ], "title": "Adaptive dual control methods: An overview", "venue": "IFAC Proceedings Volumes,", "year": 1995 }, { "authors": [ "Aydin Yesildirak", "Frank L. Lewis" ], "title": "Feedback linearization using neural networks. Automatica", "venue": null, "year": 1995 } ]
[ { "heading": "1 INTRODUCTION", "text": "With improved sensor quality and more powerful computational resources, data-driven models are increasingly applied in safety-critical domains such as autonomous driving or human-robot interaction (Grigorescu et al., 2020). However, measurements usually suffer from noise and the available data is often scarce compared to all possible states of a complex environment. This requires controllers, which rely on supervised learning techniques, to properly react to ignorance and imprecision in the model to avoid dangerous situations. In order to allow an implementation of risk-averse (for exploitation and safety improvements) or risk-seeking (for exploration) behavior, the model must clearly disaggregate the information in the data into more than just the “best estimate” and differentiate between different sources of uncertainty. Besides the point estimate of a model, one can distinguish aleatoric (uncertainty in the data) and epistemic (uncertainty in the model) uncertainty. The former is irreducible as it is inherent to the stochastic process the data is recorded from, while the latter origins from a limited expressive power of the model or scarce training samples (Der Kiureghian & Ditlevsen, 2009).\nGaussian processes (GPs) inherently provide a measure for its fidelity with the posterior standard deviation prediction (Rasmussen & Williams, 2006). It also allows to differentiate aleatoric uncertainty (typically considered as observation noise) and epistemic uncertainty (modeled by the kernel). However, the former allows only homoscedastic (constant) estimates, while real-world applications typically require heteroscedastic uncertainty models. An extension to heteroscedastic GP regression is presented in (Lazaro-Gredilla & Titsias, 2011), however, it is a variational approximation and further increases the computational complexity and GPs generally suffer from poor scaling to large datasets (Quinonero-Candela & Rasmussen, 2005).\nIn deep learning, the modeling of uncertainties also gained increasing interest over the past years (Kendall & Gal, 2017). Heteroscedastic aleatoric uncertainty can be captured well, if the output of the stochastic process can directly be observed and its parametric distribution is known. However, for more general cases, approximation techniques such as variational inference or sampling is required (Bishop, 2006). For epistemic uncertainty estimation with neural networks (NN), the key idea for most approaches can be summarized as follows. Randomness is introduced to the neural network through sampling during training and inference. While the training robustifies the network against the injected noise at the training locations, it allows the noise to pass to the output at input locations where no training data is available. For inference, multiple predictions of the network are sampled for the same inputs, allowing to compute a statistical measure for the uncertainty\nat the output (Depeweg et al., 2018; Depeweg, 2019). However, sampling the network during inference is a high computational burden, and is therefore not suitable in real-time critical control tasks. 
An ensemble-based approach by Lakshminarayanan et al. (2017) works with far fewer instances of a network, but does not explicitly differentiate between aleatoric and epistemic uncertainty.
Despite those drawbacks in the uncertainty representation of data-driven models, the control community has increasingly incorporated such models into decision making for various applications. For example, Fanger et al. (2016) use an epistemic uncertainty measure to dynamically assign leader and follower roles for cooperative robotic manipulation. The work by Berkenkamp et al. (2016) ensures a safe exploration of an unknown task space based on GP error bounds, and a gain scheduling approach for computed torque control is presented in Beckers et al. (2019). The work by Liu et al. (2020) considers the epistemic uncertainty as an estimate of the distance between source and target domains (known as domain shift) to design a robust controller. In Umlauft & Hirche (2020) and Chowdhary et al. (2015), an online learning control approach for GP models is considered, which approaches the dual control problem (Wittenmark, 1995) as a model-based adaptive control problem. The work by Yesildirak & Lewis (1995) uses neural networks for adaptive control in a continuous-time fashion, relying on a time-triggered (periodic) update of the model rather than the event-based adaptation we propose in this work. More generally, risk-averse control strategies have been presented by Umlauft et al. (2018); Medina et al. (2013); Todorov & Li (2005). However, all of these approaches only consider the model fidelity in general and do not differentiate between aleatoric and epistemic uncertainty.
Therefore, the main contributions of this paper are the following. We propose a deep learning framework with a real-time capable epistemic uncertainty prediction. The resulting online learning model is employed by a controller, which reacts distinctly to epistemic and aleatoric uncertainty. We evaluate the proposed methods on synthetic and real-world benchmark data sets, and simulate a quadcopter controller, which learns online the disturbances injected by thermals." }, { "heading": "2 PROBLEM FORMULATION", "text": "Consider the discrete-time dynamical system1 with control u ∈ U ⊆ R^{d_u} and state x ∈ X ⊆ R^{d_x}
x_{k+1} = g(x_k, u_k) + y_k, (1)
where g : X × U → X is known, while y_k is an i.i.d. random vector sampled in every time step from
y_k ∼ D(f(x_k)), (2)
where D(·) denotes a known distribution over real vectors y ∈ Y ⊆ R^{d_x} that depends on the parameters p ∈ P ⊆ R^{d_p}. These state-dependent parameters arise from an unknown mapping f : X → P. We generally denote the unknown component y_k of the dynamical system as a disturbance, but it could also be the unmodeled part of the dynamics, such as friction, or it could serve as a black-box model of the dynamics if no analytic description is available (g(·, ·) = 0). We assume measurements can be taken to obtain the data set D_tr = {(x_i, y_i)}_{i=1}^{N_tr} with inputs X_tr = {x_i}_{i=1}^{N_tr} and outputs Y_tr = {y_i}_{i=1}^{N_tr}, such that a model f̂(·) of f(·) can be learned. N_tr ∈ N denotes the current number of training data points and is initially zero, i.e., the training set is empty.
The task is to choose a control input u_k such that the system (1) follows a given reference x_des. Furthermore, the controller can take new measurements of y to improve its model over time. We consider each measurement of y to be costly, and therefore new training points should only be collected when necessary. 
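For concreteness, the following toy simulation illustrates how the disturbed dynamics of (1)–(2) unfold; it is our own sketch, and the particular g, f, and the Gaussian choice for D(·) are illustrative assumptions, not specifications from the paper.

```python
# Toy simulation of the disturbed dynamics in Eqs. (1)-(2); g, f, and the
# Gaussian disturbance distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
g = lambda x, u: 0.9 * x + 0.1 * u              # known nominal dynamics g(x, u)
f = lambda x: (0.2 * np.sin(np.pi * x), 0.05)   # unknown parameter map f: X -> P

x, u = 0.0, 1.0
for k in range(5):
    mu, sigma = f(x)                 # p = f(x_k), here (mean, std) of D
    y = rng.normal(mu, sigma)        # Eq. (2): y_k ~ D(f(x_k))
    x = g(x, u) + y                  # Eq. (1)
    print(f"x_{k + 1} = {x:.3f}")
```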
Applications where data collection is costly can be found in distributed systems, where multiple sensors share the same scarce communication channel, or in autonomous systems with limited data storage capacity.
The need for high data efficiency requires models that judge their own fidelity in real time to identify valuable measurements. As existing approaches for modeling epistemic uncertainty in deep learning suffer from a high computational complexity, we first focus on developing a novel method for epistemic uncertainty prediction before proposing an online learning control strategy that makes use of a neural network model decomposing its uncertainties.
1Bold/capital symbols generally denote vectors/matrices; D(·)/U(·)/N(·)/B(·) denote a general parametric/the uniform/Gaussian/Bernoulli distribution, respectively." }, { "heading": "3 EPISTEMIC UNCERTAINTY ESTIMATION", "text": "" }, { "heading": "3.1 RELATED WORK", "text": "Learning an epistemic uncertainty estimator is not straightforward, as it measures the absence of training data. Most prominently, Gaussian processes with stationary kernels offer such a measure implicitly with their posterior variance prediction. However, GPs are known to scale poorly to large data sets: while regression and uncertainty predictions can be performed in O(N_tr) and O(N_tr^2), respectively, incorporating a new data point takes O(N_tr^3) computations (even without hyperparameter optimization; O(N_tr^2) for rank-1 update methods). While various methods have been proposed to make GPs computationally more efficient, including sparse GPs (Quinonero-Candela & Rasmussen, 2005), distributed GPs (Deisenroth & Ng, 2015) and local GPs (Nguyen-Tuong et al., 2009a;b), these approximations typically focus only on the precision of the point estimate and distort the uncertainty prediction. For estimating the “distance” to the training set, kernel density estimation (KDE) can also be used (Rosenblatt, 1956); however, its non-parametric nature implies that the inference time grows linearly with the number of considered data points, which we aim to avoid.
More recently, several different approaches for epistemic uncertainty estimation using deep learning frameworks have been proposed. Popular approaches rely on Bayesian approximations (Depeweg et al., 2016) or permanent dropouts (not only during training to avoid overfitting) (Gal, 2016; Gal & Ghahramani, 2016). Furthermore, latent inputs can also be used to achieve a decomposition into aleatoric and epistemic uncertainty, as presented in (Depeweg et al., 2017). However, in particular for Bayesian NNs, these approaches become computationally challenging. Firstly, they have a larger number of parameters to tune than their deterministic counterparts and rely on variational inference methods (Kwon et al., 2020). Secondly, prediction requires sampling the entire network before the statistics of the output can be computed. For applications in real-time critical control problems (e.g., robotics with a sampling rate of 1 kHz), these computational burdens prohibit the employment of these techniques. A sampling-free estimation method is proposed by Postels et al. 
(2019), but it suffers from a quadratic space complexity in the number of weights in the network and relies on first-order Taylor approximations in the propagation of the uncertainties, which might become inaccurate depending on the non-linearity of the activation functions.
3.2 EpiOut - EXPLICITLY LEARNING EPISTEMIC UNCERTAINTY
In order to allow the estimation of epistemic uncertainty in real time, we introduce the idea of explicitly modeling it with a separate output of a neural network, which we call EpiOut. Since the epistemic uncertainty expresses the absence of data, the original data set D_tr does not contain data for training EpiOut. Therefore, we generate an epistemic uncertainty data set with inputs X_epi = {x̃_j}_{j=1}^{N_epi} and outputs Y_epi = {ỹ_j}_{j=1}^{N_epi}, concatenated in D_epi = {(x̃_j, ỹ_j)}_{j=1}^{N_epi}, N_epi ∈ N.
Different variations for sampling the set X_epi can be chosen depending on the desired scalability properties. A naive approach is to sample the entire input space uniformly, which suffers from the curse of dimensionality. Alternatively, we propose to sample around existing training points from a normal distribution
X_epi = ⋃_{i=1}^{N_tr} { x̃_j ∼ N(x_i, Γ), j = 1, . . . , N_epi/N_tr }, (3)
where we implicitly assume that N_epi is chosen such that δ = N_epi/N_tr is a positive integer. Supposing that a standardization of the input space to unity is performed based on X_tr, Γ = I can be chosen if no further knowledge of f(·) is available. Otherwise, scaling Γ can be interpreted, similarly to the lengthscale of a GP, as a measure of how far away from a training point the prediction is reliable: a larger Γ will lead to a wider spread of X_epi and therefore low epistemic uncertainty in the neighborhood of the training data, which would be meaningful if the true function is known to have a low Lipschitz constant, and vice versa.
We propose to set δ to a multiple of 2d_x + 1, which corresponds to the intuition of padding each training point in both directions of each dimension with an epi point x̃. The reasoning behind the additional +1 point will become clear in the following. To define the set Y_epi, we first compute the minimal distance (according to any distance metric d : X × X → R_{0,+}) to the training data for each epi point
d_j = min_{x ∈ X_tr} d(x̃_j, x), j = 1, . . . , N_epi, (4)
keeping in mind that the closest training data point is not necessarily the one used to generate the sample. Let d_{N_tr} be the N_tr-th smallest element of all d_j; we then generate Y_epi and update X_epi as follows:
ỹ_j = 1, if d_j > d_{N_tr};  ỹ_j = 0 and x̃_j ← argmin_{x ∈ X_tr} d(x̃_j, x), if d_j ≤ d_{N_tr}. (5)
Thus, the N_tr points in X_epi with the least distance to a training point are replaced by the corresponding points in X_tr. Now the choice of 2d_x + 1 epi points becomes clear, as one of them will be turned into a ỹ = 0 point corresponding to “low epistemic uncertainty”, while the 2d_x points further away from the training point receive ỹ = 1, indicating the opposite.
Given the data set D_epi, the neural network is now equipped with one additional output, i.e., the parameter layer is d_p + 1 dimensional with output [f̂(·) η(·)]^T. The new output η(·) is terminated with a neuron using a sigmoidal activation function, such that η : X → [0, 1]. 
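Summarizing the construction of D_epi, a minimal sketch of equations (3)–(5) follows; it is our own illustration rather than the paper's supplementary code, using the Euclidean distance and a hypothetical helper name build_epi_dataset.

```python
# Sketch of the EpiOut data set construction (Eqs. (3)-(5)); illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def build_epi_dataset(X_tr, delta, gamma=1.0, rng=None):
    """X_tr: (N_tr, d_x) standardized training inputs; delta = N_epi / N_tr."""
    rng = np.random.default_rng() if rng is None else rng
    n_tr, d_x = X_tr.shape
    # Eq. (3): sample delta candidate epi points around every training point.
    X_epi = rng.normal(loc=np.repeat(X_tr, delta, axis=0),
                       scale=np.sqrt(gamma))            # Gamma = gamma * I
    # Eq. (4): minimal Euclidean distance of each epi point to the training set.
    tree = cKDTree(X_tr)
    d, nearest = tree.query(X_epi, k=1)
    # Eq. (5): the N_tr closest epi points are snapped onto their nearest
    # training point and labeled 0 ("low epistemic uncertainty"); the rest get 1.
    closest_idx = np.argsort(d)[:n_tr]
    y_epi = np.ones(len(X_epi))
    y_epi[closest_idx] = 0.0
    X_epi[closest_idx] = X_tr[nearest[closest_idx]]
    return X_epi, y_epi

X_tr = np.random.default_rng(0).uniform(-1, 1, size=(100, 1))
X_epi, y_epi = build_epi_dataset(X_tr, delta=3)  # delta = 2*d_x + 1 for d_x = 1
```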
This bounded output is beneficial because it allows one to immediately judge whether the predicted uncertainty is high (≈ 1) or low (≈ 0) without any reference evaluation (see the comparison to alternative methods below).
Independently of the loss function for the original network, the augmented output, also referred to as the epistemic output, is trained using a binary cross-entropy loss, which is the natural choice for binary outputs. It quantifies the uncertainty in the prediction of the other outputs based on the distance to the training data measured by d(·, ·). For the sake of focus, we will be using the Euclidean distance; however, the method can easily be extended to other metrics, and we leave it to future work to investigate alternatives." }, { "heading": "3.3 COMPUTATIONAL COMPLEXITY", "text": "The analysis of the computational complexity shows that (3) is an O(N_epi) =̂ O(N_tr d_x) operation, whereas (4) is, for a trivial implementation, an O(N_tr N_epi) =̂ O(d_x N_tr^2) operation. However, an implementation based on a kd-tree (Cormen et al., 2009) allows execution in O(N_epi log(N_epi)) =̂ O(d_x N_tr log(N_tr d_x)) time. Finding the N_tr smallest distances among all N_epi points in (5) can be obtained in O(N_tr + (N_epi − N_tr) log(N_tr)) =̂ O(N_tr + N_tr (d_x − 1) log(N_tr)) time. The training of a neural network with a fixed number of weights requires O(N_epi) =̂ O(N_tr d_x). Hence, the overall complexity results in O(d_x N_tr log(d_x N_tr)), and it is straightforward to derive an overall space complexity of O(N_epi d_x) =̂ O(N_tr d_x^2) for storing the set X_epi. The following should be considered when comparing to classical deep learning frameworks, which generally can be trained in linear time.
• When used on streaming data (as for online learning control), the set D_epi can be constructed iteratively, reducing the complexity to O(log(N_tr)).
• The most time-critical computation (4) can be efficiently parallelized on a GPU.
• The method is designed for applications where measuring data is considered costly and therefore sparse data can be expected." }, { "heading": "3.4 EVALUATION", "text": "For evaluation we compare the following models. The implementation is available in the supplementary material.
• GP: a vanilla GP model with a squared exponential automatic relevance determination kernel, based on the GPy implementation2
• BNN: a Bayesian neural network with 2 fully connected hidden layers, each with 50 hidden units and normal distributions over their weights, based on this implementation.3
• Dropout: a neural network with 2 fully connected permanent-dropout layers, each with 50 hidden units and dropout probability ρ = 0.05.4
• EpiOut: the proposed model with 2 fully connected layers (50 neurons) and Γ = I, δ = 2.
2https://sheffieldml.github.io/GPy/ 3https://matthewmcateer.me/blog/a-quick-intro-to-bayesian-neural-networks/
For the evaluation we utilize a weighted mean square error measure defined as follows:
ρ = [∑_{i=1}^{N_te} (y_i − f̂(x_i))^2 (1 − η(x_i))] / [∑_{i=1}^{N_te} (1 − η(x_i))], (6)
i.e., if the model decides that it is uncertain about the prediction at a test point, the squared error for this prediction is discounted (weighted less). However, the model can only achieve a low ρ if it is also certain at some test points, because the denominator goes to zero for many uncertain predictions. In consequence, ρ is only defined if η(·) < 1 holds for at least one test point. 
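For illustration, the measure in (6) can be computed in a few lines; this sketch is ours, not code from the supplementary material.

```python
# Sketch of the uncertainty-weighted MSE of Eq. (6); illustrative only.
import numpy as np

def weighted_mse(y_true, y_pred, eta):
    """eta: epistemic uncertainty predictions in [0, 1]; Eq. (6) is undefined
    if the model is fully uncertain (eta == 1) at every test point."""
    w = 1.0 - eta
    return np.sum(w * (y_true - y_pred) ** 2) / np.sum(w)

rng = np.random.default_rng(0)
y_true = rng.normal(size=50)
y_pred = y_true + 0.1 * rng.normal(size=50)
eta = rng.uniform(0.0, 0.9, size=50)
print(weighted_mse(y_true, y_pred, eta))
```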
Furthermore, the total discount, defined as ∑_{i=1}^{N} η(x_i), can additionally be utilized for a plausibility check of the epistemic uncertainty predictions, since it should generally be larger on the test than on the training data set.
The measure in (6) relies on epistemic uncertainty predictions in the interval [0, 1]. This is only ensured for the proposed EpiOut approach; therefore, the uncertainty measures (more specifically, the predicted standard deviations) provided by the GP, Dropout, and BNN are scaled to the unit interval based on the evaluation on all test and training points.
The following data sets are utilized for evaluation.
• 1D Center: The nominal function is f(x) = sin(xπ), with training points X_tr = {x_i ∼ U(−1, 1)}_{i=1}^{100}, and N_te = 961 test points are placed on a grid over [−4, 4].
• 1D Split: Same as 1D Center, but X_tr = {x_i ∼ U(−2, −1)}_{i=1}^{100} ∪ {x_i ∼ U(1, 2)}_{i=101}^{200}.
• 2D Gaussian: The nominal function (d_x = 2, d_p = 1) is f(x) = sin(5x_1)/(5x_1) + x_2^2, with training points X_tr = {x_i ∼ N([−1, 0]^T, diag(0.02, 0.1))}_{i=1}^{500} ∪ {x_i ∼ N([1, 0]^T, diag(0.02, 0.1))}_{i=501}^{1000}, and N_te = 961 test points are uniformly placed on a grid over [−2, 2]^2.
• 2D Square: Same as 2D Gaussian, but with N_tr = 80 training points placed uniformly along the boundary of the square [−1, 1]^2.
• PMSM temperature: a 2 Hz recording (d_x = 8, d_y = 1) of the temperature of a permanent magnet synchronous motor.5 To allow a comparison with the GP within reasonable computational limits, N_tr = 5000 and N_te = 1000 points were randomly extracted from a total of ≈ 10^6 samples.
• Sarcos: a data set for learning the inverse dynamics of a seven degrees-of-freedom SARCOS anthropomorphic robot arm (d_x = 21, d_p = 1).6 N_tr = 10000 and N_te = 2000 points were randomly extracted from a total of ≈ 5×10^4 samples." }, { "heading": "3.5 RESULTS & DISCUSSION", "text": "The numerical results are presented in Table 1, and an illustration of the data set 1D Split for all models is shown in Fig. 1. Besides empirically showing an advantage over existing approaches, we want to point out the following benefits.
• The EpiOut model predicts the uncertainty measure in a sample-free manner. This is crucial in data-efficient online learning scenarios, where the epistemic uncertainty is used to evaluate the usefulness of an incoming data point to decide upon its rejection. Hence, it is called more frequently than the online training function and must be computationally efficient. The prediction time for EpiOut is typically an order of magnitude faster than for Dropout and BNN.
4https://github.com/yaringal/DropoutUncertaintyDemos 5https://www.kaggle.com/wkirgsn/electric-motor-temperature 6http://www.gaussianprocess.org/gpml/data/
For more extensive results and exact computation times, we refer to the supplementary material." }, { "heading": "4 ONLINE LEARNING CONTROL USING UNCERTAINTY DECOMPOSITION", "text": "" }, { "heading": "4.1 EVALUATION IN A QUADCOPTER CONTROL TASK", "text": "As an application of our proposed approach, we consider the task of controlling a quadcopter that explores a given terrain with unknown thermals.7 We assume that the quadcopter dynamics, i.e., g(·, ·) in (1), are known (compare Shi et al. (2019)). The thermals act on the quadcopter as a disturbance on the acceleration in the z^q-direction.8 We model the disturbance as a normal distribution N(µ(x), σ^2(x)), leading to p = [µ, σ]^T; the NN models the state dependency of these parameters, f̂ : X → P. 
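To make the model structure concrete, here is a minimal PyTorch-style sketch of a network with outputs [µ, σ, η]; it is our own illustration — the Gaussian negative log-likelihood loss for the (µ, σ) head and the softplus parameterization of σ are assumptions on our part, while the binary cross-entropy loss for η follows Section 3.2.

```python
# Sketch of a disturbance model f_hat: X -> P with an extra EpiOut head;
# illustrative only (the loss for the (mu, sigma) head is an assumed choice).
import torch
import torch.nn as nn

class DisturbanceNet(nn.Module):
    def __init__(self, d_x, hidden=50):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_x, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, 1)          # aleatoric mean
        self.sigma_head = nn.Linear(hidden, 1)  # aleatoric scale (via softplus)
        self.epi = nn.Linear(hidden, 1)         # epistemic output eta in [0, 1]

    def forward(self, x):
        h = self.body(x)
        sigma = nn.functional.softplus(self.sigma_head(h)) + 1e-6
        eta = torch.sigmoid(self.epi(h))        # sigmoidal EpiOut neuron
        return self.mu(h), sigma, eta

net = DisturbanceNet(d_x=2)
x = torch.randn(8, 2)
mu, sigma, eta = net(x)
# Gaussian NLL for (mu, sigma) on D_tr; binary cross-entropy for eta on D_epi.
nll = nn.GaussianNLLLoss()(mu, torch.zeros_like(mu), sigma ** 2)
bce = nn.BCELoss()(eta, torch.ones_like(eta))
```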
Initially, there is no training data available, and the desired trajectory x_des consists of three rounds at constant height z^q = 0 on a square in the x^q-y^q plane with edge length 0.1, followed by three rounds with edge length 0.05.
The online learning control approach for achieving this task with high tracking precision and data efficiency is presented in the following." }, { "heading": "4.2 LEARNING CONTROL DESIGN", "text": "The EpiOut approach is sufficiently fast to be implemented in a mobile real-time control loop and therefore serves as a trigger for when new data points must be measured. By sampling from a Bernoulli distribution whose parameter corresponds to η(·), new measurements (x_i, y_i) are added to the training set according to
D_tr ← D_tr ∪ { (x_i, y_i) if b = 1; ∅ if b = 0 }, where b ∼ B(α), α = η(x_i). (7)
7The data of the thermals is taken from publicly available paragliding data https://thermal.kk7.ch. 8Superscripts q distinguish the quadcopter position x^q, y^q, z^q from the general inputs x and outputs y.
This ensures a high accuracy of the disturbance model f̂(·), as training data is added when necessary. It implements a stochastic event-triggered adaptive control approach, which is highly data-efficient and reduces the number of costly measurements. In particular for mobile platforms, this cost comes in terms of reduced battery life (processing data requires power for computations), shorter operation time (local data storage quickly fills up if operating near 1 kHz sampling rates), or increased communication effort (in case data is not processed or stored onboard).
The system (1) is inherently random due to the stochastic nature of the disturbance f(·). Therefore, we combine a feedback and a feedforward control law
u = K(x − x_des) + u_ff, (8)
where x and x_des are the actual and desired states of the quadcopter, and u_ff is a feedforward control term determined based on the known model g(·, ·) (the gravitational force on the quadcopter) and the learned disturbance model f̂(·), more specifically the mean of the predicted disturbance µ(·). The choice of the feedback gain K, which compensates for imprecision in the model and the stochasticity of the disturbance, is a difficult trade-off, because high control gains lead to high tracking performance, but also consume more energy and reinforce measurement noise, which can lead to instability. It is therefore generally advisable to let the feedforward term u_ff take over most of the control effort and keep the feedback term small when the model is reliable. Accordingly, we increase the gains only if the aleatoric uncertainty inferred by our model, σ(·), is high. Since the disturbance acts only in the z^q-direction, only the z-component of the gain is affected, i.e.,
k_z = k̄_z (1 + βσ(x)), (9)
where β ∈ R_{0,+} is the sensitivity and k̄_z ∈ R_+ defines the minimum control gain. This gain scheduling robustifies the closed loop against process noise and can even guarantee stability (Beckers et al., 2019), while at the same time keeping the energy consumption low. While previous works by Fanger et al. (2016); Beckers et al. (2019) use a general uncertainty measure, we tune the feedback gains based only on the irreducible (aleatoric) uncertainty, while we combat the epistemic uncertainty with an increased data collection rate (7).
A summary of the control strategy is given in Algorithm 1, and a visualization of the neural network is given in Fig. 2. 
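The following self-contained sketch paraphrases this strategy (the event-triggered update of (7) combined with the gain scheduling of (8)–(9)) on a toy 1D system; StubModel is our own crude stand-in for the EpiOut disturbance network, and all constants are illustrative rather than taken from the paper.

```python
# Sketch of the online learning controller (Eqs. (7)-(9), Algorithm 1) on a
# toy 1D system; the stub model and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
f_true = lambda x: 0.2 * np.sin(np.pi * x)      # unknown disturbance mean

class StubModel:
    """Placeholder for f_hat: returns (mu, sigma, eta)."""
    def __init__(self):
        self.D_tr = []
    def predict(self, x):
        if not self.D_tr:
            return 0.0, 0.5, 1.0                # fully uncertain without data
        d = min(abs(x - xi) for xi, _ in self.D_tr)
        mu = np.mean([yi for _, yi in self.D_tr])
        return mu, 0.5, min(1.0, 5.0 * d)       # eta shrinks near measured states
    def add(self, x, y):
        self.D_tr.append((x, y))                # retraining / D_epi resampling omitted

model, x, x_des = StubModel(), 0.0, 1.0
k_bar, beta = 0.8, 0.5                          # gain-scheduling constants
for k in range(100):
    mu, sigma, eta = model.predict(x)
    k_fb = k_bar * (1.0 + beta * sigma)         # Eq. (9)
    if rng.random() < eta:                      # Eq. (7): Bernoulli(eta) trigger
        model.add(x, f_true(x) + 0.05 * rng.normal())  # costly measurement
    u = k_fb * (x_des - x) - mu                 # Eq. (8): feedback + feedforward
    x = x + u + f_true(x) + 0.05 * rng.normal()
print(f"final state {x:.3f}, measurements: {len(model.D_tr)}")
```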
The outputs (red nodes) are iteratively trained with the indicated loss functions and the respective data sets D_tr and D_epi for two epochs with a learning rate of 0.01 for each incoming data point. The current implementation pauses during the training of the disturbance model before the control loop continues.
Algorithm 1 Online learning controller
1: initialize disturbance model
2: while control task is not completed do
3: get state x_i
4: evaluate disturbance model
5: update control gains (9)
6: if measurement is required (7) then
7: measure disturbance y_i
8: update data set D_tr ← D_tr ∪ (x_i, y_i)
9: resample D_epi (3)
10: retrain disturbance model
11: end if
12: apply control u_k (8)
13: i ← i + 1
14: end while
The tracking performance of the proposed quadcopter controller, the data collection rate, and the epistemic uncertainty model are illustrated in Fig. 3. The implementation and further results are provided in the supplementary material." }, { "heading": "5 CONCLUSION", "text": "This paper presents a novel deep learning structure for decomposing epistemic and aleatoric uncertainty and proposes an advanced control framework making distinct use of these uncertainty measures. As the predictions of the model are obtained sample-free, it allows for real-time critical online learning and outperforms existing methods on the proposed uncertainty-weighted precision measure. The proposed online learning control algorithm is inherently data-efficient, adding only required points to the data set.
Future work will consider alternative functions for sorting the epi points to encode prior knowledge, such as periodicity (similar to a GP kernel), and will investigate the effect of a continuous-valued ỹ." } ]
2020
null
SP:085509d909d9fc476066424fd561bcebf6c57e51
[ "This paper shows that batch simulation can accelerate reinforcement learning in 3D environments. Batch simulation accepts and executes large batches of simulation requests at the same time on one accelerator. The authors demonstrate that this technique can substantially speed up the processing and achieve ~100x speed up in convergence. They also propose minor-changes to DD-PPO to speed up the convergence even further. The authors also included the code which is always appreciated. " ]
We accelerate deep reinforcement learning-based training in visually complex 3D environments by two orders of magnitude over prior work, realizing end-to-end training speeds of over 19,000 frames of experience per second on a single GPU and up to 72,000 frames per second on a single eight-GPU machine. The key idea of our approach is to design a 3D renderer and embodied navigation simulator around the principle of “batch simulation”: accepting and executing large batches of requests simultaneously. Beyond exposing large amounts of work at once, batch simulation allows implementations to amortize in-memory storage of scene assets, rendering work, data loading, and synchronization costs across many simulation requests, dramatically improving the number of simulated agents per GPU and overall simulation throughput. To balance DNN inference and training costs with faster simulation, we also build a computationally efficient policy DNN that maintains high task performance, and modify training algorithms to maintain sample efficiency when training with large mini-batches. By combining batch simulation and DNN performance optimizations, we demonstrate that PointGoal navigation agents can be trained in complex 3D environments on a single GPU in 1.5 days to 97% of the accuracy of agents trained on a prior state-of-the-art system using a 64-GPU cluster over three days. We provide open-source reference implementations of our batch 3D renderer and simulator to facilitate incorporation of these ideas into RL systems.
[ { "affiliations": [], "name": "Brennan Shacklett" }, { "affiliations": [], "name": "Erik Wijmans" }, { "affiliations": [], "name": "Aleksei Petrenko" }, { "affiliations": [], "name": "Manolis Savva" }, { "affiliations": [], "name": "Dhruv Batra" }, { "affiliations": [], "name": "Vladlen Koltun" }, { "affiliations": [], "name": "Kayvon Fatahalian" } ]
[ { "authors": [ "Tomas Akenine-Möller", "Eric Haines", "Naty Hoffman" ], "title": "Real-time rendering", "venue": null, "year": 2018 }, { "authors": [ "Peter Anderson", "Angel Chang", "Devendra Singh Chaplot", "Alexey Dosovitskiy", "Saurabh Gupta", "Vladlen Koltun", "Jana Kosecka", "Jitendra Malik", "Roozbeh Mottaghi", "Manolis Savva" ], "title": "On evaluation of embodied navigation agents", "venue": null, "year": 2018 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Angel Chang", "Angela Dai", "Thomas Funkhouser", "Maciej Halber", "Matthias Niessner", "Manolis Savva", "Shuran Song", "Andy Zeng", "Yinda Zhang" ], "title": "Matterport3D: Learning from RGB-D data in indoor environments", "venue": "In International Conference on 3D Vision (3DV),", "year": 2017 }, { "authors": [ "Steven Dalton", "Iuri Frosio", "Michael Garland" ], "title": "Accelerating reinforcement learning through gpu atari emulation", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Alexey Dosovitskiy", "German Ros", "Felipe Codevilla", "Antonio Lopez", "Vladlen Koltun" ], "title": "CARLA: An open urban driving simulator", "venue": "In Proceedings of the 1st Annual Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Lasse Espeholt", "Raphaël Marinier", "Piotr Stanczyk", "Ke Wang", "Marcin Michalski" ], "title": "Seed rl: Scalable and efficient deep-rl with accelerated central inference", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chuang Gan", "Jeremy Schwartz", "Seth Alter", "Martin Schrimpf", "James Traer", "Julian De Freitas", "Jonas Kubilius", "Abhishek Bhandwaldar", "Nick Haber", "Megumi Sano" ], "title": "Threedworld: A platform for interactive multi-modal physical simulation", "venue": null, "year": 2007 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. 
Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": null, "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado Van Hasselt", "David Silver" ], "title": "Distributed prioritized experience replay", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Stephen James", "Zicong Ma", "David Rovick Arrojo", "Andrew J. Davison" ], "title": "Rlbench: The robot learning benchmark & learning environment", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Xianyan Jia", "Shutao Song", "Wei He", "Yangzihao Wang", "Haidong Rong", "Feihu Zhou", "Liqiang Xie", "Zhenyu Guo", "Yuanzhou Yang", "Liwei Yu" ], "title": "Highly scalable deep learning training system with mixed-precision", "venue": "Training ImageNet in four minutes", "year": 2018 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "In IEEE Conference on Computational Intelligence and Games,", "year": 2016 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Eric Kolve", "Roozbeh Mottaghi", "Winson Han", "Eli VanderBilt", "Luca Weihs", "Alvaro Herrasti", "Daniel Gordon", "Yuke Zhu", "Abhinav Gupta", "Ali Farhadi" ], "title": "AI2-THOR: An interactive 3D environment for visual AI", "venue": null, "year": 2017 }, { "authors": [ "Heinrich Küttler", "Nantas Nardelli", "Thibaut Lavril", "Marco Selvatici", "Viswanath Sivakumar", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "Torchbeast: A pytorch platform for distributed rl", "venue": null, "year": 1910 }, { "authors": [ "Youngwoon Lee", "Edward S Hu", "Zhengyu Yang", "Alex Yin", "Joseph J Lim" ], "title": "IKEA furniture assembly environment for long-horizon complex manipulation", "venue": null, "year": 1911 }, { "authors": [ "Eric Liang", "Richard Liaw", "Robert Nishihara", "Philipp Moritz", "Roy Fox", "Ken Goldberg", "Joseph E. Gonzalez", "Michael I. 
Jordan", "Ion Stoica" ], "title": "RLlib: Abstractions for distributed reinforcement learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Aleksei Petrenko", "Zhehui Huang", "Tushar Kumar", "Gaurav Sukhatme", "Vladlen Koltun" ], "title": "Sample factory: Egocentric 3D control from pixels at 100000 fps with asynchronous reinforcement learning", "venue": "Proceedings of the International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Tal Ridnik", "Hussam Lawen", "Asaf Noy", "Itamar Friedman" ], "title": "Tresnet: High performance gpudedicated architecture", "venue": null, "year": 2003 }, { "authors": [ "Manolis Savva", "Angel X. Chang", "Alexey Dosovitskiy", "Thomas Funkhouser", "Vladlen Koltun" ], "title": "MINOS: Multimodal indoor simulator for navigation in complex environments", "venue": null, "year": 2017 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik", "Devi Parikh", "Dhruv Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "In Proceedings of IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": null, "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of Go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Greg Snook" ], "title": "Simplified 3d movement and pathfinding using navigation meshes. In Mark DeLoura (ed.), Game Programming Gems, pp. 288–304", "venue": "Charles River Media,", "year": 2000 }, { "authors": [ "Adam Stooke", "Pieter Abbeel" ], "title": "rlpyt: A research code base for deep reinforcement learning in pytorch", "venue": null, "year": 1909 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M. Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H. 
Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Luca Weihs", "Jordi Salvador", "Klemen Kotar", "Unnat Jain", "Kuo-Hao Zeng", "Roozbeh Mottaghi", "Aniruddha Kembhavi" ], "title": "Allenact: A framework for embodied ai research", "venue": null, "year": 2020 }, { "authors": [ "Erik Wijmans", "Abhishek Kadian", "Ari Morcos", "Stefan Lee", "Irfan Essa", "Devi Parikh", "Manolis Savva", "Dhruv Batra" ], "title": "DD-PPO: Learning near-perfect pointgoal navigators from 2.5 billion frames", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Fei Xia", "Amir R Zamir", "Zhiyang He", "Alexander Sax", "Jitendra Malik", "Silvio Savarese. Gibson" ], "title": "env: Real-world perception for embodied agents", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Fei Xia", "William B Shen", "Chengshu Li", "Priya Kasimbeg", "Micael Edmond Tchapmi", "Alexander Toshev", "Roberto Martı́n-Martı́n", "Silvio Savarese" ], "title": "Interactive Gibson benchmark: A benchmark for interactive navigation in cluttered environments", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Fanbo Xiang", "Yuzhe Qin", "Kaichun Mo", "Yikuan Xia", "Hao Zhu", "Fangchen Liu", "Minghua Liu", "Hanxiao Jiang", "Yifu Yuan", "He Wang" ], "title": "SAPIEN: A simulated part-based interactive environment", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling SGD batch size to 32K for ImageNet training", "venue": null, "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training BERT in 76 minutes", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Andy Zeng", "Pete Florence", "Jonathan Tompson", "Stefan Welker", "Jonathan Chien", "Maria Attarian", "Travis Armstrong", "Ivan Krasin", "Dan Duong", "Vikas Sindhwani", "Johnny Lee" ], "title": "Transporter networks: Rearranging the visual world for robotic manipulation", "venue": "Conference on Robot Learning (CoRL),", "year": 2020 }, { "authors": [ "Hongyi Zhang", "Yann N Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "2015 Ba", "Loshchilov", "2018). Hutter" ], "title": "weight-decay both to add back regularization lost by removing normalization layers and to stabilize Lamb; we use λ=10−2. We find one epoch of PPO with two mini-batches to be sufficient (instead of two epochs with two mini-batches), thus effectively doubling the learning speed", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Speed matters. It is now common for modern reinforcement learning (RL) algorithms leveraging deep neural networks (DNNs) to require billions of samples of experience from simulated environments (Wijmans et al., 2020; Petrenko et al., 2020; OpenAI et al., 2019; Silver et al., 2017; Vinyals et al., 2019). For embodied AI tasks such as visual navigation, where the ultimate goal for learned policies is deployment in the real world, learning from realistic simulations is important for successful transfer of learned policies to physical robots. In these cases simulators must render detailed 3D scenes and simulate agent interaction with complex environments (Kolve et al., 2017; Dosovitskiy et al., 2017; Savva et al., 2019; Xia et al., 2020; Gan et al., 2020).\nEvaluating and training a DNN on billions of simulated samples is computationally expensive. For instance, the DD-PPO system (Wijmans et al., 2020) used 64 GPUs over three days to learn from 2.5 billion frames of experience and achieve near-perfect PointGoal navigation in 3D scanned environments of indoor spaces. At an even larger distributed training scale, OpenAI Five used over 50,000 CPUs and 1000 GPUs to train Dota 2 agents (OpenAI et al., 2019). Unfortunately, experiments at this scale are out of reach for most researchers. This problem will only grow worse as the field explores more complex tasks in more detailed environments.\nMany efforts to accelerate deep RL focus on improving the efficiency of DNN evaluation and training – e.g., by “centralizing” computations to facilitate efficient batch execution on GPUs or TPUs (Espeholt et al., 2020; Petrenko et al., 2020) or by parallelizing across GPUs (Wijmans et al., 2020). However, most RL platforms still accelerate environment simulation by running many copies of off-the-shelf, unmodified simulators, such as simulators designed for video game engines (Bellemare et al., 2013; Kempka et al., 2016; Beattie et al., 2016; Weihs et al., 2020), on large numbers\n∗Correspondence to bps@cs.stanford.edu\nof CPUs or GPUs. This approach is a simple and productive way to improve simulation throughput, but it makes inefficient use of computation resources. For example, when rendering complex environments (Kolve et al., 2017; Savva et al., 2019; Xia et al., 2018), a single simulator instance might consume gigabytes of GPU memory, limiting the total number of instances to far below the parallelism afforded by the machine. Further, running many simulator instances (in particular when they are distributed across machines) can introduce overhead in synchronization and communication with other components of the RL system. Inefficient environment simulation is a major reason RL platforms typically require scale-out parallelism to achieve high end-to-end system throughput.\nIn this paper, we crack open the simulation black box and take a holistic approach to co-designing a 3D renderer, simulator, and RL training system. Our key contribution is batch simulation for RL: designing high-throughput simulators that accept large batches of requests as input (aggregated across different environments, potentially with different assets) and efficiently execute the entire batch at once. 
Exposing work en masse facilitates a number of optimizations: we reduce memory footprint by sharing scene assets (geometry and textures) across rendering requests (enabling orders of magnitude more environments to be rendered simultaneously on a single GPU), amortize rendering work using GPU commands that draw triangles from multiple scenes at once, hide latency of scene I/O, and exploit batch transfer to reduce data communication and synchronization costs between the simulator, DNN inference, and training. To further improve end-to-end RL speedups, the DNN workload must be optimized to match high simulation throughput, so we design a computationally efficient policy DNN that still achieves high task performance in our experiments. Large-batch simulation increases the number of samples collected per training iteration, so we also employ techniques from large-batch supervised learning to maintain sample efficiency when training with large mini-batches.
We evaluate batch simulation on the task of PointGoal navigation (Anderson et al., 2018) in 3D scanned Gibson and Matterport3D environments, and show that end-to-end optimization of batched rendering, simulation, inference, and training yields a 110× speedup over state-of-the-art prior systems, while achieving 97% of the task performance for depth-sensor-driven agents and 91% for RGB-camera-driven agents. Concretely, we demonstrate sample generation and training at over 19,000 frames of experience per second on a single GPU.1 In real-world terms, a single GPU is capable of training a virtual agent on 26 years of experience in a single day.2 This new performance regime significantly improves the accessibility and efficiency of RL research in realistic 3D environments, and opens new possibilities for more complex embodied tasks in the future." }, { "heading": "2 RELATED WORK", "text": "Systems for high-performance RL. Existing systems for high-performance RL have primarily focused on improving the efficiency of DNN components of the workload (policy inference and optimization) and use a simulator designed for efficient single agent simulation as a black box. For example, Impala and Ape-X used multiple worker processes to asynchronously collect experience for a centralized learner (Espeholt et al., 2018; Horgan et al., 2018). SEED RL and Sample Factory built upon this idea and introduced inference workers that centralize network inference, thereby allowing it to be accelerated by GPUs or TPUs (Espeholt et al., 2020; Petrenko et al., 2020). DD-PPO proposed a synchronous distributed system for similar purposes (Wijmans et al., 2020). A number of efficient implementations of these ideas have been proposed as part of RL frameworks or in other deep learning libraries (Liang et al., 2018; Stooke & Abbeel, 2019; Küttler et al., 2019).
1Samples of experience used for learning, not ‘frameskipped’ metrics typically used in Atari/DMLab. 2Calculated based on the rate at which a physical robot (LoCoBot (Carnegie Mellon University, 2019)) collects observations when operating constantly at maximum speed (0.5 m/s) and capturing 1 frame every 0.25 m.
We extend the idea of centralizing inference and learning to simulation by cracking open the simulator black box and designing a new simulation architecture for RL workloads. Our large-batch simulator is a drop-in replacement for large numbers of (non-batched) simulation workers, making it synergistic with existing asynchronous and synchronous distributed training schemes. 
It reduces the number of processes and communication overhead needed for asynchronous methods and eliminates separate simulation worker processes altogether for synchronous methods. We demonstrate this by combining our system with DD-PPO (Wijmans et al., 2020).\nConcurrently with our work, CuLE, a GPU-accelerated reimplementation of the Atari Learning Environment (ALE), demonstrates the benefits of centralized batch simulation (Dalton et al., 2020). While both our work and CuLE enable wide-batch execution of their respective simulation workloads, our focus is on high-performance batch rendering of complex 3D environments. This involves optimizations (GPU-driven pipelined geometry culling, 3D asset sharing, and asynchronous data transfer) not addressed by CuLE due to the simplicity of rendering Atari-like environments. Additionally, like CuLE, we observe that the large training batches produced by batch simulation reduce RL sample efficiency. Our work goes further and leverages large-batch optimization techniques from the supervised learning literature to mitigate the loss of sample efficiency without shrinking batch size.\nLarge mini-batch optimization. A consequence of large batch simulation is that more experience is collected between gradient updates. This provides the opportunity to accelerate learning via large mini-batch optimization. In supervised learning, using large mini-batches during optimization typically decreases the generalization performance of models (Keskar et al., 2017). Goyal et al. (2017) demonstrated that model performance can be improved by scaling the learning rate proportionally with the batch size and “warming-up” the learning rate at the start of training. You et al. (2017) proposed an optimizer modification, LARS, that adaptively scales the learning rate at each layer, and applied it to SGD to improve generalization further. In reinforcement learning and natural language processing, the Adam optimizer (Kingma & Ba, 2015) is often used instead of SGD. Lamb (You et al., 2020) combines LARS (You et al., 2017) with Adam (Kingma & Ba, 2015). We do not find that large mini-batch optimization harms generalization in reinforcement learning, but we do find it decreases sample efficiency. We adapt the techniques proposed above – learning rate scaling (You et al., 2017) and the Lamb optimizer (You et al., 2020) – to improve sample efficiency.\nSimulators for machine learning. Platforms for simulating realistic environments for model training fall into two broad categories: those built on top of pre-existing game engines (Kolve et al., 2017; Dosovitskiy et al., 2017; Lee et al., 2019; Gan et al., 2020; James et al., 2020), and those built from scratch using open-source 3D graphics and physics libraries (Savva et al., 2017; 2019; Xia et al., 2018; 2020; Xiang et al., 2020; Zeng et al., 2020). While improving simulator performance has been a focus of this line of work, it has been evaluated in a narrow sense (i.e. frame rate benchmarks for predetermined agent trajectories), not accounting for the overall performance of end-to-end RL training. We instead take a holistic approach to co-design rendering and simulation modules and their interfaces to the RL training system, obtaining significant gains in end-to-end throughput over the state of the art." }, { "heading": "3 SYSTEM DESIGN & IMPLEMENTATION", "text": "Batch simulation accelerates rollout generation during RL training by processing many simulated environments simultaneously in large batches. Fig. 
2 illustrates how batch simulation interacts with policy inference to generate rollouts. Simulation for sensorimotor agents, such as the PointGoal navigation task targeted by our implementation, can be separated into two tasks: determining the next environment state given an agent’s actions and rendering its sensory observations. Therefore, our design utilizes two components: a batch simulator that performs geodesic distance and navigation mesh (Snook, 2000) computations on the CPU, and a batch renderer that renders complex 3D environments on the GPU.
During rollout generation, batches of requests are passed between these components – given N agents, the simulator produces a batch of N environment states. Next, the renderer processes the batch of environment states by simultaneously rendering N frames and exposing the result directly in GPU memory. Agent observations (from both the simulator and the renderer) are then provided as a batch to policy inference to determine the next actions for the N agents.
The key idea is that the batch simulator and renderer implementations (in addition to the DNN workload) take responsibility for their own parallelization. Large batch sizes (values of N on the order of hundreds to thousands of environments) provide opportunities for implementations to efficiently utilize parallel execution resources (e.g., GPUs) as well as amortize processing, synchronization, and data communication costs across many environments. The remainder of this section describes the design and key implementation details of our system’s batch simulator and batch renderer, as well as contributions that improve the efficiency of policy inference and optimization in this regime." }, { "heading": "3.1 BATCH ENVIRONMENT SIMULATION", "text": "Our CPU-based batch simulator executes geodesic distance and navigation mesh computations in parallel for a large batch of environments. Due to differences in navigation mesh complexity across environments, the time to perform simulation may differ per environment. This variance is the source of workload imbalance problems in parallel synchronous RL systems (Wijmans et al., 2020; Savva et al., 2019) and one motivation for recent asynchronous designs (Petrenko et al., 2020; Espeholt et al., 2020; 2018). To ensure good workload balance, our batch simulator operates on large batches that contain significantly more environments than the number of available CPU cores and dynamically schedules work onto cores using a pool of worker threads (simulation for each environment is carried out sequentially). Worker threads report simulation results into a designated per-environment slot in a results buffer that is communicated to the renderer via a single batched request when all environment simulation for a batch is complete. To minimize CPU memory usage, the simulator only loads navigation meshes and does not utilize the main rendering assets." }, { "heading": "3.2 BATCH RENDERING", "text": "A renderer for producing RL agent observations in scanned real-world environments must efficiently synthesize many low-resolution renderings (e.g., 64×64 pixels) of scenes featuring high-resolution textures and complex meshes.3 Low-resolution output presents challenges for GPU acceleration. Rendering images one at a time produces too little rendering work to efficiently utilize a modern GPU rendering pipeline’s parallel processing resources. 
Rendering many environments concurrently but individually (e.g., from different worker threads or processes) exposes more rendering work to the GPU, but incurs the overhead of sending the GPU many fine-grained rendering commands.
3The Matterport3D dataset contains up to 600K triangles per 3D scan.
To address the problem of rendering many small images efficiently, our renderer combines the GPU commands required to render observations for an entire simulation batch of N environments into a single rendering request to the GPU – effectively drawing the entire batch as a single large frame (individual environment observations are tiles in the image). This approach exposes large amounts of rendering work to the GPU and amortizes GPU pipeline configuration and rendering overhead over an entire batch. Our implementation makes use of modern GPU pipeline features (Khronos Group, 2017) that allow rendering tasks that access different texture and mesh assets to proceed as part of a single large operation (avoiding GPU pipeline flushes due to pipeline state reconfiguration).
Scene asset sharing. Efficiently utilizing a GPU requires batches to be large (we use N up to 1024). However, geometry and texture assets for a single environment may be gigabytes in size, so naively loading unique assets for each environment in a large batch would exceed available GPU memory. Our implementation allows multiple environments in a batch to reference the same 3D scene assets in GPU memory. Specifically, our system materializes K unique assets in GPU memory (K ≪ N) and constructs batches of N environments that reference these assets. Asset reuse decreases the diversity of training experiences in a batch, so to preserve diversity we limit the ratio of N to K in any one batch to 32, and continuously rotate the set of K assets in GPU memory. The renderer refreshes the set of K assets by asynchronously loading new scene assets into GPU memory during the main rollout generation and learning loop. As episodes complete, new environments are constructed to reference the newly loaded assets, and assets no longer referenced by active environments are removed from GPU memory. This design allows policy optimization to learn from an entire dataset of assets without exceeding GPU memory or incurring the latency costs of frequent asset loading.
Pipelined geometry culling. When rendering detailed geometry to low-resolution images, most scene triangles cover less than one pixel. As a result, rendering performance is determined by the rate the GPU’s rasterization hardware processes triangles, not the rate the GPU can shade covered pixels. To reduce the number of triangles the GPU pipeline must process, the renderer uses idle GPU cores to identify and discard geometry that lies outside the agent’s view—a process known as frustum culling (Akenine-Möller et al., 2018). Our implementation pipelines frustum culling operations (implemented using GPU compute shaders) with rendering for different environments in a batch. This pipelined design increases GPU utilization by concurrently executing culling work on the GPU’s programmable cores and rendering work on the GPU’s rasterization hardware." }, { "heading": "3.3 POLICY DNN ARCHITECTURE", "text": "High-throughput batch simulation creates a need for high-throughput policy DNN inference. Therefore, we develop a policy DNN architecture designed to achieve an efficient balance between high task performance and low computational cost. 
Prior work in PointGoal navigation (Wijmans et al., 2020) used a policy DNN design where a visual encoder CNN processes an agent’s visual sensory information followed by an LSTM (Hochreiter & Schmidhuber, 1997) that determines the policy’s actions. Our policy DNN uses this core design augmented with several performance optimizations.
First, we reduce DNN effective input resolution from 128×128 (Wijmans et al., 2020) to 64×64. Beyond this simple optimization, we choose a shallow visual encoder CNN – a nine-layer ResNet (He et al., 2016) (ResNet18 with every other block removed), rather than the 50 layer (or larger) ResNets used by prior work. To counteract reduced task performance from the ResNet’s relatively low capacity, all stages include Squeeze-Excite (SE) blocks (Hu et al., 2018) with r=16. Additionally, we use a SpaceToDepth stem (Ridnik et al., 2020), which we find performs equally to the standard Conv+MaxPool stem while using less GPU memory and compute.
Finally, we avoid the use of normalization layers in the ResNet as these require spatial reductions over the feature maps, preventing layer-fusion optimizations. Instead, the CNN utilizes Fixup Initialization (Zhang et al., 2019) to improve training stability. Fixup Initialization replaces expensive normalization layers after each convolution with cheap elementwise multiplication and addition." }, { "heading": "3.4 LARGE MINI-BATCH POLICY OPTIMIZATION", "text": "In on-policy reinforcement learning, policy optimization utilizes trajectories of experience to reduce bias and for backpropagation-through-time. When generating trajectories of length L with a simulation batch size of N, a rollout will have N×L steps of experience. Therefore, a consequence of simulation with large N is that more experience is collected per rollout.
Large N presents the opportunity to utilize large mini-batches to improve the throughput of policy optimization; however, throughput must be balanced against generalization and sample efficiency to ensure that reduced task performance does not offset the throughput gains. Although large mini-batch training is known to hurt generalization in supervised learning (Keskar et al., 2017), we do not see evidence of this for RL. Conversely, we do find that sample efficiency for PointGoal navigation is harmed by naively increasing N. Fortunately, we are able to mitigate this loss of sample efficiency using techniques for improving generalization from the large mini-batch optimization literature.
First, we scale the learning rate by √(B/B_base), where B_base = 256 and B, the training batch size, is N×L divided by the number of mini-batches per training iteration. We find it beneficial to use the scaled learning rate immediately instead of ‘warming-up’ the learning rate (Goyal et al., 2017). Second, we use and adapt the Lamb optimizer (You et al., 2020). Lamb is a modification to Adam (Kingma & Ba, 2015) that applies LARS (You et al., 2017) to the step direction estimated by Adam to better handle high learning rates. Since the Adam optimizer is often used with PPO (Schulman et al., 2017), Lamb is a natural choice. Given the Adam step direction s_t^{(k)} for weights θ_t^{(k)},
θ_{t+1}^{(k)} = θ_t^{(k)} − η_t r_t^{(k)} (s_t^{(k)} + λθ_t^{(k)}), r_t^{(k)} = φ(‖θ_t^{(k)}‖) / ‖s_t^{(k)} + λθ_t^{(k)}‖, (1)
where η_t is the learning rate and λ is the weight decay coefficient. 
We set φ(‖θ_t^{(k)}‖) = min{‖θ_t^{(k)}‖, 10.0} and introduce an additional clip on the trust ratio r_t^{(k)}:
r_t^{(k)} = min{ max{ φ(‖θ_t^{(k)}‖) / ‖s_t^{(k)} + λθ_t^{(k)}‖, ρ }, 1/ρ }. (2)
We find the exact value of ρ to be flexible (we observed similar training with ρ ∈ {10^−2, 10^−3, 10^−4}) and also observed that this clip is only influential at the start of training, suggesting that there is an initialization scheme where it is unnecessary." }, { "heading": "4 RESULTS", "text": "We evaluate the impact of our contributions on end-to-end training speed and task performance by training PointGoal navigation agents in the complex Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) environments. The fastest published end-to-end training performance in these environments is achieved with the synchronous RL implementation presented with DD-PPO (Wijmans et al., 2020). Therefore, both our implementation and the baselines we compare against are synchronous PPO-based RL systems." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "PointGoal navigation task. We train and evaluate agents via the same procedure as Wijmans et al. (2020): agents are trained for PointGoalNav (Anderson et al., 2018) with either a Depth sensor or an RGB camera. Depth agents are trained on Gibson-2plus (Xia et al., 2018) and, consistent with Wijmans et al. (2020), RGB agents are also trained on Matterport3D (Chang et al., 2017). RGB camera simulation requires textures for the renderer, increasing the GPU memory consumed by each scene significantly. Both classes of agent are trained on 2.5 billion simulated samples of experience.
Agents are evaluated on the Gibson dataset (Xia et al., 2018). We use two metrics: Success, whether or not the agent reached the goal, and SPL (Anderson et al., 2018), a measure of both Success and efficiency of the agent’s path. We perform policy evaluation using Habitat-Sim (Savva et al., 2019), unmodified for direct comparability to prior work.
Batch Processing Simulator (BPS). We provide an RL system for learning PointGoalNav built around the batch simulation techniques and system-wide optimizations described in Section 3. The remainder of the paper refers to this system as BPS (Batch Processing Simulator). To further accelerate the policy DNN workload, BPS uses half-precision inference and mixed-precision training.
Baseline. The primary baseline for this work is Wijmans et al. (2020)’s open-source PointGoalNav implementation, which uses Habitat-Sim (Savva et al., 2019) – the prior state of the art in high-performance simulation of realistic environments such as Gibson. Unlike BPS, multiple environments are simulated simultaneously using parallel worker processes that render frames at 256×256 pixels before downsampling to 128×128 for the visual encoder. The fastest published configuration uses a ResNet50 visual encoder. Subsequent sections refer to this implementation as WIJMANS20.
Ablations. As an additional baseline, we provide WIJMANS++, which uses the optimized SE-ResNet9-based policy DNN (including performance optimizations and resolution reduction relative to WIJMANS20) developed for BPS, but otherwise uses the same system design and simulator as WIJMANS20 (with a minor modification to not load textures for Depth agents). WIJMANS++ serves to isolate the impact of two components of BPS: first, the low-level DNN efficiency improvements, and, more importantly, the performance of batch simulation versus WIJMANS20’s independent simulation worker design. 
Additionally, to ablate the effect of our encoder CNN architecture optimizations, we include a variant of BPS, BPS-R50, that uses the same ResNet50 visual encoder and input resolution as WIJMANS20, while maintaining the other optimizations of BPS.\nMulti-GPU training. To support multi-GPU training, all three systems replace standard PPO with DD-PPO (Wijmans et al., 2020). DD-PPO scales rollout generation and policy optimization across all available GPUs, scaling the number of environments simulated and the number of samples gathered between training iterations proportionally. We report results with eight GPUs.\n[Table 1: end-to-end training throughput (FPS) by sensor, system, CNN, and agent resolution, for the RTX 3090, RTX 2080Ti, Tesla V100, 8×2080Ti, and 8×V100 configurations.]\nDetermining batch size. The per-GPU batch size, N, controls a trade-off between memory usage, sample efficiency, and speed. For BPS, N designates the batch size for simulation, inference, and training. For WIJMANS20 and WIJMANS++, N designates the batch size for inference and training, as well as the number of simulation processes. WIJMANS20 sets N=4 for consistency with Wijmans et al. (2020). To maximize performance of single-GPU runs, BPS uses the largest batch size that fits in GPU memory, subject to the constraint that no one scene asset can be shared by more than 32 environments in the batch. In eight-GPU configurations, DD-PPO scales the number of parallel rollouts with the number of GPUs, so to maintain reasonable sample efficiency BPS limits per-GPU batch size to N=128, with K=4 active scenes per GPU. WIJMANS++ Depth experiments use N=64 (limited by system memory due to N separate processes running Habitat-Sim). Batch size in WIJMANS++ RGB experiments is limited by GPU memory (N ranges from 6 to 20 depending on the GPU). Appendix B provides the batch sizes used in all experiments.\nBenchmark evaluation. We report end-to-end performance benchmarks in terms of average frames per second (FPS) achieved by each system. We measure FPS as the number of samples of experience processed over 16,000 inference batches divided by the time to complete rollout generation and training for those samples. In experiments that run at 128×128 pixel sensor resolution, rendering occurs at 256×256 and is downsampled for the policy DNN to match the behavior of WIJMANS20 regardless of system, while 64×64 resolution experiments render without downsampling. Results are reported across three models of NVIDIA GPUs: Tesla V100, GeForce RTX 2080Ti, and GeForce RTX 3090. (The different GPUs are also accompanied by different CPUs; see Appendix C.)" }, { "heading": "4.2 END-TO-END TRAINING SPEED", "text": "Single-GPU performance. On a single GPU, BPS trains agents 45× (9,000 vs. 190 FPS, Tesla V100) to 110× (19,900 vs. 180 FPS, RTX 3090) faster than WIJMANS20 (Table 1). The greatest speedup was achieved using the RTX 3090, which trains Depth agents at 19,900 FPS and RGB agents at 13,300 FPS – a 110× and 95× increase over WIJMANS20, respectively. This 6000 FPS performance drop from Depth to RGB is not caused by the more complex rendering workload, because the additional cost of fetching RGB textures is masked by the dominant cost of geometry processing. Instead, due to memory constraints, BPS must reduce the batch size (N) for RGB tasks, reducing the performance of all components (further detail in Section 4.4).
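The FPS methodology under “Benchmark evaluation” above is simple to restate in code. A minimal sketch, where `system.run_batch()` is a hypothetical hook that performs one inference batch (plus its share of rollout generation and training) and returns the number of experience samples processed:

```python
import time

def average_fps(system, num_inference_batches=16_000):
    # Samples of experience processed over the benchmark window, divided by
    # the wall-clock time to complete rollout generation and training for them.
    samples = 0
    start = time.perf_counter()
    for _ in range(num_inference_batches):
        samples += system.run_batch()
    return samples / (time.perf_counter() - start)
```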
To assess how much of the BPS speedup is due to the SE-ResNet9 visual encoder and lower input resolution, we also compare BPS-R50 and WIJMANS20, which have matching encoder architecture and resolution. For Depth agents training on the RTX 3090, BPS-R50 still achieves greater than 10× performance improvement over WIJMANS20 (2,300 vs. 180 FPS), demonstrating the benefits of batch simulation even in DNN-heavy workloads. BPS-R50 is only 6× faster than WIJMANS20 on the RTX 2080Ti, since the ResNet50 encoder’s larger memory footprint requires batch size to be reduced from N=128 on the RTX 3090 (24 GB RAM) to N=64 on the RTX 2080Ti (11 GB RAM). Similarly, increasing DNN input resolution increases memory usage, forcing batch size to be decreased and reducing performance (Table A1).\nThe BPS batch simulation architecture is significantly faster than the WIJMANS++ design that uses multiple worker processes. When training Depth agents, BPS outperforms WIJMANS++ by 4.5× to 7.8×, with a greater speedup of 6× to 13× for RGB agents. Since BPS and WIJMANS++ use the same policy DNN and input resolution, this comparison isolates the performance advantage of batch simulation and rendering against an optimized version of the multiple-worker-process-based design: WIJMANS++ is up to 15× faster than WIJMANS20. The relative speedup of BPS for RGB agents is larger because WIJMANS++ does not share environment assets between simulator instances. Textures needed for RGB rendering significantly increase the memory footprint of each simulator instance and limit WIJMANS++ to as few as N=6 workers (compared to N=64 for Depth agents). Conversely, BPS shares 3D assets across environments and maintains a batch size of at least N=128 for RGB agents.\nMulti-GPU performance. BPS achieves high end-to-end throughput when running in eight-GPU configurations: up to 72,000 FPS for Depth agents on eight RTX 2080Ti. Relative to WIJMANS20, BPS is 29× to 34× faster with eight Tesla V100s and 45× faster with eight RTX 2080Ti. These speedups are lower than in the single-GPU configurations, because BPS reduces the per-GPU batch size in eight-GPU configurations to avoid large aggregate batches that harm sample efficiency. This leads to imperfect multi-GPU scaling for BPS: for Depth agents, each RTX 2080Ti is approximately 4000 FPS slower in an eight-GPU configuration than in a single-GPU configuration. Eight-GPU scaling for Depth is lower on the Tesla V100s (3.7×) compared to the 2080Ti (5.6×) because larger batch sizes are needed to utilize the large number of parallel compute units on the Tesla V100." }, { "heading": "4.3 POLICY TASK PERFORMANCE", "text": "To understand how the system design and visual encoder architecture of BPS impact learning, we evaluate the task performance of agents trained with BPS in an eight-GPU configuration with an aggregate batch size of N=1024. For Depth agents, the reduction in encoder CNN depth results in a 1% and 3% decrease in SPL on Val and Test respectively, with a negligible Success change on Val and a 0.9 Success decrease on Test (Table 2, row 1 vs. 2). For RGB agents, BPS suffers a performance loss of 3.8/1.3 SPL/Success on Val and 8.3/2.0 SPL/Success on Test (Table 2, row 3 vs. 4). Despite this performance reduction, the RGB agent trained by BPS would have won the 2019 Habitat challenge by 4 SPL and is only beaten by WIJMANS20’s ResNet50-based policy on Test.\nSPL vs. training time. BPS significantly outperforms the baselines in terms of wall-clock training time to reach a given SPL. After 10 hours of training on a single RTX 3090, BPS reaches over 80% SPL (on Val) while WIJMANS20 and WIJMANS++ reach only 40% and 65% SPL respectively (Fig. 3).
Furthermore, BPS converges within 1% of peak SPL at approximately 20 hours; conversely, neither baseline reaches convergence within 48 hours. BPS converges to a lower final SPL in Fig. 3 than in Table 2, likely due to the tested single-GPU configuration differing in batch size and scene asset swapping frequency compared to the eight-GPU configuration used to produce Table 2.\nEffect of batch size. The end-to-end training efficiency of BPS is dependent on batch size (N): larger N will increase throughput and reduce wall-clock time to reach a given number of samples, but may harm sample efficiency and final task performance at convergence. We evaluate this relationship by training Depth agents with BPS across a range of N. As shown in Fig. 4, all experiments converge within 1% of the peak SPL achieved; however, N=256 halves total throughput compared to N=1024 (the setting used elsewhere in the paper for eight-GPU configurations). At the high end, N=4096 yields slightly worse SPL than N=1024 and is only 20% faster. Larger batch sizes also require more memory for rollout storage and training, which is prohibitive for RGB experiments that require significant GPU memory for texture assets. In terms of sample efficiency alone, Fig. A1 shows that smaller batch sizes have a slight advantage (without considering training speed)." }, { "heading": "4.4 RUNTIME BREAKDOWN", "text": "Fig. 5 provides a breakdown of time spent in each of the main components of the BPS system (µs per frame). Nearly 60% of BPS runtime on the RTX 3090 GPU (for both Depth and RGB) is spent in DNN inference and training, even when rendering complex 3D environments and using a small, low-cost policy DNN. This demonstrates the high degree of simulation efficiency achieved by BPS. Furthermore, the results in Table A2 for BPS-R50 show that, with the larger visual encoder, over 90% of per-frame time (on Depth tasks) is spent in the DNN workload (70% on learning).\nBatch size (N) heavily impacts DNN performance. DNN operations for Depth (N=1024) are 2× faster than for RGB (N=256) on the RTX 3090, because RGB must use a smaller batch size to fit texture assets in GPU memory. The larger batch size improves GPU utilization for all system components. A similar effect is visible when comparing the single-GPU and eight-GPU V100 breakdowns. BPS reduces the per-GPU batch size from N=1024 to N=128 in eight-GPU experiments to maintain an aggregate batch size of 1024 for sample efficiency. Further work in policy optimization to address this learning limitation would improve multi-GPU scaling by allowing larger aggregate batch sizes." }, { "heading": "5 DISCUSSION", "text": "We demonstrated that architecting an RL training system around the idea of batch simulation can accelerate learning in complex 3D environments by one to two orders of magnitude over prior work. With these efficiency gains, agents can be trained with billions of simulated samples from complex environments in about a day using only a single GPU. We believe these fast turnaround times stand to make RL in realistic simulated environments accessible to a broad range of researchers, increase the scale and complexity of tasks and environments that can be explored, and facilitate new studies of how much visual realism is needed to learn a given task (e.g., dynamic lighting, shadows, custom augmentations).
To facilitate such efforts, our system is available open-source at https://github.com/shacklettbp/bps-nav.\nMore generally, this work demonstrates the value of building RL systems around components that have been specifically designed for RL workloads, not repurposed from other application domains. We believe this philosophy should be applied to other components of future RL systems, in particular to new systems for performing physics simulation in complex environments." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by NSF, DARPA, ONR YIP, ARO PECASE, Intel, and Facebook. EW is supported in part by an ARCS fellowship. We thank NVIDIA for GPU equipment donations. We also thank the Habitat team for helpful discussions and their support of this project." }, { "heading": "A ADDITIONAL RESULTS", "text": "A.1 FLEE AND EXPLORE TASKS ON AI2-THOR DATASET\nTo demonstrate batch simulation and rendering on additional tasks besides PointGoal navigation, BPS also supports the Flee (find the farthest valid location from a given point) and Explore (visit as much of an area as possible) tasks. We evaluate BPS’s performance on these tasks on the AI2-THOR (Kolve et al., 2017) dataset to additionally show how batch rendering performs on assets with less geometric complexity than the scanned geometry in Gibson and Matterport3D.\nTable A3 shows the learned task performance and end-to-end training speed of BPS on these two tasks for Depth-sensor-driven agents. For both tasks, BPS outperforms its results on PointGoal navigation by around 5000 frames per second, largely due to the significantly reduced geometric complexity of the AI2-THOR dataset versus Gibson. Additionally, the Explore task slightly outperforms the Flee task by 600 FPS on average due to a simpler simulation workload, because no geodesic distance computation is necessary.\nTask | FPS | Training Score | Validation Score\nExplore | 25300 | 6.42 | 5.61\nFlee | 24700 | 4.27 | 3.65\nTable A3: Task and FPS results for Flee and Explore tasks with Depth agents (on an RTX 3090), where the Training / Validation Score is measured in meters for the Flee task and in the number of cells visited on the navigation mesh for the Explore task. These tasks achieve higher throughput than PointGoal navigation due to the lower-complexity AI2-THOR meshes used. The relatively low scores are a result of the small spatial size of the AI2-THOR assets.\nFigure A1: BPS’s validation set SPL for Depth vs. number of training samples across a range of aggregate batch sizes (N ∈ {256, 512, 1024, 4096}). This graph shows that sample efficiency slightly decreases with larger batch sizes (with the exception of N=512 vs. N=1024, where N=1024 exhibits a better validation score). Ultimately, the difference in converged performance is less than 1% SPL between different batch sizes. Although N=256 converges the fastest in terms of training samples needed, Fig. 4 shows that N=256 performs poorly in terms of SPL achieved per unit of training time.\nA.2 STANDALONE BATCH RENDERER PERFORMANCE\nTo evaluate the absolute performance of BPS’s batch renderer independently from other components of the system, Fig. A2 shows the performance of the standalone renderer on the “Stokes” scene from the Gibson dataset using a set of camera positions taken from a training run. A batch size of 512
achieves a 3.7× performance increase over a batch size of 1, which emphasizes the fact that much of the end-to-end speedup provided by batch rendering comes from the performance benefits of larger inference and training batches made possible by the batch renderer’s 3D asset sharing.\nFigure A2: Frames per second achieved by the standalone renderer on an RTX 3090 across a range of resolutions (32–512) and batch sizes (1–1024) for an RGB sensor on the Gibson dataset. Performance saturates at a batch size of 512. For lower batch sizes, increasing resolution has a minimal performance impact, because the GPU still isn’t fully utilized. As resolution increases with larger batches, the relative decrease in performance from higher resolution increases.\nFig. A2 also demonstrates that the batch renderer can maintain extremely high performance (approximately 23,000 FPS) at much higher resolutions than used in the RL tasks presented in this work. While this may be useful for tasks requiring higher-resolution inputs, considerable advancements would need to be made in DNN performance to handle these high-resolution frames at a framerate comparable to the renderer’s.\nA.3 LAMB OPTIMIZER ABLATION STUDY\nTo demonstrate the benefit provided by the Lamb optimizer with regard to sample efficiency, Fig. A3 shows a comparison between the Lamb optimizer used by BPS and the Adam optimizer used by WIJMANS20 and WIJMANS++. The training setup for these two optimizers is identical, with the exception of the removal of learning rate scaling for Adam, as this causes training to diverge. The benefits of Lamb are most pronounced early in training, allowing Lamb to reach within 0.7% SPL of convergence after just 1 billion samples of experience (while Adam trails Lamb by 1.5% at the same point). As training progresses, the difference shrinks as Adam slowly converges, for a final difference of 0.6% SPL after 2.5 billion frames.\nFigure A3: The effect of the Lamb optimizer versus the baseline Adam optimizer on sample efficiency while training a Depth-sensor-driven agent. Lamb maintains a consistent lead in terms of SPL throughout training, especially in the first half of training." }, { "heading": "B EXPERIMENT AND TRAINING ADDITIONAL DETAILS", "text": "Complete PointGoal navigation description. We train and evaluate agents via the same procedure as Wijmans et al. (2020). Specifically, agents are trained for PointGoalNav (Anderson et al., 2018), where the agent is tasked with navigating to a point specified relative to its initial location. Agents are equipped with a GPS+Compass sensor (providing the agent with its position and orientation relative to the starting position) and either a Depth sensor or an RGB camera. The agent has access to four low-level actions: forward (0.25m), turn left (10◦), turn right (10◦), and stop.\nAgents are evaluated on the Gibson dataset (Xia et al., 2018). We use two metrics to evaluate the agents: Success, whether or not the agent called stop within 0.2m of the goal, and SPL (Anderson et al., 2018), a measure of both Success and efficiency of the agent’s path.
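For reference, the SPL metric used throughout can be computed as follows. This is a sketch of the standard definition from Anderson et al. (2018), not code from this paper; argument names are illustrative.

```python
def spl(successes, shortest_dists, path_lengths):
    # Success weighted by normalized inverse Path Length: the mean over
    # episodes of S_i * l_i / max(p_i, l_i), where S_i indicates success,
    # l_i is the geodesic shortest-path distance to the goal, and p_i is
    # the length of the path the agent actually took.
    episodes = zip(successes, shortest_dists, path_lengths)
    return sum(s * l / max(p, l) for s, l, p in episodes) / len(successes)
```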
During evaluation, the agent does not have access to reward.\nHalf-precision inference and mixed-precision training. We perform inference in half precision for all components except the action distribution. We train in mixed precision (Jia et al., 2018), utilizing the Apex library in O2 mode. We use half precision for all computations except the action distribution and losses. Additionally, the optimizer still utilizes single precision for all computations and applies gradients to a single-precision copy of the weights.\nTraining hyper-parameters. Our hyper-parameters for eight-GPU runs are given in Table A4. We additionally employ a gradual learning rate decay, where we decay the learning rate from its scaled value back to the base value over the first half of training using a cosine schedule.\nWe find it necessary to set ρ=1.0 for the bias parameters, fixup parameters, and layer-norm parameters of the network, making the optimizer for these parameters equivalent to AdamW (Kingma & Ba, 2015; Loshchilov & Hutter, 2018). We also use L2 weight decay both to add back regularization lost by removing normalization layers and to stabilize Lamb; we use λ=10^-2.\nWe find one epoch of PPO with two mini-batches to be sufficient (instead of two epochs with two mini-batches), thus effectively doubling the learning speed. We also evaluated one mini-batch, but found two to be beneficial while also having little penalty on overall training speed.\nPPO Parameters | Value\nPPO Epochs | 1\nPPO Mini-Batches | 2\nPPO Clip | 0.2\nClipped value loss | No\nPer mini-batch advantage normalization | No\nγ | 0.99\nGAE-λ (Schulman et al., 2016) | 0.95\nLearning rate | 5.0×10^-4 (Depth), 2.5×10^-4 (RGB)\nLearning rate scaling | √(B/B_base)\nB_base | 256\nMax gradient norm | 1.0\nWeight decay | 0.01\nLamb ρ | 0.01\nPer-GPU Parameters | Value\nNumber of unique scenes (K) | 4\nSimulation batch size / Number of Environments (N) | 128\nRollout length (L) | 32\nTable A4: Hyper-parameters used for BPS training on 8 GPUs." }, { "heading": "C BENCHMARKING ADDITIONAL DETAILS", "text": "Pretrained benchmarking. A pretrained DNN is used when benchmarking to avoid frequent environment resets at the start of training.\nBenchmarking hyper-parameters. Table A5 shows the settings for hyper-parameters that impact system throughput.\nGPU details. We report FPS results on three models of NVIDIA GPUs: Tesla V100, GeForce RTX 2080Ti, and GeForce RTX 3090. We demonstrate scaling to multiple GPUs with eight-GPU configurations for all but the RTX 3090. Single-GPU and eight-GPU results are benchmarked on the same machines; however, single-GPU configurations are limited to 12 cores and 64 GB of RAM as this is a reasonable configuration for a single-GPU workstation.\nCPU details. Each GPU configuration also uses a different CPU configuration based on hardware access. Tesla V100 benchmarking was done with 2× Intel Xeon E5-2698 v4 (a DGX-1 station). RTX 2080Ti benchmarking was done with 2× Intel Xeon Gold 6226. RTX 3090 benchmarking was done with 1× Intel i7-5820k. On all CPUs, we disable Hardware P-State (HWP) (where applicable) and put software P-State in performance mode.
Our CPU load on simulation worker cores is inherently sporadic, and we find that certain CPUs are unable to change clock frequencies fast enough to avoid incurring a considerable performance penalty when allowed to enter a power-saving state.\nSensor | System | CNN | Resolution | PPO Epochs | Rollout length (L) | N: V100 1-GPU / V100 8-GPU / 2080Ti 1-GPU / 2080Ti 8-GPU / 3090 1-GPU\nDepth | BPS | SE-ResNet9 | 64 | 1 | 32 | 1024 / 128 / 512 / 128 / 1024\nDepth | BPS | SE-ResNet9 | 128 | 1 | 32 | 512 / 128 / 128 / 128 / 512\nDepth | BPS-R50 | ResNet50 | 64 | 1 | 32 | 512 / 128 / 256 / 128 / 512\nDepth | BPS-R50 | ResNet50 | 128 | 1 | 32 | 256 / 128 / 64 / 64 / 128\nDepth | WIJMANS++ | SE-ResNet9 | 64 | 1 | 32 | 64 (all configurations)\nDepth | WIJMANS20 | ResNet50 | 128 | 2 | 128 | 4 (all configurations)\nRGB | BPS | SE-ResNet9 | 64 | 1 | 32 | 512 / 128 / 128 / 128 / 256\nRGB | BPS | SE-ResNet9 | 128 | 1 | 32 | 256 / 128 / 64∗ / 64∗ / 256\nRGB | BPS-R50 | ResNet50 | 64 | 1 | 32 | 256 / 128 / 64 / 64 / 256\nRGB | BPS-R50 | ResNet50 | 128 | 1 | 32 | 128 / 128 / 32∗ / 32∗ / 64\nRGB | WIJMANS++ | SE-ResNet9 | 64 | 1 | 32 | 20 / 20 / 6 / 6 / 16\nRGB | WIJMANS20 | ResNet50 | 128 | 2 | 128 | 4 (all configurations)\nTable A5: System configuration parameters for Table 1. ∗ indicates 4 mini-batches per epoch instead of 2." } ]
2021
LARGE BATCH SIMULATION FOR DEEP REINFORCEMENT LEARNING
SP:24ee3df238dc009de59a51589f2e171d750b345e
[ "The paper proposes an adversarial framework DINO to train translation models from source to target and target to source. The basic idea is to replace generator and discriminator in the energy based GAN with two source-to-target generation models. The discriminator(reverse generator) and the generator competes in a minimax game to reconstruct the data. The framework is further extended with duplicate output heads for both discriminator and generator to enhance the training robustness. " ]
Domain translation is the process of transforming data from one domain to another while preserving the common semantics. Some of the most popular domain translation systems are based on conditional generative adversarial networks, which use source domain data to drive the generator and as an input to the discriminator. However, this approach does not enforce the preservation of shared semantics since the conditional input can often be ignored by the discriminator. We propose an alternative method for conditioning and present a new framework, where two networks are simultaneously trained, in a supervised manner, to perform domain translation in opposite directions. Our method is not only better at capturing the shared information between two domains but is more generic and can be applied to a broader range of problems. The proposed framework performs well even in challenging cross-modal translations, such as video-driven speech reconstruction, for which other systems struggle to maintain correspondence.
[ { "affiliations": [], "name": "Konstantinos Vougioukas" }, { "affiliations": [], "name": "Stavros Petridis" }, { "affiliations": [], "name": "Maja Pantic" } ]
[ { "authors": [ "Sercan O. Arik", "Gregory Diamos", "Andrew Gibiansky", "John Miller", "Kainan Peng", "Wei Ping", "Jonathan Raiman", "Yanqi Zhou" ], "title": "Deep voice 2: Multi-speaker neural text-to-speech", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Mikel Artetxe", "Gorka Labaka", "Eneko Agirre", "Kyunghyun Cho" ], "title": "Unsupervised neural machine translation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "David Berthelot", "Thomas Schumm", "Luke Metz" ], "title": "Began: Boundary equilibrium generative adversarial networks", "venue": "In arXiv preprint:1703.10717,", "year": 2017 }, { "authors": [ "Lukas Biewald" ], "title": "Experiment tracking with weights and biases, 2020", "venue": "URL https://www. wandb.com/. Software available from wandb.com", "year": 2020 }, { "authors": [ "Chia Che Chang", "Chieh Hubert Lin", "Che Rung Lee", "Da Cheng Juan", "Wei Wei", "Hwann Tzong Chen" ], "title": "Escaping from collapsing modes in a constrained space", "venue": "Lecture Notes in Computer Science,", "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "Yukun Zhu", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Encoderdecoder with atrous separable convolution for semantic image segmentation", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Runfa Chen", "Wenbing Huang", "Binghui Huang", "Fuchun Sun", "Bin Fang" ], "title": "Reusing discriminators for encoding: Towards unsupervised image-to-image translation", "venue": null, "year": 2020 }, { "authors": [ "Joon Son Chung", "Amir Jamaludin", "Andrew Zisserman" ], "title": "You said that", "venue": "In BMVC,", "year": 2017 }, { "authors": [ "Martin Cooke", "Jon Barker", "Stuart Cunningham", "Xu Shao" ], "title": "An audio-visual corpus for speech perception and automatic speech recognition", "venue": "The Journal of the Acoustical Society of America,", "year": 2006 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Chris Donahue", "Julian McAuley", "Miller Puckette" ], "title": "Adversarial audio synthesis", "venue": "ICLR 2019,", "year": 2019 }, { "authors": [ "Xun Huang", "Ming Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun Yan Zhu", "Tinghui Zhou", "Alexei A. Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In CVPR, volume 2017-Janua,", "year": 2017 }, { "authors": [ "Justin Johnson", "Alex Alahi", "re re", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic gradient descent", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "R. 
Kubichek" ], "title": "Mel-cepstral distance measure for objective speech quality assessment", "venue": "In Proceedings of IEEE Pacific Rim Conference on Communications Computers and Signal Processing,", "year": 1993 }, { "authors": [ "Guillaume Lample", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Unsupervised machine translation using monolingual corpora only", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yann Lecun", "Sumit Chopra", "Raia Hadsell", "Marc Aurelio Ranzato", "Fu Jie Huang" ], "title": "Energy-based models", "venue": "In Predicting Structured Data,", "year": 2006 }, { "authors": [ "Cheng-Han Lee", "Ziwei Liu", "Lingyun Wu", "Ping Luo" ], "title": "Maskgan: Towards diverse and interactive facial image manipulation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Ming Yu Liu", "Oncel Tuzel" ], "title": "Coupled generative adversarial networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Ming Yu Liu", "Thomas Breuel", "Jan Kautz" ], "title": "Unsupervised image-to-image translation networks", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Michael Mathieu", "Camille Couprie", "Yann LeCun" ], "title": "Deep multi-scale video prediction beyond mean square error", "venue": null, "year": 2015 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "Cgans with projection discriminator", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Niranjan D. Narvekar", "Lina J. Karam" ], "title": "A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection", "venue": "International Workshop on Quality of Multimedia Experience, QoMEx,", "year": 2009 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Santiago Pascual", "Antonio Bonafonte", "Joan Serrà" ], "title": "Segan: Speech enhancement generative adversarial network", "venue": "In INTERSPEECH,", "year": 2017 }, { "authors": [ "Tingting Qiao", "Jing Zhang", "Duanqing Xu", "Dacheng Tao" ], "title": "Mirrorgan: Learning text-to-image generation by redescription", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "In ICML,", "year": 2016 }, { "authors": [ "A.W. Rix", "J.G. Beerends", "M.P. Hollier", "A.P. Hekstra" ], "title": "Perceptual evaluation of speech quality (pesq) - a new method for speech quality assessment of telephone networks and codecs", "venue": "In ICASSP,", "year": 2001 }, { "authors": [ "Mihaela Rosca", "Balaji Lakshminarayanan", "David Warde-Farley", "Shakir Mohamed" ], "title": "Variational approaches for auto-encoding generative adversarial networks. 
2017", "venue": null, "year": 2017 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": null, "year": 2016 }, { "authors": [ "Jost Tobias Springenberg" ], "title": "Unsupervised and semi-supervised learning with categorical generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06390,", "year": 2015 }, { "authors": [ "Cees H. Taal", "Richard C. Hendriks", "Richard Heusdens", "Jesper Jensen" ], "title": "An algorithm for intelligibility prediction of time-frequency weighted noisy speech", "venue": "IEEE Transactions on Audio, Speech and Language Processing,", "year": 2011 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Konstantinos Vougioukas", "Pingchuan Ma", "Stavros Petridis", "Maja Pantic" ], "title": "Video-driven speech reconstruction using generative adversarial networks", "venue": "In INTERSPEECH,", "year": 2019 }, { "authors": [ "Konstantinos Vougioukas", "Stavros Petridis", "Maja Pantic" ], "title": "Realistic speech-driven facial animation with gans", "venue": null, "year": 2020 }, { "authors": [ "Ting Chun Wang", "Ming Yu Liu", "Jun Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Dingdong Yang", "Hong Seunghoon", "Ynseok Jang", "Tianchen Zhao", "Honglak Lee" ], "title": "Diversitysensitive conditional generative adversarial networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ran Yi", "Yong Jin Liu", "Yu Kun Lai", "Paul L. Rosin" ], "title": "Apdrawinggan: Generating artistic portrait drawings from face photos with hierarchical gans", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Junbo Zhao", "Michael Mathieu", "Yann LeCun" ], "title": "Energy-based generative adversarial network", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Jun Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A. Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Jun Yan Zhu", "Richard Zhang", "Deepak Pathak", "Trevor Darrell", "Alexei A. Efros", "Oliver Wang", "Eli Shechtman" ], "title": "Toward multimodal image-to-image translation", "venue": "In NeurIPS,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Domain translation methods exploit the information redundancy often found in data from different domains in order to find a mapping between them. Successful applications of domain translation include image style transfer (Zhu et al., 2017a) and speech-enhancement (Pascual et al., 2017). Furthermore, these systems are increasingly being used to translate across modalities in applications such as speech-driven animation (Chung et al., 2017) and caption-based image generation (Reed et al., 2016). Some of the most popular methods for domain translation are based on conditional Generative Adversarial Networks (cGANs) (Mirza & Osindero, 2014). The conditional information in cGANs is used to drive the generation and to enforce the correspondence between condition and sample. Various alternatives have been proposed for how the condition should be included in the discriminator (Miyato & Koyama, 2018; Reed et al., 2016) but the majority of frameworks provide it as an input, hoping that the sample’s correlation with the condition will play a role in distinguishing between synthesized and genuine samples. The main drawback of this approach is that it does not encourage the use of the conditional information and therefore its contribution can be diminished or even ignored. This may lead to samples that are not semantically consistent with the condition.\nIn this paper, we propose the Dual Inverse Network Optimisation (DINO) framework1 which is based on energy-based GANs (Zhao et al., 2017) and consists of two networks that perform translation in opposite directions as shown in Figure 1. In this framework, one network (Forward network) translates data from the source domain to the target domain while the other (Reverse Network) performs the inverse translation. The Reverse network’s goal is to minimize the reconstruction error for genuine data and to maximize it for generated data. The Forward network aims to produce samples that can be accurately reconstructed back to the source domain by the Reverse Network. Therefore, during training the Forward network is trained as a generator and the Reverse as a discriminator. Since discrimination is based on the ability to recover source domain samples, the Forward network is driven to produce samples that are not only realistic but also preserve the shared semantics. We show that this approach is effective across a broad range of supervised translation problems, capturing the correspondence even when domains are from different modalities (i.e., video-audio). In detail, the contributions of this paper are:\n1Source code: https://github.com/DinoMan/DINO\n• A domain translation framework, based on a novel conditioning mechanism for energybased GANs, where the adversarial loss is based on the prediction of the condition. • An adaptive method for balancing the Forward and Reverse networks, which makes training\nmore robust and improves performance. • A method for simultaneously training two networks to perform translation in inverse direc-\ntions, which requires fewer parameters than other domain translation methods. • The first end-to-end trainable model for video-driven speech reconstruction capable of pro-\nducing intelligible speech without requiring task-specific losses to enforce correct content." 
}, { "heading": "2 RELATED WORK", "text": "Domain translation covers a wide range of problems including image-to-image translation (Isola et al., 2017), caption-based image synthesis (Qiao et al., 2019), and text-to-speech synthesis (Arik et al., 2017). Unsupervised translation methods attempt to find a relationship between domains using unpaired training data. However, finding correspondence without supervision is an ill-posed problem which is why these methods often impose additional constraints on their networks or objectives. The majority of unsupervised methods are applied to image-to-image translation problems. The CoGAN model (Liu & Tuzel, 2016) imposes a weight-sharing constraint on specific layers of two GANs, which are trained to produce samples from different domains. The motivation is that sharing weights in layers associated with high-level features should help preserve the overall structure of the images. This approach is extended in the UNIT framework (Liu et al., 2017), where the generative networks are Variational Autoencoders (VAEs) with a shared latent space. The weight-sharing used in the CoGAN and UNIT frameworks restricts them to problems where both domains are of the same modality. A more generic method of achieving domain-correspondence is presented in the CycleGAN model proposed by Zhu et al. (2017a). The CycleGAN objective includes a cycle-consistency loss to ensure that image translation between two domains is invertible. Recently, Chen et al. (2020) showed that reusing part of the discriminators in CycleGAN as encoders for the generators achieves parameter reduction as well as better results. Although it is possible to apply the cycle consistency loss for cross-modal translation it has not been widely used in such scenarios.\nUnlike unsupervised methods, supervised approaches rely on having a one-to-one correspondence between the data from different domains. The Pix2Pix model (Isola et al., 2017) uses cGANs to perform image-to-image translation and has inspired many subsequent works (Zhu et al., 2017a; Wang et al., 2018; Park et al., 2019). Compared to unsupervised methods, supervised approaches have had more success in translating across different modalities. Notable applications include speechdriven facial animation (Vougioukas et al., 2020) and text-to-image synthesis (Reed et al., 2016; Qiao et al., 2019). It is important to note that the adversarial loss in cGANs alone is often not capable of establishing domain correspondence, which is why these approaches also rely on additional reconstruction or perceptual losses (Johnson et al., 2016) in order to accurately capture semantics.\nIn many scenarios, the relationship between domains is not bijective (e.g. one-to-many mapping) hence it is desirable for translation systems to produce a diverse set of outputs for a given input. Achieving this diversity is a common issue with GAN-based translation systems (Isola et al., 2017; Liu et al., 2017) since they often suffer from mode collapse. The Pix2Pix model (Isola et al., 2017) proposes using dropout in both training and inference stages as a solution to this problem. Another successful approach is to apply the diversity regularisation presented in Yang et al. (2019). Furthermore, many works (Zhu et al., 2017b; Huang et al., 2018; Chang et al., 2018) attempt to solve this issue by enforcing a bijective mapping between the latent space and the target image domain. 
Finally, adding a reconstruction loss to the objective also discourages mode collapse (Rosca et al., 2017), by requiring that the entire support of the distribution of training images is covered." }, { "heading": "2.1 CONDITIONAL GANS", "text": "The most common method for conditioning GANs is proposed by Mirza & Osindero (2014) and feeds the conditional information as input to both the generator and the discriminator. Using the condition in the discriminator assumes that the correlation of samples with the condition will be considered when distinguishing between real and fake samples. However, feeding the condition to the discriminator does not guarantee that the correspondence will be captured and could even lead to the condition being ignored by the network. This issue is shared across all methods which use the condition as input to the discriminator (Miyato & Koyama, 2018; Reed et al., 2016). Furthermore, it explains why these models perform well when there is structural similarity between domains (e.g. image-to-image translation) but struggle to maintain semantics in cases where domains are significantly different, such as cross-modal applications (e.g. video-to-speech).\nAnother method, presented in Park et al. (2019), proposes generator conditioning through spatially-adaptive normalisation layers (SPADE). This approach has been used to produce state-of-the-art results in image generation. It should be noted that this approach requires that source domain data be one-hot encoded semantic segmentation maps and is therefore limited to specific image-translation problems (i.e. segmentation maps to texture image translations). More importantly, conditioning of the discriminator is still done by feeding the condition as an input and hence will have similar drawbacks as other cGAN-based methods with regards to semantic preservation.\nIn some cases it is possible to guide the discriminator to learn specific semantics by performing a self-supervised task. An example of this is the discriminator proposed in Vougioukas et al. (2020), which enforces audio-visual synchrony in facial animation by detecting in-sync and out-of-sync pairs of video and audio. However, this adversarial loss alone cannot fully enforce audio-visual synchronization, which is why additional reconstruction losses are required. Finally, it is important to note that finding a self-supervised task capable of enforcing the desired semantics is not always possible." }, { "heading": "2.2 ENERGY-BASED GANS", "text": "Energy-based GANs (Zhao et al., 2017; Berthelot et al., 2017) use a discriminator $D$ which is an autoencoder. The generator $G$ synthesizes a sample $G(z)$ from a noise sample $z \in \mathcal{Z}$. The discriminator output is fed to a loss function $\mathcal{L}$ in order to form an energy function $\mathcal{L}_D(\cdot) = \mathcal{L}(D(\cdot))$. The objective of the discriminator is to minimize the energy assigned to real data $x \in \mathcal{X}$ and maximize the energy of generated data. The generator has the opposite objective, leading to the following minimax game:\n$$\min_D \max_G V(D,G) = \mathcal{L}_D(x) - \mathcal{L}_D(G(z)) \quad (1)$$\nThe EBGAN model proposed by Zhao et al. (2017) uses the mean square error (MSE) to measure the reconstruction and a margin loss to limit the penalization for generated samples. The resulting objective thus becomes:\n$$\min_D \max_G V(D,G) = \|D(x) - x\| + \max(0,\, m - \|D(G(z)) - G(z)\|) \quad (2)$$\nThe margin $m$ corresponds to the maximum energy that should be assigned to a synthesized sample. 
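To ground Equation 2, here is a minimal PyTorch-style sketch of the two losses, assuming an autoencoding discriminator `D` and generator `G`. The use of `mse_loss` follows the MSE formulation above; this is an illustrative sketch rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def ebgan_losses(D, G, x, z, margin):
    # The energy of a sample is the discriminator's reconstruction error for it.
    fake = G(z)
    energy_real = F.mse_loss(D(x), x)                          # L_D(x)
    energy_fake = F.mse_loss(D(fake.detach()), fake.detach())  # L_D(G(z))
    d_loss = energy_real + torch.clamp(margin - energy_fake, min=0)
    g_loss = F.mse_loss(D(fake), fake)  # generator lowers the energy of its samples
    return d_loss, g_loss
```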
Performance depends on the magnitude of the margin, with large values causing instability and small values resulting in mode collapse. For this reason, some approaches (Wang et al., 2017; Mathieu et al., 2015) recommend decaying the margin during training. An alternative approach is proposed by Berthelot et al. (2017), which introduces an equilibrium concept to balance the generator and discriminator and measure training convergence. Energy-based GANs have been successful in generating high-quality images, although their use for conditional generation is limited." }, { "heading": "3 METHOD", "text": "The encoder-decoder structure used in the discriminator of an energy-based GAN gives it the flexibility to perform various regression tasks. The choice of task determines how energy is distributed and can help the network focus on specific characteristics. We propose a conditional version of EBGAN where the generator (Forward network) and discriminator (Reverse network) perform translations in opposite directions. The Reverse network is trained to minimize the reconstruction error for real samples (low energy) and maximize the error for generated samples (high energy). The Forward network aims to produce samples that will be assigned a low energy by the Reverse network. Generated samples that do not preserve the semantics cannot be accurately reconstructed back to the source domain and are thus penalized. Given a condition $x \in \mathcal{X}$, its corresponding target $y \in \mathcal{Y}$, and networks $F: \mathcal{X} \to \mathcal{Y}$ and $R: \mathcal{Y} \to \mathcal{X}$, the objective of the DINO framework becomes:\n$$\min_R \max_F V(R,F) = \mathcal{L}(R(y), x) - \mathcal{L}(R(F(x)), x) \quad (3)$$\nwhere $\mathcal{L}(\cdot,\cdot)$ is a loss measuring the reconstruction error between two samples. Multiple choices exist for the loss function and their effects are explained in Lecun et al. (2006). We propose using the MSE to measure reconstruction error and a margin loss similar to that used in EBGAN. However, as shown in Zhao et al. (2017), this method is sensitive to the value of the margin parameter $m$, which must be gradually decayed to avoid instability. We propose using an adaptive method inspired by BEGAN (Berthelot et al., 2017), which is based on maintaining a fixed ratio $\gamma \in [0, 1)$ between the reconstruction errors of positive and negative samples:\n$$\gamma = \frac{\mathcal{L}(R(y), x)}{\mathcal{L}(R(F(x)), x)} \quad (4)$$\nBalancing is achieved using a proportional controller with gain $\lambda$. A typical value for the gain is $\lambda = 0.001$. The output of the controller $k_t \in [0, 1]$ determines the amount of emphasis that the Reverse network places on the reconstruction error of generated samples. The balance determines an upper bound for the energy of fake samples, which is a fixed multiple of the energy assigned to real samples. When the generator is producing samples with a low energy, they are pushed to this limit faster than when the generator is already producing high-energy samples. Since the ratio of reconstruction errors is kept fixed, this limit will decay as the reconstruction error for real samples improves over time. This achieves a similar result to a decaying margin loss without the necessity for a decay schedule. The output of the controller as well as the reconstruction error for real and fake samples during training is shown in Figure 2. Although this approach is inspired by BEGAN, there are some key differences which prevent BEGAN from working with the predictive conditioning proposed in this paper. These are discussed in detail in Section A.4 of the appendix.
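Anticipating the update rule given in Equation 5 below, the proportional controller can be sketched in a few lines; `loss_real` and `loss_fake` are the scalar reconstruction errors of real and generated samples, and the clamp reflects the stated range of k_t. An illustrative sketch, not the authors' code:

```python
def update_balance(k, loss_real, loss_fake, gamma, gain=0.001):
    # Proportional control: drive loss_real towards gamma * loss_fake, so the
    # energy of fake samples stays a fixed multiple of the energy of real ones.
    k = k + gain * (loss_real - gamma * loss_fake)
    return min(max(k, 0.0), 1.0)  # k_t is kept in [0, 1]
```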
Although this approach is inspired by BEGAN there are some key differences which prevent the BEGAN from working with the predictive conditioning proposed in this paper. These are discussed in detail in Section A.4 of the appendix.\nIn practice we find it advantageous to use the margin loss in combination with adaptive balancing. In this case the margin parameter serves as a hard cutoff for the energy of generated samples and\nhelps stabilize the system at the beginning of training. As training progresses and reconstruction of real samples improves training relies more on the soft limit enforced by the energy balancing mechanism. In this case we can set γ = 0 to fall back to a fixed margin approach. The training objective is shown in Equation 5. When dealing with one-to-many scenarios we find that adding a reconstruction loss to the generator’s objective can help improve sample diversity.\nLR = ‖R(y)− x‖+ kt ·max(0,m− ‖R(F (x))− x‖) LF = ‖R(F (x))− x‖ kt+1 = kt + λ · [‖R(y)− x‖ − γ · ‖R(F (x))− x‖]\n(5)" }, { "heading": "3.1 BIDIRECTIONAL TRANSLATION", "text": "It is evident from Equation 5 that the two networks have different goals and that only the Forward network will produce realistic translations since the Reverse network is trained only using an MSE loss. This prohibits its use for domain translation and limits it to facilitating the training of the Forward network. For the DINO framework, since the Forward and Reverse network have the same structure we can swap the roles of the networks and retrain to obtain realistic translation in the opposite direction. However, it is also possible to train both networks simultaneously by combining the objectives for both roles (i.e. discriminator and generator). This results in the following zero-sum two player game:\nmin R max F V (R,F ) = L (R(y), x)−L (R(F (x)), x) + L (F (R(y)), y)−L (F (x), y) (6)\nIn this game both players have the same goal which is to minimize the reconstruction error for real samples and to maximize it for fake samples while also ensuring that their samples are assigned a low energy by the other player. Each player therefore behaves both as a generator and as a discriminator. However, in practice we find that is difficult for a network to achieve the objectives for both roles, causing instability during training. The work proposed by Chen et al. (2020), where discriminators and generators share encoders, comes to a similar conclusion and proposes decoupling the training for different parts of the networks. This is not possible in our framework since the discriminator for one task is the generator for the other. To solve this problem we propose branching the decoders of the networks to create two heads which are exclusively used for either discrimination or generation. We find empirically that the best performance in our image-to-image experiments is achieved when branching right after the third layer of the decoder. Additionally, the network encoders are frozen during the generation stage. The bidirectional training paradigm is illustrated in Figure 3.\nWhen training network R as a discriminator we use the stream that passes through the discriminative head Rdisc and when training as a generator we use the stream that uses the generative head\nRgen. The same applies for player F and which uses streams Fdisc and Fgen for discrimination and generation, respectively. To maintain balance during training we use a different controller for each player which results the objective shown in Equation 7. 
The first two terms in each players objective represent the player’s goal as a discriminator and the last term reflects its goal as a generator. LR = L (Rdisc(y), x)− kt ·L (Rdisc(Fgen(x))− x)︸ ︷︷ ︸ discriminator objective +L (Fdisc(Rgen(y)), y)︸ ︷︷ ︸ generator objective LF = L (Fdisc(x), y)− µt ·L (Fdisc(Rgen(y)), y)︸ ︷︷ ︸ discriminator objective +L (Rdisc(Fgen(x)), x)︸ ︷︷ ︸ generator objective\nkt+1 = kt + λR · [L (Rdisc(y), x)− γD ·L (Rdisc(Fgen(x)), x)] µt+1 = µt + λF · [L (Fdisc(x), y)− γG ·L (Fdisc(Rgen(y)), y)]\n(7)" }, { "heading": "3.2 COMPARISON WITH OTHER METHODS", "text": "As mentioned in section 2.1 the cGAN conditioning mechanism, used in most supervised translation systems, struggles to preserve the shared semantics in cases where there is no structural similarity between domains. The DINO framework attempts to overcome this limitation by using a different paradigm, where the condition is predicted by the discriminator instead of being fed as an additional input, forcing the generator to maintain the common semantics. Our approach is inspired by several semi-supervised training techniques for GANs (Salimans et al., 2016; Odena et al., 2017; Springenberg, 2015), which have showed that specializing the discriminator by performing a classification task adds structure to its latent space and improves the quality of generated samples. However, these approaches are not designed for conditional generation and use classification only as a proxy task. This differs from our approach where discrimination is driven directly by the prediction of condition.\nAnother advantage of our system stems from its use of an encoder-decoder structure for the Reverse network. This provides flexibility since the Reverse network can be easily adapted to perform a variety of different translation tasks. In contrast, the multi-stream discriminators used in crossmodal cGANs require fusing representations from different streams. The fusion method as well as the stage at which embeddings are fused is an important design decision that must be carefully chosen depending on the task since it can greatly affect the performance of these models.\nThe objective of the generator in Equation 5 resembles the cycle-consistency loss used in many unsupervised methods such as CycleGAN (Zhu et al., 2017a) and NICE-GAN (Chen et al., 2020). This also bears resemblance to the back-translation used in bidirectional neural machine translation methods (Artetxe et al., 2018; Lample et al., 2018). However, it is important to note that the cycleconsistency loss used in these approaches is not an adversarial loss since it is optimized with respect to both networks’ parameters. The most similar work to ours is MirrorGAN (Qiao et al., 2019), which improves the generation of images through re-description. This model however uses a pretrained network for re-description in addition to an adversarial loss. Compared to all aforementioned approaches the DINO framework is the only one in which the adversarial loss alone can both achieve sample realism while enforcing correspondence. Finally, since our bidirectional framework uses the generators for discrimination it requires far fewer parameters than these approaches." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the DINO framework on image-to-image translation since this the most typical application for domain-translation systems. Additionally, we tackle the problem of video-driven speech reconstruction, which involves synthesising intelligible speech from silent video. 
{ "heading": "4 EXPERIMENTS", "text": "We evaluate the DINO framework on image-to-image translation, since this is the most typical application for domain-translation systems. Additionally, we tackle the problem of video-driven speech reconstruction, which involves synthesising intelligible speech from silent video. In all of the experiments, focus is placed not only on evaluating the quality of the generated samples but also on verifying that the semantics are preserved after translation." }, { "heading": "4.1 IMAGE-TO-IMAGE TRANSLATION", "text": "The majority of modern domain translation methods have been applied to image-to-image translation problems, since it is common for both domains to share high-level structure and it is therefore easier to capture their correspondence. We evaluate the DINO framework on the CelebAMask-HQ (Lee et al., 2020) and the Cityscapes (Cordts et al., 2016) datasets, using their recommended training-test splits.\nWhen judging the performance of image-to-image translation systems one must consider multiple factors, including the perceptual quality, the semantic consistency, and the diversity of the generated images. We therefore rely on a combination of full-reference reconstruction metrics and perceptual metrics for image assessment.\nReconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) measure the deviation of generated images from the ground truth. Although these metrics are good at measuring image distortion, they are usually poor indicators of realism and they penalize diversity. For this reason, we also measure the perceptual quality of the images by using the Fréchet Inception Distance (FID), which compares the statistics of the embeddings of real and fake images in order to measure quality and diversity. Furthermore, we use the cumulative probability blur detection (CPBD) metric (Narvekar & Karam, 2009) to assess image sharpness. Finally, we use pre-trained semantic segmentation models to verify that image semantics are accurately captured in the images. For the CelebAMask-HQ dataset we use the segmentation model from Lee et al. (2020) and for the Cityscapes dataset we use a DeepLabv3+ model (Chen et al., 2018). We report the pixel accuracy as well as the average intersection over union (mIoU).\nWe compare our method to other supervised image-to-image translation models, such as Pix2Pix and BicycleGAN. Since DINO is a generic translation method, comparing it to translation methods that are tailored to a specific type of translation (Yi et al., 2019) is an unfair comparison, since these methods make use of additional information or use task-specific losses. Nevertheless, we present the results for SPADE (Park et al., 2019) on the Cityscapes dataset in order to see how well our approach performs compared to state-of-the-art task-specific translation methods. Since the pretrained SPADE model generates images at a resolution of 512×256, we resize images to 256×256 for a fair comparison.\nWhen training the DINO model we resize images to 256×256 and use networks with a U-Net architecture similar to the Pix2Pix model to ensure a fair comparison. The architecture of the networks used in these experiments can be found in Section A.1.1 of the appendix. Additionally, like Pix2Pix, we use an additional L1 loss to train the Forward network (generator), which helps improve image diversity. The balance parameter γ is set to 0.8 for image-to-image translation experiments. We train using the Adam optimizer (Kingma & Ba, 2015), with a learning rate of 0.0002 and momentum parameters β1 = 0.5, β2 = 0.999. The quantitative evaluation on the CelebAMask-HQ and Cityscapes datasets is shown in Tables 1 and 2. 
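The semantic-consistency numbers reported in Tables 1 and 2 can be computed along these lines. A simplified sketch, where `seg_model` stands for the pretrained segmentation networks mentioned above and the inputs are batches of PyTorch tensors; not the authors' evaluation code.

```python
import torch

@torch.no_grad()
def segmentation_consistency(seg_model, images, target_maps, num_classes):
    # Pixel accuracy and mean IoU of a pretrained segmentation network's
    # predictions on generated images, scored against the conditioning maps.
    preds = seg_model(images).argmax(dim=1)           # (B, H, W) class indices
    pixel_acc = (preds == target_maps).float().mean().item()
    ious = []
    for c in range(num_classes):
        inter = ((preds == c) & (target_maps == c)).sum().item()
        union = ((preds == c) | (target_maps == c)).sum().item()
        if union > 0:                                 # skip classes absent from both
            ious.append(inter / union)
    return pixel_acc, sum(ious) / len(ious)           # (accuracy, mIoU)
```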
Qualitative results are presented in Section A.5.1 of the appendix.\nThe results in Tables 1 and 2 show that our method outperforms the Pix2Pix and BicycleGAN models in terms of both perceptual quality and reconstruction error. More importantly, our approach is better at preserving the image semantics, as indicated by the higher pixel accuracy and mIoU. We notice that for the CelebAMask-HQ dataset, the segmentation accuracy is better for generated images than for real images. This phenomenon is due to some inconsistent labelling and is explained in Section A.2 of the appendix. We also note that the bidirectional DINO framework can simultaneously train two networks to perform translation in both directions without sacrificing quality and with fewer parameters. Finally, an ablation study for our model is performed in Section A.3 of the appendix.\nWhen comparing our results to those achieved by the SPADE network on the Cityscapes dataset, we notice that our model performs similarly, achieving slightly better performance on reconstruction metrics (PSNR, SSIM) and slightly worse performance for preserving the image semantics. This is expected since the SPADE model has been specifically designed for translation from segmentation maps to images. Furthermore, the networks used in these experiments for the DINO framework are far simpler (37 million parameters in the generator compared to 97 million). More importantly, unlike SPADE, our network can be applied to any task and perform the translation in both directions." }, { "heading": "4.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION", "text": "Many problems require finding a mapping between signals from different modalities (e.g. speech-driven facial animation, caption-based image generation). This is far more challenging than image-to-image translation since signals from different modalities do not have structural similarities, making it difficult to capture their correspondence. We evaluate the performance of our method on video-driven speech reconstruction, which involves synthesising speech from a silent video. This is a notoriously difficult problem due to the ambiguity attributed to the existence of homophenous words. Another reason for choosing this problem is that common reconstruction losses (e.g. L1, MSE), which are typically used in image-to-image translation to enforce low-frequency correctness (Isola et al., 2017), are not helpful for the generation of raw waveforms. This means that methods must rely only on the conditional adversarial loss to enforce semantic consistency.\nWe show that the DINO framework can synthesize intelligible speech from silent video using only the adversarial loss described in Equation 5. Adjusting the framework for this task requires using encoders and decoders that can handle audio and video, as shown in Figure 4. The Forward network transforms a sequence of video frames centered around the mouth into its corresponding waveform. The Reverse network is fed a waveform and the initial video frame to produce a video sequence of the speaker. The initial frame is provided to enforce the speaker identity and to ensure that the reconstruction error is based on the facial animation and not on any differences in appearance. This forces the network to focus on capturing the content of the speech and not the speaker's identity.\nExperiments are performed on the GRID dataset (Cooke et al., 2006), which contains short phrases spoken by 33 speakers. 
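To make the interface of the Forward network described above concrete, here is a shape-level PyTorch sketch; the layer sizes and module names are illustrative stand-ins (the actual architecture, with spatio-temporal convolutions and self-attention, is described in Appendix A.1.2), and the 640-samples-per-frame figure is taken from that appendix.

```python
import torch
import torch.nn as nn

SAMPLES_PER_FRAME = 640  # audio samples generated per video-frame embedding

class ForwardNetStub(nn.Module):
    """Video -> waveform stub mirroring the encoder/GRU/decoder pipeline."""
    def __init__(self, frame_dim=3 * 64 * 64):
        super().__init__()
        self.encode = nn.Linear(frame_dim, 128)           # stands in for the Video Encoder
        self.gru = nn.GRU(128, 128, batch_first=True)     # temporally coherent sequence
        self.decode = nn.Linear(128, SAMPLES_PER_FRAME)   # stands in for the Audio Decoder

    def forward(self, video):                             # video: (B, T, 3, 64, 64)
        b, t = video.shape[:2]
        z = self.encode(video.reshape(b, t, -1))          # one embedding per frame
        z, _ = self.gru(z)
        chunks = self.decode(z)                           # (B, T, 640) audio chunks
        return chunks.reshape(b, t * SAMPLES_PER_FRAME)   # concatenated without overlap

video = torch.randn(2, 75, 3, 64, 64)                     # e.g. 3 s of 25 fps video
waveform = ForwardNetStub()(video)                        # -> shape (2, 48000)
```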
There are 1000 phrases per speaker, each containing 6 words from a vocabulary of 51 words. The data is split according to Vougioukas et al. (2019) so that the test set contains unseen speakers and phrases. As baselines for comparison, we use a conditional version of WaveGAN (Donahue et al., 2019) and a CycleGAN framework adapted for video-to-audio translation. Additionally, we compare with the model proposed by Vougioukas et al. (2019), which is designed for video-driven speech reconstruction and uses a perceptual loss to accurately capture the spoken content. An Adam optimiser is used with a learning rate of 0.0001 for the video-to-audio network and a learning rate of 0.001 for the audio-to-video network. The balancing parameter γ is set to 0.5.\nWe evaluate the quality of the synthesized audio based on intelligibility and spoken word accuracy. We measure speech quality using the mean Mel Cepstral Distance (MCD) (Kubichek, 1993), which measures the distance between two signals in the mel-frequency cepstrum and is often used to assess synthesized speech. Furthermore, we use the Short-Time Objective Intelligibility (STOI) (Taal et al., 2011) and Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) metrics, which measure the intelligibility of the synthesized audio. Finally, in order to verify the semantic consistency of the spoken message, we use a pretrained automatic speech recognition (ASR) model and measure the Word Error Rate (WER). The results for the speech-reconstruction task are shown in Table 3.\nThe results in Table 3 show that our method is capable of producing intelligible speech, achieving similar performance to the model proposed by Vougioukas et al. (2019). Furthermore, the large WER for both baselines highlights the limitations of cGANs and CycleGANs for cross-modal translation. Although our approach is better at capturing the content and audio-visual correspondence, we notice that, compared to the other methods, all samples share the same robotic voice. This is expected since discrimination in our approach focuses mostly on audio-visual correspondence and not on capturing the speaker identity. Examples of synthesized waveforms and their spectrograms are shown in Section A.6 of the appendix, and samples are provided in the supplementary material.\nEthical considerations: We have tested the DINO model on this task as an academic investigation of its ability to capture common semantics even across modalities. Video-driven speech reconstruction has many practical applications, especially in digital communications. It enables videoconferencing in noisy or silent environments and can improve hearing-assistive devices. However, this technology can potentially be used in surveillance systems, which raises privacy concerns. Therefore, although we believe that this topic is worth exploring, future researchers should be careful when developing features that will enable this technology to be used for surveillance purposes." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we have presented a domain translation framework based on predictive conditioning. Unlike other conditional approaches, predicting the condition forces the discriminator to learn the relationship between domains and ensures that the generated samples preserve cross-domain semantics. The results on image-to-image translation verify that our approach is capable of producing sharp and realistic images while strongly enforcing semantic correspondence between domains. 
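For reference, the mean Mel Cepstral Distance reported in Table 3 above is commonly computed as in the NumPy sketch below; it uses the standard MCD formula and assumes the two signals have already been converted to time-aligned mel-cepstral frames (the interface is ours).

```python
import numpy as np

def mean_mcd(mc_ref, mc_syn):
    """Mean MCD in dB between aligned mel-cepstra of shape (frames, coeffs).
    The 0th (energy) coefficient is excluded, following common practice."""
    diff = mc_ref[:, 1:] - mc_syn[:, 1:]
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))

# Example with dummy 13-coefficient cepstra over 100 frames:
ref, syn = np.random.randn(100, 13), np.random.randn(100, 13)
print(mean_mcd(ref, syn))
```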
Furthermore, results on video-driven speech reconstruction show that our method is applicable to a wide range of problems and that correspondence can be maintained even when translating across different modalities. Finally, we present a method for bidirectional translation and show that it achieves the same performance while reducing the number of training parameters compared to other models." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NETWORK ARCHITECTURE", "text": "A.1.1 IMAGE-TO-IMAGE TRANSLATION\nThis section describes the network architecture used for the image-to-image translation experiments in Section 4.1. The two networks used in the DINO framework are identical and both use a U-Net encoder-decoder architecture similar to that used in Pix2Pix (Isola et al., 2017). The encoder is a 7-layer Convolutional Neural Network (CNN) made of strided 2D convolutions. The decoder is a 12-layer CNN made of 2D convolutions and up-sampling layers. We use Instance Normalization (Ulyanov et al., 2016), which has been shown to work well in style transfer applications. The network is shown in detail in Figure 5.\nA.1.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION\nThis section describes the architecture of the networks used for video-driven speech reconstruction in the experiments of Section 4.2. In this scenario, the Forward network synthesizes speech and the Reverse network performs speech-driven facial animation. The Forward network is made up of a Video Encoder, a single-layer GRU and an Audio Decoder. The video sequence is fed to the Video Encoder, which uses spatio-temporal convolutions to produce an embedding per video frame. The embeddings are fed to a single-layer GRU to create a coherent sequence of representations, which is then passed to an Audio Decoder network that produces 640 audio samples per embedding. Concatenating these chunks of samples without overlap forms a waveform. Both the Video Encoder and Audio Decoder are fully convolutional networks, with the Audio Decoder using an additional self-attention layer (Zhang et al., 2019) before the last layer, as shown in Figure 6.\nThe Reverse network is made up of two encoders responsible for capturing the speaker identity and the content. The content stream uses a sliding-window approach to create a sequence of embeddings for the audio using an Audio Encoder and a 2-layer GRU. The identity stream consists of an Identity Encoder which captures the identity of the person and enforces it on the generated video. The two embeddings are concatenated and fed to a Frame Decoder which produces a video sequence. Skip connections between the Identity Encoder and Frame Decoder ensure that the face is accurately reconstructed. A detailed illustration of the Reverse network is shown in Figure 7." }, { "heading": "A.2 CELEBA SEGMENTATION", "text": "In Table 1, we notice that the segmentation evaluation on generated images surpasses that of real images. The reason for this is some inconsistencies in the labelled images. Examples in Figure 8 show that in these cases some objects are labelled despite being occluded in the real image. However, these objects will appear in the generated images since the labelled images are used to drive their generation. These small inconsistencies in the data annotations explain why segmentation is slightly better for synthesized samples." 
}, { "heading": "A.3 ABLATION STUDY", "text": "In order to measure the effect of the reconstruction loss and adaptive balancing used in the DINO framework we perform an ablation study on the CelebAMask-HQ dataset. The results of the study are shown in Table 4. As expected the addition of the L1 loss results in a higher PSNR and SSIM since these metrics depend on the reconstruction error, which is directly optimised by this loss. More importantly, we note that the addition of the L1 loss improves the FID score since it prevents mode collapse. This is evident when observing the examples shown in Figure 9, which shows that modedropping that occurs in both DINO and Pix2Pix when this loss is omitted. Finally, we notice that the adaptive balancing used in DINO allows for more stable training and improves performance, which is reflected across all metrics." }, { "heading": "A.4 ADAPTIVE BALANCING", "text": "As mentioned in Section 3 DINO uses a controller to ensure that the energy of generated samples is always a fixed multiple of the energy of real samples. Although this approach is similar to that used by BEGAN (Berthelot et al., 2017) there is a key difference. BEGANs perform autoencoding and therefore assume that the discriminator’s reconstruction error will be larger for real samples since they have more details which are harder to reconstruct. For this reason, the controller used by BEGAN tries to maintain a balance throughout training where L(xreal) > L(xfake). In the DINO framework the discriminator performs domain translation therefore it is natural to assume that real samples should produce better reconstructions since they contain useful information regarding the semantics. For this reason we choose to maintain a balance where L(xfake) > L(xreal). This is reflected in the controller update as well as the balance parameter of DINO which is the inverse of that used in BEGANs.\nAs we mentioned the core difference with the adaptive balancing in BEGAN is that DINO maintains a balance where L(xfake) > L(xreal) whereas BEGAN maintains a balance for which L(xreal) > L(xfake). This makes BEGAN unsuitable for use with the predictive conditioning proposed in this paper since it allows the generator to “hide” information about the condition in synthesized samples. The generator thus tricks the discriminator into producing a much better reconstruction of the condition for fake samples without the need for them to be realistic. Since the controller prevents the discriminator from pushing fake samples to a higher energy than real samples (i.e. the controller output is zero when fake samples have higher energy) this behaviour is not prohibited by BEGANs throughout training.\nThe method used in DINO however does not have this problem since it encourages the discriminator to assign higher energies to unrealistic samples thus penalizing them and preventing the generator from “cheating” in the same way as BEGAN. To show this effect we train a conditional BEGAN\nand the DINO framework to perform translation from photo to sketch using the APDrawings dataset from (Yi et al., 2019). Figure 10 shows how the balancing used in DINO allows the network to penalize unrealistic images by encouraging the discriminator to assign to them energies larger than the real samples. We note that this problem occurs only in cases where the source domain is more informative than the target domain (i.e. photo → sketch). This does not occur in cases where the source domain in more generic than the target domain (i.e. 
segmentation map → photo)." }, { "heading": "A.5 QUALITATIVE RESULTS", "text": "A.5.1 IMAGE-TO-IMAGE TRANSLATION" }, { "heading": "CelebAMask-HQ", "text": "Examples of image-to-image translation from segmentation maps to photos for the CelebAMask-HQ dataset are shown in Figure 11. We note that our approach is able to maintain semantics and produce realistic results even in cases with extreme head poses and facial expressions." }, { "heading": "Cityscapes", "text": "Examples of image-to-image translation from segmentation maps to photos for the Cityscapes dataset are shown in Figure 12.\nA.6 VIDEO-TO-SPEECH TRANSLATION\nThis section presents examples of waveforms produced by the methods compared in Table 3. In addition to the waveforms, we also present their corresponding spectrograms. The waveforms and spectrograms are shown in Figure 13. It is evident from the shape of the waveform that our method more accurately captures voiced sections in the audio. Furthermore, the spectrogram of our method closely resembles that of the ground truth, although some high-frequency components are not captured. The performance is similar to the Perceptual GAN proposed by Vougioukas et al. (2019), although our method relies on only an adversarial loss." } ]
2021
DINO: A CONDITIONAL ENERGY-BASED GAN FOR DOMAIN TRANSLATION
SP:88c3a4a7498801de3d7442253a4aeae5b83a3eb5
[ "The paper empirically studies the regularization of BN. It proposes the point that the BN's effect is connected with the regularizing against explosive growth in the final layer. To motivate this point, it takes a single-layer case and shows the BN approximately penalizes on the norm of the feature embedding thereon. Two regularizations are proposed according to the point and are used to justify it. " ]
Batch normalization (BatchNorm) has become a standard technique in deep learning. Its popularity is in no small part due to its often positive effect on generalization. Despite this success, the regularization effect of the technique is still poorly understood. This study aims to decompose BatchNorm into separate mechanisms that are much simpler. We identify three effects of BatchNorm and assess their impact directly with ablations and interventions. Our experiments show that preventing explosive growth at the final layer at initialization and during training can recover a large part of BatchNorm’s generalization boost. This regularization mechanism can lift accuracy by 2.9% for Resnet-50 on Imagenet without BatchNorm. We show it is linked to other methods like Dropout and recent initializations like Fixup. Surprisingly, this simple mechanism matches the improvement of 0.9% of the more complex Dropout regularization for the state-of-the-art Efficientnet-B8 model on Imagenet. This demonstrates the underrated effectiveness of simple regularizations and sheds light on directions to further improve generalization for deep nets.
[ { "affiliations": [], "name": "Yann N. Dauphin" }, { "affiliations": [], "name": "Ekin D. Cubuk" } ]
[ { "authors": [ "D. Balduzzi", "M. Frean", "L. Leary", "J. Lewis", "Ma", "K.W.-D", "B. McWilliams" ], "title": "The shattered gradients problem: If resnets are the answer, then what is the question", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "N. Bjorck", "C.P. Gomes", "B. Selman", "K.Q. Weinberger" ], "title": "Understanding batch normalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chen", "L.-C", "G. Papandreou", "I. Kokkinos", "K. Murphy", "A.L. Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "J. Collins", "J. Balle", "J. Shlens" ], "title": "Accelerating training of deep neural networks with a standardization loss", "venue": "arXiv preprint arXiv:1903.00925", "year": 2019 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "D. Mane", "V. Vasudevan", "Q.V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "J. Shlens", "Q.V. Le" ], "title": "Randaugment: Practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719", "venue": null, "year": 2019 }, { "authors": [ "G. Desjardins", "K. Simonyan", "R Pascanu" ], "title": "Natural neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "X. Du", "Lin", "T.-Y", "P. Jin", "G. Ghiasi", "M. Tan", "Y. Cui", "Q.V. Le", "X. Song" ], "title": "Spinenet: Learning scale-permuted backbone for recognition and localization", "venue": "arXiv preprint arXiv:1912.05027", "year": 2019 }, { "authors": [ "S. Elfwing", "E. Uchibe", "K. Doya" ], "title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "A. Galloway", "A. Golubeva", "T. Tanay", "M. Moussa", "G.W. Taylor" ], "title": "Batch normalization is a cause of adversarial vulnerability", "venue": "arXiv preprint arXiv:1905.02161", "year": 2019 }, { "authors": [ "J. Gehring", "M. Auli", "D. Grangier", "D. Yarats", "Y.N. Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "arXiv preprint arXiv:1705.03122", "year": 2017 }, { "authors": [ "G. Ghiasi", "Lin", "T.-Y", "Q.V. Le" ], "title": "Dropblock: A regularization method for convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "P. Goyal", "P. Dollár", "R. Girshick", "P. Noordhuis", "L. Wesolowski", "A. Kyrola", "A. Tulloch", "Y. Jia", "K. He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677", "venue": null, "year": 2017 }, { "authors": [ "K. He", "R. Girshick", "P. Dollár" ], "title": "Rethinking imagenet pre-training", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "D. Hendrycks", "T. 
Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261", "year": 2019 }, { "authors": [ "G.E. Hinton", "N. Srivastava", "A. Krizhevsky", "I. Sutskever", "R.R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580", "venue": null, "year": 2012 }, { "authors": [ "G. Huang", "Z. Liu", "L. Van Der Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "G. Huang", "Y. Sun", "Z. Liu", "D. Sedra", "K.Q. Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Y. Huang", "Y. Cheng", "A. Bapna", "O. Firat", "D. Chen", "M. Chen", "H. Lee", "J. Ngiam", "Q.V. Le", "Y Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167", "year": 2015 }, { "authors": [ "P. Ramachandran", "B. Zoph", "Q.V. Le" ], "title": "Searching for activation functions. arXiv preprint arXiv:1710.05941", "venue": null, "year": 2017 }, { "authors": [ "S. Santurkar", "D. Tsipras", "A. Ilyas", "A. Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "C. Szegedy", "S. Ioffe", "V. Vanhoucke", "A.A. Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In Thirty-first AAAI conference on artificial intelligence", "year": 2017 }, { "authors": [ "M. Tan", "Q.V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946", "venue": null, "year": 2019 }, { "authors": [ "S. Wang", "C. Manning" ], "title": "Fast dropout training", "venue": "In international conference on machine learning,", "year": 2013 }, { "authors": [ "A.C. Wilson", "R. Roelofs", "M. Stern", "N. Srebro", "B. Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "C. Xie", "M. Tan", "B. Gong", "J. Wang", "A.L. Yuille", "Q.V. Le" ], "title": "Adversarial examples improve image recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "S. Xie", "R. Girshick", "P. Dollár", "Z. Tu", "K. He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "G. Yang", "J. Pennington", "V. Rao", "J. Sohl-Dickstein", "S.S. Schoenholz" ], "title": "A mean field theory of batch normalization", "venue": "arXiv preprint arXiv:1902.08129", "year": 2019 }, { "authors": [ "S. Zagoruyko", "N. Komodakis" ], "title": "Wide residual networks. arXiv preprint arXiv:1605.07146", "venue": null, "year": 2016 }, { "authors": [ "H. Zhang", "Y.N. Dauphin", "T. 
Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "arXiv preprint arXiv:1901.09321", "year": 2019 }, { "authors": [ "B. Zoph", "V. Vasudevan", "J. Shlens", "Q.V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": null, "text": "Batch normalization (BatchNorm) has become a standard technique in deep learning. Its popularity is in no small part due to its often positive effect on generalization. Despite this success, the regularization effect of the technique is still poorly understood. This study aims to decompose BatchNorm into separate mechanisms that are much simpler. We identify three effects of BatchNorm and assess their impact directly with ablations and interventions. Our experiments show that preventing explosive growth at the final layer at initialization and during training can recover a large part of BatchNorm’s generalization boost. This regularization mechanism can lift accuracy by 2.9% for Resnet-50 on Imagenet without BatchNorm. We show it is linked to other methods like Dropout and recent initializations like Fixup. Surprisingly, this simple mechanism matches the improvement of 0.9% of the more complex Dropout regularization for the state-of-the-art Efficientnet-B8 model on Imagenet. This demonstrates the underrated effectiveness of simple regularizations and sheds light on directions to further improve generalization for deep nets." }, { "heading": "1 INTRODUCTION", "text": "Deep learning has made remarkable progress on a variety of domains in the last decade. While part of this progress relied on training larger models on larger datasets, it also depended crucially on the development of new training methods. A prominent example of such a development is batch normalization (BatchNorm) (Ioffe and Szegedy, 2015), which has become a standard component of training protocols. For example, state-of-the-art models in image recognition (Szegedy et al., 2017; He et al., 2016; Tan and Le, 2019), object detection (He et al., 2019; Du et al., 2019), and image segmentation (Chen et al., 2017) all use BatchNorm. Despite its prominence, the mechanisms behind BatchNorm’s effectiveness are not well-understood (Santurkar et al., 2018; Bjorck et al., 2018; Yang et al., 2019).\nPerhaps at the core of the confusion is that BatchNorm has many effects. It has been correlated to reducing covariate shift (Ioffe and Szegedy, 2015), enabling higher learning rates (Bjorck et al., 2018), improving initialization (Zhang et al., 2019), and improving conditioning (Desjardins et al., 2015), to name a few. These entangled effects make it difficult to properly study the technique. In this work, we deconstruct some of the effects of BatchNorm in search of much simpler components. The advantage of this approach compared to previous work is that it allows going beyond correlating these effects to BatchNorm by evaluating their impact separately. The mechanisms we consider in this work are purposefully simple. These simpler mechanisms are easier to understand and, surprisingly, they are competitive even at the level of the state-of-the-art.\nOur contributions can be summarized as follows:\n1. How does normalization help generalization? We isolate and quantify the benefits of the different effects of BatchNorm using additive penalties and ablations. To our knowledge, we are the first to provide empirical evidence that BatchNorm’s effect of regularizing against explosive growth at initialization and during training can recover a large part of its generalization boost. Replicating this effect with Fixup initialization and the proposed additive penalty improves accuracy by 2.9% for Resnet-50 without BatchNorm.\n2. Links to Fixup and Dropout. 
We draw novel connections between the regularization on the final layer, Dropout regularization, Fixup initialization and BatchNorm.\n3. Simplicity in regularization. The mechanism we identify can be useful as a standalone regularization. It produces a 0.9% improvement on the Efficientnet B8 architecture, matching the more complex Dropout regularization." }, { "heading": "2 DECOMPOSING THE REGULARIZATION EFFECTS OF BATCH NORMALIZATION", "text": "Figure 1: Diagram of a neural network showing where the mechanisms operate.\n\nIn this section, we break BatchNorm into different mechanisms that can be studied separately. BatchNorm (Ioffe and Szegedy, 2015) is a technique that accelerates training by standardizing the intermediate activations of a deep network. It achieves this standardization by using explicit normalization instead of relying on an additive regularization. While BatchNorm has been correlated to many effects, it is still unclear which effect, if any, explains most of its generalization boost.\nThe effects we evaluate are the implicit regularizing effect on the norms at the final layer, and also its primary effect of standardizing the intermediate layers. In order to test these purposefully simple mechanisms, we rely on ablations and additive penalties. The use of additive penalties allows us to disentangle these effects while controlling for the positive effect of BatchNorm on initialization by using the recently proposed Fixup initializer (Zhang et al., 2019)." }, { "heading": "2.1 REGULARIZING AGAINST EXPLOSIVE GROWTH IN THE FINAL LAYER", "text": "First, we characterize the implicit effect of normalization on the final layer. Consider a neural network of the form NN(x) = W·Emb(x) with loss L, where x ∈ R^I is the input of the network, W ∈ R^{K×H} is the final weight matrix in the model, and Emb(x) : R^I → R^H is a feature embedding network with L layers. Let us take the common case where Emb(x) = Swish(γ·BatchNorm(PreEmb(x)) + β), where PreEmb(x) is the output of a residual network, Swish(x) = x·σ(ρx) is the Swish activation (Ramachandran et al., 2017; Elfwing et al., 2018) with scalar parameter ρ (typically denoted β), and γ, β are the BatchNorm parameters. BatchNorm makes weight decay regularization on γ, β approximately equivalent to an additive penalty on the norm of the feature embedding\nL(NN(x)) + λ‖γ‖^2 + λ‖β‖^2 = L(NN(x)) + (λ/4)·E[‖Emb(x)‖^2] + O(|ρ|). (1)\nSee Appendix A for the derivation. This means that the norm of the BatchNorm parameters alone is enough to directly control the norm of the feature embedding. It guarantees that the norm of the feature embedding cannot grow explosively during training as long as these parameters are small. This regularization effect of BatchNorm can occur even without explicit weight decay, due to the tendency of stochastic gradient descent to favor low-norm parameters (Wilson et al., 2017).\nThis equivalency does not hold without BatchNorm, because the activations of the embedding network become an important factor in the norm of the feature embedding (‖γ‖^2 + ‖β‖^2 ≠ E[‖γ·PreEmb(x) + β‖^2] in general). Indeed, prior work (Balduzzi et al., 2017; Gehring et al., 2017; Zhang et al., 2019) has shown that the activations of residual networks without BatchNorm tend to explode exponentially in the depth of the network at initialization. This results in an extremely large embedding norm, even though the parameters are relatively small. 
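The core identity behind Equation 1 can be checked numerically; the sketch below is ours (not the authors' code) and uses the ρ = 0 case, where Swish(x) = x·σ(0) = x/2.

```python
import numpy as np

# Numerical check of Equation 1's core identity: with batch-normalized
# pre-activations and Swish at rho = 0, the expected squared embedding norm
# reduces to (||gamma||^2 + ||beta||^2) / 4.
rng = np.random.default_rng(0)
n, h = 100_000, 16
z = rng.standard_normal((n, h))          # stands in for BatchNorm(PreEmb(x))
gamma, beta = rng.standard_normal(h), rng.standard_normal(h)

emb = 0.5 * (gamma * z + beta)           # Swish with rho = 0 halves its input
lhs = np.mean(np.sum(emb ** 2, axis=1))  # E[||Emb(x)||^2]
rhs = 0.25 * (np.sum(gamma ** 2) + np.sum(beta ** 2))
print(lhs, rhs)                          # agree up to Monte Carlo error
```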
We confirm experimentally in Section 4.3 that networks without BatchNorm have much larger feature embedding norms.\nFeature Embedding L2 (EL2) We propose to assess the effect of this regularization mechanism by isolating it as the following additive penalty\nR_EL2(NN) = (1/H)·E[‖Emb(x)‖^2]. (2)\nAdding this regularization to the loss allows us to test the impact of this mechanism independently on networks with and without BatchNorm. We dub this regularization embedding L2 for short in the following sections, as it is L2 with a metric at the level of the embedding network function, with no additional penalties on the intermediate layers. It is applied right before the classification layer, in other words right after the final average pooling layer for residual networks. We will see in our experiments that this simple regularization can in large part recover the regularization boost of BatchNorm, has links to several known methods, and is practically useful even at the level of the state-of-the-art.\nFunctional L2 (FL2) Regularizing the feature embedding has an implicit effect on the final output norm (E[‖NN(x)‖] ≤ (1/2)‖W‖^2 + (H/2)·R_EL2(NN)). In order to test the impact of this effect, we will also evaluate a direct penalty on the norm of the final output\nR_FL2(NN) = (1/K)·E[‖NN(x)‖^2]. (3)\nWe dub this regularization functional L2 for short in the following sections, as it is L2 with a metric at the level of the full network function. It is applied to the logits of the model for classification. In Section 4.4, we investigate whether it is regularizing this norm or the feature embedding norm that is more closely correlated with better generalization." }, { "heading": "2.2 STANDARDIZING THE INTERMEDIATE ACTIVATIONS OF THE MODEL", "text": "As a baseline mechanism, we also consider an additive penalty that encourages the normalization of every intermediate layer, called the standardization loss (Collins et al., 2019). This is a useful reference to gauge the effectiveness of the regularization on the embedding described in the previous sub-section. It also helps disentangle the side-effects of normalization. The penalty is\nD_KL(P(x) || N(x | 0, I)) = (1/2)·Σ_i (µ_i^2 + σ_i^2 − log σ_i^2 − 1) (4)\nwhere µ_i is the mean and σ_i^2 is the variance of an intermediate layer of the network. These are the same statistics that are computed by BatchNorm, and a penalty is added to the loss for all the intermediate layers. This regularization has been considered by Collins et al. (2019). They found that it accelerated learning but fell significantly short of the generalization boost of batch normalization. However, they did not account for the positive effect of normalization on initialization. We correct for this in our experiments using the recently proposed Fixup initialization (Zhang et al., 2019)." }, { "heading": "3 DRAWING LINKS TO OTHER METHODS", "text": "In this section, we draw connections between the mechanisms considered for BatchNorm and other methods." }, { "heading": "3.1 DROPOUT REGULARIZATION", "text": "Dropout (Hinton et al., 2012) is a regularization that prevents overfitting by randomly omitting subsets of features during training. Despite its early popularity, its use has declined with the rise of batch-normalized convolutional networks. Ghiasi et al. (2018) and Zagoruyko and Komodakis (2016) find that it produces comparatively much smaller improvements when applied to the intermediate layers of such networks. 
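As a concrete reference for the penalties in Equations 2–4 above, here is a minimal PyTorch sketch; the function and variable names are ours, with `emb` the feature embedding (batch, H), `logits` the final output (batch, K), and `act` any intermediate activation.

```python
import torch

def embedding_l2(emb):
    """Eq. 2: R_EL2 = (1/H) * E[||Emb(x)||^2]."""
    return emb.pow(2).sum(dim=1).mean() / emb.shape[1]

def functional_l2(logits):
    """Eq. 3: R_FL2 = (1/K) * E[||NN(x)||^2]."""
    return logits.pow(2).sum(dim=1).mean() / logits.shape[1]

def standardizing_loss(act):
    """Eq. 4: KL divergence of per-unit batch statistics to a standard normal."""
    mu = act.mean(dim=0)
    var = act.var(dim=0, unbiased=False)
    return 0.5 * (mu.pow(2) + var - torch.log(var) - 1.0).sum()

# Usage: the chosen penalty is scaled by a cross-validated coefficient and
# added to the task loss.
emb = torch.randn(128, 512)
extra_loss = 0.1 * embedding_l2(emb)
```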
However, Tan and Le (2019) have shown that state-of-the-art results can be obtained by applying Dropout only at the input of the final layer of the network. Interestingly, we can relate this particular use of Dropout to BatchNorm. Wang and Manning (2013) have shown that Dropout with MSE loss can be isolated as an additive regularization when applied at the last layer\nL_Dropout = (1/N)·Σ_i E[(W·Dropout(Emb(x_i)) − y_i)^2] = (1/N)·Σ_i (W·Emb(x_i) − y_i)^2 + λ·tr(W·diag(E[Emb(x)Emb(x)^T])·W^T)\nwhere the additive Dropout regularization is R_Dropout(NN) = tr(W·diag(E[Emb(x)Emb(x)^T])·W^T) and λ is the strength of the penalty. In this formulation, we can see that Dropout is related to the mechanisms in Section 2.1 as follows\nK·R_FL2(NN) ≈ R_Dropout(NN) ≤ (1/4)‖W‖^4 + (H/4)·R_EL2(NN). (5)\nThe approximate relationship to functional L2 relies on the assumption that the features are relatively decorrelated. To our knowledge, this close but simple relationship between the expected norm of the output and Dropout with MSE loss had not been noted before. The upper bound with embedding L2 gives us a guarantee on Dropout robustness when training with BatchNorm and weight decay. In some sense, this means that networks with BatchNorm already incorporate a regularization effect similar to that conferred by Dropout. This can explain why networks with BatchNorm have tended to benefit relatively less from Dropout.\nThe approximate relationship to functional L2 can be extended to networks with cross-entropy loss by using a Taylor expansion and assuming the network has low confidence. This assumption is likely only correct at initialization. In comparison, a related upper bound can be found for embedding L2 using a Taylor expansion. We will see in Section 4.5 that embedding and functional L2 can match or exceed Dropout for state-of-the-art architectures on Imagenet such as Efficientnet (Tan and Le, 2019)." }, { "heading": "3.2 FIXUP INITIALIZATION", "text": "Fixup (Zhang et al., 2019) is a recent initialization method for the ubiquitous residual networks. Recall that residual networks without normalization have explosive dynamics with conventional initializations (Balduzzi et al., 2017; Gehring et al., 2017; Zhang et al., 2019). In fact, the output scale of the networks grows exponentially with their depth L. This means that the embedding and functional L2 at initialization are\nR_EL2(NN^{t=0}_{NoBN}), R_FL2(NN^{t=0}_{NoBN}) ∈ O(2^L) (6)\nwhere t is the training iteration. These explosive dynamics can be very detrimental to learning (Balduzzi et al., 2017). Fixup aims to stabilize the training of very deep residual networks without normalization. It does so by initializing the network so that the output does not change explosively in the first gradient step, that is ‖NN^{t=1}_{Fixup}(x) − NN^{t=0}_{Fixup}(x)‖ ∈ Θ(η), where η is the learning rate. They show this stabilizes learning and the output scale. We find that Fixup initialization minimizes the initial penalties when compared to conventional initialization\nR_EL2(NN^{t=0}_{Fixup}), R_FL2(NN^{t=0}_{Fixup}) ∈ O(1) (7)\nsince it initializes residual branches to zero. Most importantly, Fixup ensures this does not grow explosively in the first few steps. Zhang et al. (2019) observe that Fixup can improve generalization compared to conventional initialization, but requires strong regularization to be competitive with networks with normalization. 
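The relation K·R_FL2 ≈ R_Dropout in Equation 5 above can be checked numerically; this is our own sketch, using a standard-normal embedding whose features are decorrelated by construction.

```python
import torch

# R_Dropout(NN) = tr(W diag(E[Emb Emb^T]) W^T) from Section 3.1.
W = torch.randn(10, 64) * 0.1              # final layer, K=10 classes, H=64
emb = torch.randn(4096, 64)                # batch of feature embeddings

second_moment = (emb ** 2).mean(dim=0)     # diagonal of E[Emb Emb^T]
r_dropout = (W ** 2 * second_moment).sum() # trace identity, no KxK matmul needed

logits = emb @ W.t()
k_times_fl2 = logits.pow(2).sum(dim=1).mean()  # K * R_FL2 = E[||NN(x)||^2]
print(r_dropout.item(), k_times_fl2.item())    # close when features decorrelate
```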
In Section 4.3, we will see that regularizing the functional L2 not just at initialization but throughout training remarkably improves generalization." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the regularization mechanisms on a set of architectures for two challenging benchmarks." }, { "heading": "4.1 DATASETS", "text": "CIFAR-10 CIFAR-10 has 50k samples for training and 10k samples for test evaluation. We tune hyperparameters using 5k of the training samples as a validation set. We then train a final model using the whole training set of 50,000 samples with the best hyperparameters for 10 different seeds, and report the median test accuracy. Each model is trained on a single GPU. The robustness is reported as the accuracy on the CIFAR-10-C dataset (Hendrycks and Dietterich, 2019), averaged over all corruptions and their severities.\nSVHN SVHN has 73,257 samples for training and 26,032 samples for testing (note that we do not consider the 531,131 "extra" samples). We tune hyperparameters using 3,257 of the training samples
}, { "heading": "4.3 RESULTS: ABLATION OF BATCHNORM AND INTERVENTIONS", "text": "First, we measure the impact of the regularization mechanisms under consideration for networks where we have removed BatchNorm. We also include results for network trained with BatchNorm\nfor reference. On CIFAR-10 (Table 1) and Imagenet (Table 2), we observe a significant gap between the baseline network and the network trained with BatchNorm. The baseline networks benefit from improved initialization with Fixup, which (Zhang et al., 2019) find stabilizes training at higher learning rates and increases accuracy by up to 2.9% on Imagenet without BatchNorm. However, the baseline networks do not have extra regularization during training.\nWe find that the standardization regularization produces modest robustness improvements on CIFAR10 but not for the accuracy, and moderate improvements on Imagenet. We see that this regularization does not replicate BatchNorm’s generalization boost, even with proper initialization. This matches the results of Collins et al. (2019) which considered the use of this regularization without accounting for the initialization effects of BatchNorm. We do not believe this indicates that standardization is not an important effect but rather reflects a failure of this particular approach. There are a number of possible explanations, including that this type of constraint is difficult for the optimizer to balance successfully with the classification loss. Indeed, we see that only comparatively small regularization coefficients were found to produce stable training with high learning rates.\nIn comparison, we note that replicating standardization’s implicit effect on the norms at the final layer produces better results on both CIFAR-10 and Imagenet. While it is enough to match the generalization boost of BatchNorm on Imagenet, this is not the case on CIFAR-10. However, the scale of the improvement suggest that controlling the norm of the output through the norm of the feature embedding is an important benefit of standardization and as a result BatchNorm. In particular, embedding and functional L2 produces an improvement of close to 3% on Imagenet and 0.6% on CIFAR-10. We analyze this results more closely in Section 4.4.\nTo further test the effect of output norm regularization on a small dataset, we experiment with the SVHN core set. The Wide-Resnet-28-10 model with BatchNorm achieves 96.9% test accuracy on this dataset, whereas the same model without BatchNorm (but with Fixup initialization) achieves 95.5%. If we utilize the Functional L2 or the Embedding L2 regularization, we reach 96.5% accuracy. Similar to the models on CIFAR-10, output norm regularization offers a significant improvement in generalization (1.0%) for a model without BatchNorm.\nBjorck et al. (2018) have shown that BatchNorm allows using larger learning rates without divergence. It is thus interesting to take note of how the optimal learning rate changed when we removed BatchNorm. We observe that in our experimental setup on Imagenet the optimal base learning rate with BatchNorm of 0.1 remains optimal with embedding and functional L2. In our experimental setup on CIFAR-10, the optimal learning rates increased from 0.1 with BatchNorm to 0.2 with functional L2.\nWhile BatchNorm is effective at improving accuracy, we find that it does not improve the robustness of the baseline model to distortions applied at test-time. 
We measure the robustness of the models by evaluating their accuracy on the 15 different corruptions described by Hendrycks and Dietterich (2019), which include corruptions from the noise, blur, weather, and digital categories. Models trained with the standardizing loss have 1.1% lower accuracy than the models trained with BatchNorm, but their robustness is 0.6% higher. This suggests that while BatchNorm might be improving robustness due to the standardization of the intermediate layers, it is reducing robustness by a larger amount due to its other effects (negative effects of BatchNorm on adversarial robustness have been discussed by Galloway et al. (2019)). Functional L2 is able to improve both accuracy and robustness (by 1.3% over the baseline model and the BatchNorm model), due to its ability to reduce the output norm." }, { "heading": "4.4 ANALYSIS: HOW MUCH DOES THIS EXPLAIN THE REGULARIZATION OF BATCHNORM?", "text": "In the previous section, we showed that the mechanisms considered are effective. This section investigates their relationship to BatchNorm.\nHow much does BatchNorm regularize the norm at the last layer? Figure 2 (left column) shows learning curves for networks with and without BatchNorm. As the networks without BatchNorm use a suitable initialization, we observe that the networks converge at similar speeds at first. However, we see a generalization gap emerge later in training on both datasets. We see that this gap is associated with larger growth throughout training in the norms of the final output and feature embedding for networks without BatchNorm. This confirms that BatchNorm prevents explosive growth in the representation from cascading to the output during training. This also makes clear that these quantities must be controlled throughout training and not just at initialization. And given that these networks are already trained with a well-tuned parameter L2, it is evidence that parameter L2 is not sufficient to control these quantities without BatchNorm.\nDoes regularizing the norm to the same level as BatchNorm improve results? In Figure 3, we see that the accuracy increases with the coefficient of functional or embedding L2. Both the output and feature embedding norms decrease as expected. We see that at comparable output norms, networks with functional L2 perform much worse than BatchNorm. Indeed, the coefficient of functional L2 must be quite small to allow for such high norms. In contrast, we see that the methods perform comparably when the norm of the feature embedding is similar. These results indicate that the coefficient must be high enough to reduce the norm of the feature embedding as much as BatchNorm does in order to obtain good results. This is consistent with the interpretation that the methods prevent explosions in the residual representation of the network from cascading to the output. Thus, controlling the norm of the feature embedding appears more tightly related to generalization than the norm of the final output. Like other regularizations such as classical L2, a single regularization does not tell the whole story, and it matters how these quantities are minimized.\nCan we disentangle BatchNorm's effect on initialization from its effect on the norm at the last layer? Can we train networks with only a single BatchNorm layer at the feature embedding level? Next, we evaluate Resnet-50 with only a single BatchNorm layer at the level of the final feature embedding on Imagenet (added before the average pooling layer). 
Let us first consider the case without Fixup initialization. This bad initialization leads to an explosion at the level before the BatchNorm layer, which has a norm of 1.8×10^5 ± 3.8×10^4 at initialization. Normally, this network would not train at reasonable learning rates, but thanks to the BatchNorm at that final layer, it trains to 73.8 ± 0.1 accuracy. This is a slightly higher accuracy than with Fixup only, but lower than with BatchNorm in every layer. This shows that preventing explosion in the feature embedding is sufficient to allow training even without proper initialization. In combination with Fixup, the accuracy rises to 74.9 ± 0.0. This network contains over 100× fewer normalization layers, but recovers nearly 70% of the generalization improvement with respect to Fixup. This corroborates the previous results with additive penalties: proper initialization and controlling the feature embedding norm recover a large part of the boost of BatchNorm.\nHow much do these methods increase natural robustness to Dropout? Section 3.1 establishes a link between BatchNorm, embedding L2 and Dropout, with a link to functional L2 at initialization only. We showed that a low feature embedding norm should correlate with Dropout regularization. Table 3 shows the robustness to Dropout at test time for networks that were not trained with Dropout. We see that networks with embedding L2 suffer considerably less from Dropout corruption at test time compared to the baseline. Similarly, the use of BatchNorm also boosts robustness to Dropout. While it had been observed that Dropout regularization during training is less effective for networks with BatchNorm, this is the first time it has been shown that BatchNorm networks have a naturally higher robustness to Dropout." }, { "heading": "4.5 REGULARIZATION WITH BATCHNORM", "text": "In this section, we evaluate using the regularizations on the final layer together with BatchNorm. As explained in Section 2.1, BatchNorm already reduces the embedding L2 to some degree. However, BatchNorm and weight decay by themselves do not make it easy to control the strength of the regularization. On the one hand, it would require using independent weight decay coefficients for the parameters γ and the other parameters, and on the other hand, they only approximately reduce these quantities. Embedding L2 allows us to minimize this quantity directly.\nTable 4 shows our results for Efficientnet B8. This model produces state-of-the-art results for Imagenet without additional data sources. We find that embedding and functional L2 boost accuracy by 0.9% compared to Efficientnet B8 without Dropout. This improvement matches that of Dropout. It is surprising that such a simple additive penalty improves results for a state-of-the-art architecture at the same level as the more complicated Dropout regularization. This is even more surprising considering that the baseline is already regularized by powerful methods like RandAugment (Cubuk et al., 2019b) and StochasticDepth (Huang et al., 2016). This suggests that further investigation of simple regularizations could be beneficial even for state-of-the-art models." }, { "heading": "5 CONCLUSION", "text": "In this work, we have quantified a number of the regularization effects of BatchNorm. Surprisingly, our empirical study shows that preventing explosion of the feature embedding norm at initialization and during training can recover a large part of the generalization improvement of BatchNorm. 
Furthermore, this much simpler regularization produces improvements on state-of-the-art architectures such as Efficientnet-B8. We hope that these results will help guide the way to more effective regularization methods." }, { "heading": "A APPENDIX", "text": "Here we provide the derivation for Equation 1. Consider the Taylor expansion of the following expression around ρ = 0:\n‖Swish(x)‖^2 = (1/4)‖x‖^2 + O(|ρ|). (8)\nPutting this together with the mean squared norm of the feature embedding\nE[‖Emb(x)‖^2] = E[‖Swish(γ·BatchNorm(PreEmb(x)) + β)‖^2], (9)\nwe have\nE[‖Emb(x)‖^2] = (1/4)·E[‖γ·BatchNorm(PreEmb(x)) + β‖^2] + O(|ρ|). (10)\nThis further simplifies due to the normalizing effect of BatchNorm:\nE[‖γ·BatchNorm(PreEmb(x)) + β‖^2] = Σ_i E[(γ_i·BatchNorm(PreEmb(x))_i + β_i)^2] (11)\n= ‖γ‖^2 + ‖β‖^2, (12)\nyielding the result\nE[‖Emb(x)‖^2] = (1/4)‖γ‖^2 + (1/4)‖β‖^2 + O(|ρ|). (13)" } ]
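The first step of the derivation above (Equation 8) can be sanity-checked numerically; the small NumPy sketch below is ours, not the authors' code.

```python
import numpy as np

def swish(x, rho):
    """Swish(x) = x * sigmoid(rho * x)."""
    return x / (1.0 + np.exp(-rho * x))

x = np.random.default_rng(1).standard_normal(1000)
for rho in (0.5, 0.1, 0.01):
    err = abs(np.sum(swish(x, rho) ** 2) - 0.25 * np.sum(x ** 2))
    print(rho, err)  # the gap shrinks as rho -> 0, consistent with the O(|rho|) term
```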
2021
null
SP:da34f0f0c8f4887dc84cdb63ec13ac7550e0c37c
[ "This paper presents a GAN framework to learn spatial and temporal representation on complex physical surfaces to apply to simulation. The method represents data in a SDF-like way so it is agnostic to the properties of material and simulation model. The network is built upon conventional GAN, with two discriminator for temporal and spatial. A loss function is proposed to evaluates on a grid-based Fourier transform of the output and ground truth to better preserve high frequency details (temporally and spatially). Results show performance on a physical simulation with opaque, elasto-plastic materials." ]
We present a new method for reconstructing and refining complex surfaces based on physical simulations. Taking a roughly approximated simulation as input, our method infers corresponding spatial details while taking into account how they evolve over time. We consider this problem in terms of spatial and temporal frequencies, and leverage generative adversarial networks to learn the desired spatiotemporal signal for the surface dynamics. Furthermore, we investigate the possibility of training our network in an unsupervised manner, i.e. without predefined training pairs. We highlight the capabilities of our method with a set of synthetic wave function tests and complex 3D dynamics of elasto-plastic materials.
[]
[ { "authors": [ "David Adalsteinsson", "James A Sethian" ], "title": "The fast construction of extension velocities in level set methods", "venue": "Journal of Computational Physics,", "year": 1999 }, { "authors": [ "Kai Bai", "Wei Li", "Mathieu Desbrun", "Xiaopei Liu" ], "title": "Dynamic upsampling of smoke through dictionary-based learning", "venue": null, "year": 1910 }, { "authors": [ "Peter W. Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Rezende", "Koray Kavukcuoglu" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Prateep Bhattacharjee", "Sukhendu Das" ], "title": "Temporal coherency based criteria for predicting video frames using deep multi-stage generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dongdong Chen", "Jing Liao", "Lu Yuan", "Nenghai Yu", "Gang Hua" ], "title": "Coherent online video style transfer", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Mengyu Chu", "Nils Thuerey" ], "title": "Data-driven synthesis of smoke flows with CNN-based feature descriptors", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Emmanuel de Bezenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep learning for physical processes: Incorporating prior scientific knowledge", "venue": "arXiv preprint arXiv:1711.07970,", "year": 2017 }, { "authors": [ "Chao Dong", "Chen Change Loy", "Kaiming He", "Xiaoou Tang" ], "title": "Image super-resolution using deep convolutional networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Robert A Gingold", "Joseph J Monaghan" ], "title": "Smoothed particle hydrodynamics: Theory and application to non-spherical stars", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 1977 }, { "authors": [ "Ian Goodfellow" ], "title": "Nips 2016 tutorial: Generative adversarial networks", "venue": "arXiv preprint arXiv:1701.00160,", "year": 2016 }, { "authors": [ "Francis Harlow", "Eddie Welch" ], "title": "Numerical Calculation of Time-dependent Viscous Incompressible Flow of Fluid with Free Surface", "venue": "Physics of Fluids,", "year": 1965 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. of IEEE Comp. Vision and Pattern Rec.,", "year": 2016 }, { "authors": [ "Yuanming Hu", "Luke Anderson", "Tzu-Mao Li", "Qi Sun", "Nathan Carr", "Jonathan Ragan-Kelley", "Frédo Durand" ], "title": "Difftaichi: Differentiable programming for physical simulation", "venue": null, "year": 1910 }, { "authors": [ "Markus Ihmsen", "Jens Orthmann", "Barbara Solenthaler", "Andreas Kolb", "Matthias Teschner" ], "title": "SPH fluids in computer graphics", "venue": "In Eurographics - State of the Art Reports,", "year": 2014 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "Proc. of IEEE Comp. Vision and Pattern Rec.,", "year": 2017 }, { "authors": [ "Alexia Jolicoeur-Martineau" ], "title": "The relativistic discriminator: a key element missing from standard GAN", "venue": "CoRR, abs/1807.00734,", "year": 2018 }, { "authors": [ "Byungsoo Kim", "Vinicius C. 
Azevedo", "Nils Thuerey", "Theodore Kim", "Markus Gross", "Barbara Solenthaler" ], "title": "Deep fluids: A generative network for parameterized fluid simulations", "venue": "Computer Graphics Forum,", "year": 2019 }, { "authors": [ "Byungsoo Kim", "Vinicius C Azevedo", "Markus Gross", "Barbara Solenthaler" ], "title": "Lagrangian neural style transfer for fluids", "venue": "ACM Trans. Graph.,", "year": 2020 }, { "authors": [ "Dan Koschier", "Jan Bender", "Barbara Solenthaler", "Matthias Teschner" ], "title": "Smoothed particle hydrodynamics techniques for the physics based simulation of fluids and solids", "venue": "In EUROGRAPHICS 2019 Tutorials. Eurographics Association,", "year": 2019 }, { "authors": [ "Lubor Ladicky", "SoHyeon Jeong", "Barbara Solenthaler", "Marc Pollefeys", "Markus Gross" ], "title": "Datadriven fluid simulations using regression forests", "venue": "ACM Trans. Graph.,", "year": 2015 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": null, "year": 2016 }, { "authors": [ "Yunzhu Li", "Jiajun Wu", "Russ Tedrake", "Joshua B. Tenenbaum", "Antonio Torralba" ], "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Julia Ling", "Andrew Kurzawski", "Jeremy Templeton" ], "title": "Reynolds averaged turbulence modelling using deep neural networks with embedded invariance", "venue": "Journal of Fluid Mechanics,", "year": 2016 }, { "authors": [ "Ding Liu", "Zhaowen Wang", "Yuchen Fan", "Xianming Liu", "Zhangyang Wang", "Shiyu Chang", "Thomas Huang" ], "title": "Robust video super-resolution with learned temporal dynamics", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jeremy Morton", "Antony Jameson", "Mykel J Kochenderfer", "Freddie Witherden" ], "title": "Deep dynamical modeling and control of unsteady fluid flows", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Damian Mrowca", "Chengxu Zhuang", "Elias Wang", "Nick Haber", "Li F Fei-Fei", "Josh Tenenbaum", "Daniel L Yamins" ], "title": "Flexible neural representation for physics prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lukas Prantl", "Boris Bonev", "Nils" ], "title": "Thuerey. Pre-computed liquid spaces with generative neural networks and optical flow", "venue": null, "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "Proc. 
ICLR,", "year": 2016 }, { "authors": [ "Masaki Saito", "Eiichi Matsumoto", "Shunta Saito" ], "title": "Temporal generative adversarial nets with singular value clipping", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Mehdi SM Sajjadi", "Raviteja Vemulapalli", "Matthew Brown" ], "title": "Frame-recurrent video superresolution", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A. Riedmiller", "Raia Hadsell", "Peter W. Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Connor Schenck", "Dieter Fox" ], "title": "SPNets: Differentiable fluid dynamics for deep neural networks", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Vincent Sitzmann", "Julien NP Martel", "Alexander W Bergman", "David B Lindell", "Gordon Wetzstein" ], "title": "Implicit neural representations with periodic activation functions", "venue": "arXiv preprint arXiv:2006.09661,", "year": 2020 }, { "authors": [ "Jos Stam" ], "title": "Stable Fluids", "venue": "In Proc. ACM SIGGRAPH, pp. 121–128", "year": 1999 }, { "authors": [ "Alexey Stomakhin", "Craig Schroeder", "Lawrence Chai", "Joseph Teran", "Andrew Selle" ], "title": "A Material Point Method for Snow Simulation", "venue": "ACM Trans. Graph.,", "year": 2013 }, { "authors": [ "Jonathan Tompson", "Kristofer Schlachter", "Pablo Sprechmann", "Ken Perlin" ], "title": "Accelerating eulerian fluid simulation with convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Kiwon Um", "Xiangyu Hu", "Nils Thuerey" ], "title": "Liquid splash modeling with neural networks", "venue": "In Computer Graphics Forum,", "year": 2018 }, { "authors": [ "Benjamin Ummenhofer", "Lukas Prantl", "Nils Thuerey", "Vladlen Koltun" ], "title": "Lagrangian fluid simulation with continuous convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Marcel Weiler", "Dan Koschier", "Magnus Brand", "Jan Bender" ], "title": "A physically consistent implicit viscosity solver for sph fluids", "venue": "Computer Graphics Forum,", "year": 2018 }, { "authors": [ "You Xie", "Erik Franz", "Mengyu Chu", "Nils" ], "title": "Thuerey. tempogan: A temporally coherent, volumetric gan for super-resolution fluid flow", "venue": "ACM Trans. Graph.,", "year": 2018 }, { "authors": [ "Lantao Yu", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Seqgan: Sequence generative adversarial nets with policy gradient", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Mohamed Akram Zaytar", "Chaker El Amrani" ], "title": "Sequence to sequence weather forecasting with long short-term memory recurrent neural networks", "venue": "International Journal of Computer Applications,", "year": 2016 }, { "authors": [ "Yongning Zhu", "Robert Bridson" ], "title": "Animating Sand as a Fluid", "venue": "ACM Trans. Graph.,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Complex and chaotic physical phenomena such as liquids, gels and goo are still very challenging when it comes to representing them as detailed and realistically as possible. A variety of numerical methods have been proposed to simulate such materials, from purely Eulerian methods (Harlow & Welch, 1965; Stam, 1999), over particle based methods (Gingold & Monaghan, 1977; Ihmsen et al., 2014), to hybrids (Zhu & Bridson, 2005; Stomakhin et al., 2013). Such simulations have also been targeted with deep learning methods (Tompson et al., 2017; Mrowca et al., 2018; Li et al., 2019), but despite significant advances, they remain very time-consuming and highly challenging to solve.\nOne approach to speed up the necessary calculations and to allow for more control is to employ super-sampling. This can be seen as a form of post-processing where one simulates only a lowresolution simulation and uses an up-sampling technique to approximate the behavior of a highresolution simulation. Neural networks are of special interest here because of their capability to efficiently approximate the strongly nonlinear behavior of physical simulations. Applying neural networks to space-time data sets of physical simulations has seen strongly growing interest in recent years (Ladicky et al., 2015; Kim et al., 2020), and is particularly interesting in this context to incorporate additional constraints, e.g., for temporal coherence (Xie et al., 2018), or for physical plausibility (Tompson et al., 2017; Kim et al., 2019).\nAn important aspect here is that methods based on simple distance losses, such as mean square errors, quickly reach their limits. The generated data tends to be smooth without the necessary small-scale features. Generative adversarial networks (GANs) have been proposed to overcome this issue (Goodfellow, 2016). They are characterized by the fact that, apart from a generative network, they also make use of a discriminator that classifies the results of the generator with respect to the ground-truth data. Via a joint training, the distribution of solutions of the generator is guided to approximate the ground-truth data distribution. As the quality of the results is primarily determined by the discriminator network, it remains an open problem to accurately evaluate the quality of the inferred results. In our work we propose to evaluate the problem in the Fourier space. In this way, we are able to evaluate the given methods reliably, and it allows us to design improved learning algorithms that more faithfully recover the small scale details of the reference data.\nFor the core of our method, we build on an existing GAN-based architecture that employs two discriminator networks, one for the spatial and one for the temporal behaviour (Xie et al., 2018). In terms of ground truth data, we focus on multi-phase (solid-fluid-air) interactions with a sharp fluid-air interface. Unlike single-phase flow whose details are visible and relevant solely due to transparency throughout the volume, the details of our data are in most cases only visible on the surface. Of course, the internal dynamics in the volume also play a role, but they are mostly hidden\nfrom the viewer, only the effects on the surface are visible. Furthermore, we consider phenomena that build up and take place over the course of several frames. 
Thus, as we will outline below, we employ a recurrent approach that is conditioned on a previous output in order to produce the solution for a subsequent timestep.
In order to represent and process fine details, we treat such detail as high-frequency displacements of a low-frequency surface, and correspondingly formulate the problem in Fourier space. The transformation into Fourier space yields an isolated view of the individual frequencies, and thus allows for a much improved analysis of the results achieved by different methods. E.g., it robustly identifies the strong smoothing behavior of L2 metrics, and can detect mode collapse problems of adversarial training runs. We also demonstrate how frequency information can be incorporated into the learning objective in order to improve results.
To summarize, the central contributions of our work are: (1) A method for frequency evaluation with a consideration of spatial properties, (2) A novel frequency-aware loss formulation, (3) A simple, yet intuitive evaluation of different generative methods, (4) A time-consistent spatio-temporal up-sampling of complex physical surfaces.
Related Work Deep learning methods in conjunction with physical models have been employed in a variety of contexts, ranging from learning models for physical intuition (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018), through robotic control (Schenck & Fox, 2018; Hu et al., 2019), to engineering applications (Ling et al., 2016; Morton et al., 2018). In the following, we focus on fluid-like materials with continuous descriptions, which encompass a wide range of behavior and pose challenging tasks for learning methods (Mrowca et al., 2018; Li et al., 2019). For fluid flows in particular, a variety of learning methods have been proposed (Tompson et al., 2017; Prantl et al., 2017; Um et al., 2018). A common approach to reduce the high computational cost of a simulation is to employ super-resolution techniques (Dong et al., 2016; Chu & Thuerey, 2017; Bai et al., 2019). In this context, our work targets up-sampling for physics-based animations, for which we leverage the approach proposed by Xie et al. (2018). However, in contrast to this work, we target phenomena with clear interfaces, which motivates the frequency-based viewpoint of our work.
For sharp interfaces, Lagrangian models are a very popular discretization of continuum mechanical systems. E.g., smoothed particle hydrodynamics (SPH) (Gingold & Monaghan, 1977; Koschier et al., 2019) is a widely-used particle-based simulation method. While points and particles are likewise frequently used representations for physical deep learning (Li et al., 2019; Ummenhofer et al., 2019; Sanchez-Gonzalez et al., 2020), Eulerian, i.e., grid-based, representations offer advantages in terms of efficient and robust kernel evaluations.
We employ generative adversarial networks (Goodfellow, 2016) as a powerful and established method for learning generative models. Here, “unconditional” GANs typically rely on a synthetic input vector from Gaussian noise to produce the desired output distribution, e.g., the DC-GAN approach (Radford et al., 2016). Conditional GANs (Mirza & Osindero, 2014) were introduced to provide the network with an input that allows the neural network to steer the generation of the output. Hence, super-resolution tasks for natural images (Ledig et al., 2016) or image translation tasks (Isola et al., 2017) employ conditional GANs. The time dimension was also taken into account in natural imaging works, e.g., by Saito et al. 
in the form of a temporal generator (Saito et al., 2017), or via a stochastic sequence generator (Yu et al., 2017). Other works have included direct L2 loss terms as temporal regularizers (Bhattacharjee & Das, 2017; Chen et al., 2017), which, however, typically strongly restrict the changes over time. Similar to flow advection, video networks also often use warping information to align data over time (Liu et al., 2017; de Bezenac et al., 2017). We will demonstrate that recurrent architectures similar to those used for video super-resolution (Sajjadi et al., 2018) are likewise well suited for physical problems over time." }, { "heading": "2 METHOD", "text": "The input for our method is a coarsely approximated source simulation, with the learning objective of inferring the surface of a target simulation over space and time. This target is typically computed via a potentially very costly, finely resolved simulation run for the same physical setup. There is a great variety of possible simulation representations. In our case we have chosen an implicit representation of the data via a signed-distance field (SDF), denoted by $g : \mathbb{R}^3 \rightarrow \mathbb{R}$.
An SDF returns, for a given point, the signed distance to the surface, with negative values being inside the medium. Such a function is realized in practice by a grid $X \in \mathbb{R}^{M_x \times M_y \times M_z}$ storing the pre-computed signed distance values, where $M_\ast$, $\ast \in \{x, y, z\}$, specifies the size of the grid in the respective dimension $x$, $y$ or $z$. We have chosen this representation because most neural network layers are designed for array-like representations, and the loss functions on grid-based data are very efficient to evaluate. Additionally, an implicit representation via a grid can leverage tools from the field of level-set processing (Adalsteinsson & Sethian, 1999), and facilitates the frequency viewpoint via a Fourier transformation. Additional values, like the velocity, are also mapped onto a grid $V \in \mathbb{R}^{M_x \times M_y \times M_z \times 3}$. Our goal is to let a generative network $\mathcal{G} : \mathbb{R}^{M_x \times M_y \times M_z \times 4} \rightarrow \mathbb{R}^{N_x \times N_y \times N_z}$ infer a grid $\tilde{Y}$ which approximates a desired high-resolution simulation $Y \in \mathbb{R}^{N_x \times N_y \times N_z}$ with $N_\ast = k M_\ast$, $\ast \in \{x, y, z\}$, and up-sampling factor $k \in \mathbb{N}$, i.e., $\mathcal{G}(X) = \tilde{Y} \approx Y$. As our method only requires position and velocity data from a simulation, it is largely agnostic to the type of solver or physical model used for generating the source and target particle data." }, { "heading": "2.1 NEURAL NETWORK FORMULATION", "text": "Our method is based on a generative neural network with a 3D fully-convolutional ResNet architecture (He et al., 2016) that produces an output field at a single instance in time. The low-resolution input data is first up-sampled with a tri-linear up-sampling and then processed with several convolutional layers, as shown in Figure 1a. We use leaky ReLU as the activation function after each layer, except for the last layer, where we use a tanh activation. In our case, the input data consists of the implicitly represented geometry data $X_t$, the velocity $V_t$ of the simulation, as well as the result of the previous pass $\tilde{Y}_{t-1}$. The previously generated data is advected with the low-resolution velocity before further processing. Through this feedback loop we train our network recurrently by iterating over a sequence of $T = 10$ frames. This yields stability over longer periods of time and gives better insights into the temporal behaviour. Furthermore, the recurrent training is important to enable persistent behavior over time, such as the progression of fine surface waves. 
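To make the recurrent formulation concrete, a minimal sketch of one training rollout is given below. It is an illustrative assumption rather than the exact implementation of Appendix A.1: `advect` and `generator` are placeholder functions, and nearest-neighbor up-sampling stands in for the tri-linear variant.

```python
import numpy as np

def advect(prev_out, velocity):
    # Placeholder advection: shift the previous high-res output by the mean
    # low-res velocity, rounded to whole cells. A full implementation would
    # back-trace each cell semi-Lagrangian style and interpolate.
    shift = np.rint(velocity.mean(axis=(0, 1, 2))).astype(int)
    return np.roll(prev_out, tuple(shift), axis=(0, 1, 2))

def upsample(grid, k):
    # Nearest-neighbor stand-in for the tri-linear up-sampling of the network.
    return grid.repeat(k, 0).repeat(k, 1).repeat(k, 2)

def generator(x_t, v_t, y_prev_advected, k=4):
    # Placeholder for the fully-convolutional ResNet G; the real network would
    # consume the concatenated inputs and refine the up-sampled SDF.
    return upsample(x_t, k)

M, T, k = 16, 10, 4
X = np.random.randn(T, M, M, M)        # low-res SDF sequence X_t
V = np.random.randn(T, M, M, M, 3)     # low-res velocity sequence V_t
y_prev = upsample(X[0], k)             # initialization of the first frame

outputs = []
for t in range(T):                     # recurrent rollout over T frames
    y_adv = advect(y_prev, V[t])       # align previous result with frame t
    y_prev = generator(X[t], V[t], y_adv)
    outputs.append(y_prev)
```

The feedback loop is what allows details such as surface waves to persist: each frame's prediction is re-aligned with the current motion and fed back into the network.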
Unlike the process for generating the input data, the network training cannot resort to a physical simulation with full resolution, and hence cannot uniquely determine the evolution of future states. Therefore, its main learning objective is to capture the dynamics of the target simulations beyond the basic motion computed with an advection step. For initialization of the undefined first frame $\tilde{Y}_{-1}$ we use a tri-linearly up-sampled version of the input. To train our network we first have to define a loss function that allows us to evaluate the differences between generated and ground-truth data. The most basic loss function is a simple mean squared error (MSE): $\mathcal{L}_s = \|Y - \tilde{Y}\|_2^2. \quad (1)$ This has the big disadvantage that it is ill-suited to measure the similarity or differences of solutions. For example, considering a function with multiple solutions for a given input, i.e., a multi-modal setting, a method that trains with an MSE loss will learn the expected value of the output distribution, i.e., the average of the different solutions. However, the average is typically not a part of the solution set. Thus, the MSE loss often does not correspond to a meaningful distance in solution space, as it ignores how the solutions are distributed. Our super-sampling setup is such a problem: due to the low-resolution input, the high-resolution details cannot be determined uniquely, resulting in a variety of possible solutions when up-sampling. Via physical properties of the material and its temporal sequence, some solutions can be eliminated, but nonetheless the space of solutions typically remains infinitely large. If an MSE loss is used, all such samples from the training data set are simply averaged to obtain a mean value, so that the result no longer reflects the level of detail of the ground-truth data.
The MSE loss nevertheless gives a rough direction, and provides a stable learning target. Hence, we still use it as a component in the final loss formulation, in combination with an adversarial loss. In contrast to a direct distance metric, the adversarial loss approximates the ground-truth distribution. Hence, the network no longer learns one mean value, but chooses one valid solution out of the possible ones. We define a discriminator $D_s$ that takes as input a high-resolution version of a simulation frame and classifies it, distinguishing between ground-truth and generated frames. It does this through a binary output, where 0 is “fake” and 1 is “real”. Its task is to provide the generator with feedback on the correctness of the given data. The special feature is that the discriminators are trained together with the generator, thus creating a competitive interaction where both parties improve each other. As loss for the discriminator we use a binary cross-entropy:
$\mathcal{L}_{bce} = y \log(\tilde{y}) + (1 - y) \log(1 - \tilde{y}), \quad (2)$
where $y$ is the ground-truth label and $\tilde{y}$ is the output of the discriminator. For complex tasks, GANs can be unstable and difficult to control. For this reason we additionally use the recent Spectral Normalization (Miyato et al., 2018), which we found to provide more stable adversarial training.
While we have so far primarily focused on spatial content, i.e., the surface of the material, the temporal behavior likewise plays a crucial role, and poses similar difficulties in our multi-modal setting. On the one hand, the generation of details can quickly lead to temporally incoherent results, which manifest as unappealing flickering. 
On the other hand, our network should also be able to match and recreate spatial solutions over time that reflect the physical behavior. Following previous work (Xie et al., 2018), we use an additional discriminator $D_t$ to classify the temporal behavior of data. This is done by passing three corresponding frames, which are aligned with each other using an advection operator $\mathcal{A} : \mathbb{R}^{N_x \times N_y \times N_z \times 3} \times \mathbb{R}^{M_x \times M_y \times M_z \times 3} \rightarrow \mathbb{R}^{N_x \times N_y \times N_z \times 3}$. Apart from this, the temporal discriminator closely follows the structure of the spatial discriminator. Both discriminators (Figure 1b) use a typical funnel structure, where the dimension is increasingly reduced using strided convolutional layers, with a final fully-connected layer computing the classification result. We likewise use leaky ReLU activations, with a sigmoid function for the last layer.
The classification of the discriminators is included in the loss formulation of the generator:
$\mathcal{L}_{D_s} = \frac{1}{T} \sum_{t} D_s(\mathcal{G}(X_t), X_t), \qquad \mathcal{L}_{D_t} = D_t(\mathcal{A}(\mathcal{G}(X_{t-1}), V_t), \mathcal{G}(X_t), \mathcal{A}(\mathcal{G}(X_{t+1}), -V_t), X_t), \quad (3)$
which gives the final loss function:
$\mathcal{L}_{\mathcal{G}} = \mathcal{L}_s + \alpha \mathcal{L}_{D_s} + \beta \mathcal{L}_{D_t}, \quad (4)$
where $\alpha$ and $\beta$ indicate the weighting of the individual loss terms.
An additional benefit of the adversarial loss is that it allows for learning from unpaired data. A common problem for up-sampling methods is the generation of paired ground-truth data for training. Due to different numerical approximations, and hence potentially differing physical behavior, the easiest solution is to simulate at high resolution and down-sample the data. While the down-sampled data is used at training time, at test time the model needs to be applied to data from a low-resolution simulation instead. This typically leads to large distribution shifts, and correspondingly impaired inference quality. Therefore, we consider an unpaired training approach that decouples the low- and high-resolution data. The feedback from the discriminators is still based on the ground-truth data, which makes the output conditionally dependent on the input, but also approximates the behavior of the reference data. However, there is no direct supervision of the generator anymore: the output is no longer compared with a matching ground-truth in the loss, but only related to the input. This is done by down-sampling the output and comparing it with the input:
$\mathcal{L}_s^{\ast} = \|X - p(\tilde{Y})\|_2^2, \quad (5)$
where $p : \mathbb{R}^{N_x \times N_y \times N_z \times 3} \rightarrow \mathbb{R}^{M_x \times M_y \times M_z \times 3}$ is a down-sampling function based on average pooling. This effectively removes the need for paired low- and high-resolution samples at training time, and fully relies on the discriminator to match both distributions.
To indicate the focus on surface structures, we refer to the final version of our generative network as surfGAN. For a more detailed description of the training and the network architecture we refer to Appendices A.1 and A.2." }, { "heading": "2.2 FREQUENCY EVALUATION", "text": "Given the formulation so far, an inherent difficulty that remains is a robust and reliable evaluation of the generated outputs. In a GAN setting, the discriminator determines the content, but it is typically not possible to evaluate whether it has correctly learned the shape of the output distribution. While obvious cases such as a mode collapse towards a constant signal are easy to detect, it remains challenging to reliably detect shifts in the data distributions, especially so for GANs (Arjovsky et al., 2017; Jolicoeur-Martineau, 2018). 
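As a concrete reference point before the frequency analysis, the generator objective of Equations 1-5 can be assembled as in the following sketch. It follows the equations literally; `G`, `D_s`, `D_t` and `advect` are assumed callables standing in for the trained networks, and in a practical GAN implementation the discriminator scores would typically enter through a (non-saturating) cross-entropy term rather than as raw values.

```python
import numpy as np

def avg_pool(y, k=4):
    # Down-sampling function p of Equation 5 via average pooling with factor k.
    nx, ny, nz = (s // k for s in y.shape)
    return y[:nx * k, :ny * k, :nz * k].reshape(nx, k, ny, k, nz, k).mean(axis=(1, 3, 5))

def generator_loss(G, D_s, D_t, advect, X, V, Y=None, alpha=1.0, beta=10.0):
    T = len(X)
    Y_hat = [G(X[t]) for t in range(T)]
    if Y is not None:  # paired setup, Equation 1
        l_s = np.mean([np.sum((Y[t] - Y_hat[t]) ** 2) for t in range(T)])
    else:              # unpaired setup, Equation 5: pooled output vs. input
        l_s = np.mean([np.sum((X[t] - avg_pool(Y_hat[t])) ** 2) for t in range(T)])
    # Equation 3: spatial and temporal discriminator scores on generated frames,
    # with the neighboring frames aligned by advection.
    l_ds = np.mean([D_s(Y_hat[t], X[t]) for t in range(T)])
    l_dt = np.mean([D_t(advect(Y_hat[t - 1], V[t]), Y_hat[t],
                        advect(Y_hat[t + 1], -V[t]), X[t])
                    for t in range(1, T - 1)])
    return l_s + alpha * l_ds + beta * l_dt  # Equation 4
```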
In our setting, the outputs should on the one hand represent a high-resolution version of the input, i.e., the down-scaled output should correspond to the input. This can be measured with a simple MSE. In addition, we also expect the generation of details that match the ground-truth as closely as possible. The MSE is no longer usable for this, as even if the right details are generated, a slight translation would already lead to substantial errors. In addition, small-scale features have little effect on the MSE, despite being crucial for a realistic simulation result. Finally, the temporal behaviour should also correspond to that of the ground-truth.
Considering the problem in frequency space, the sought-after detail consists of high-frequency features that cannot be represented by the low-resolution simulation. In addition, we have to distinguish between spatial and temporal frequencies. The frequency-based view has the advantage that it yields a simple but powerful performance evaluation that complements the discriminator of the GAN training. For the frequency evaluation we consider the SDF of our implicit grid $G_t \in \mathbb{R}^{N_x \times N_y \times N_z}$ for $t \in [0, T)$, which can be equated with the frequency behavior of the surface. In order to retain spatial properties in addition to the local frequency behaviour of the surface, we divide our data into blocks of size $b^3$ whose Fourier coefficients $g_{k_x, k_y, k_z}$ we evaluate, where $k_\ast \in [0, b)$. The method is similar to the one used for JPEG compression. Therefore, we divide the domain into $N'_x \times N'_y \times N'_z$ regions for the spatial frequency evaluation, where $N'_\ast = N_\ast / b$. Using a three-dimensional fast Fourier transform (FFT), we then transform the extracted blocks into the Fourier components:
$g_{k_x, k_y, k_z} = \frac{1}{b^3} \left\| \sum_{n_x=0}^{b-1} \left( \omega^{k_x n_x} \sum_{n_y=0}^{b-1} \left( \omega^{k_y n_y} \sum_{n_z=0}^{b-1} \omega^{k_z n_z} \, g(n_x, n_y, n_z) \right) \right) \right\|, \quad (6)$
where $\omega = e^{-i 2\pi / b}$ and $g(n_x, n_y, n_z)$ denotes the values of the extracted block. An important point here is that we take the absolute value of the coefficients in order to eliminate the phase, and focus only on the amplitude of the frequency component. This makes the evaluation more robust to translated small-scale structures. Equation 6 yields $b \times b \times b$ Fourier components per block. The individual components can now be grouped with the same components from other blocks so that we get one $N_x/b \times N_y/b \times N_z/b$ grid per Fourier component (Figure 2), with each grid corresponding to a certain frequency. These grids can be further processed, e.g., via computational kernels, and in addition can be inspected by humans for verification. For the temporal frequencies we consider the changes per pixel over time via FFT. For longer sequences, a block-wise evaluation would be conceivable for time as well. The separation of spatial and temporal frequencies allows us to compensate for differences by adjusting the weighting. Based on this setup we can now evaluate results. On the one hand, we can directly compare the frequency range of generated data and identify missing details for single samples or mini-batches. E.g., this is amenable to loss formulations for learning objectives. On the other hand, we can also create a histogram for the frequencies of the whole solution space and thus compare the frequency distribution of the generated and ground-truth data. This is especially helpful for GANs and also works for unsupervised setups." }, { "heading": "3 RESULTS", "text": "For evaluation we consider two different data sets: a synthetic 2D case and a 3D particle-based simulation. 
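The block-wise evaluation of Equation 6, which the following experiments rely on, reduces to a few lines of numpy; the block size and the random test grids below are illustrative choices.

```python
import numpy as np

def blockwise_spectrum(grid, b=8):
    # Split a (Nx, Ny, Nz) grid into non-overlapping b^3 blocks, apply a 3D FFT
    # per block, and keep only the magnitudes to discard the phase (Equation 6).
    # Result: shape (b, b, b, Nx/b, Ny/b, Nz/b), i.e. one low-resolution grid
    # per Fourier component, as in Figure 2.
    nx, ny, nz = (s // b for s in grid.shape)
    blocks = grid[:nx * b, :ny * b, :nz * b].reshape(nx, b, ny, b, nz, b)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5)      # (nx, ny, nz, b, b, b)
    coeffs = np.abs(np.fft.fftn(blocks, axes=(3, 4, 5))) / b ** 3
    return coeffs.transpose(3, 4, 5, 0, 1, 2)

# Example: mean per-frequency deviation between a prediction and the reference.
gt = np.random.randn(64, 64, 64)
pred = np.random.randn(64, 64, 64)
err = np.abs(blockwise_spectrum(gt) - blockwise_spectrum(pred)).mean(axis=(3, 4, 5))
```

Averaging the resulting per-component grids, as in the last line, yields the kind of per-frequency error values reported in the tables and histograms below.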
The 2D data is fully controllable such that the frequency spectrum of the surface can be evaluated reliably. The basis is a wavy surface, based on a sine wave with varying frequency (Figure 3). This provides a broad basis for analysis, allowing us to isolate problems in the generative process and to illustrate the aspects of the proposed method. We then evaluate the established methodology in 3D for a more complex scenario with data generated from a highly viscous SPH simulation (Weiler et al., 2018). For more information on both data sets, refer to Appendices A.3 and A.4." }, { "heading": "3.1 FREQUENCY EVALUATION", "text": "We first consider the controlled 2D data. Training is performed with the full data set of 10000 samples with 30 frames each, and we evaluate the resulting models on 10 simulations of 30 frames each that are not part of the training set. Examples of the block-wise analysis from our frequency evaluation (Figure 2) are shown in Figure 4. From top to bottom, it compares the ground-truth, a network with MSE loss, and the surfGAN result. While the MSE-based variant is not able to reconstruct fine details, the surfGAN can recover these, as highlighted by the frequency evaluation. The high-frequency blocks clearly illustrate the differences in terms of reconstruction accuracy. In Table 1 and Figure 5 we also compare the mean error values of the different blocks quantitatively and via histograms. The mean errors likewise illustrate that the MSE version is not able to reconstruct the high frequencies of the target function. The spectrum of the generated data hardly shows any deflection in the high-frequency range (Figure 5(a)). We additionally extended this fully supervised MSE loss with a term that takes into account differences in the frequency spectrum via an L2 norm between the Fourier components of the data. Figure 5(b) shows that this improves the situation, but does not suffice to reconstruct the full high-frequency range. This is caused by the inherent averaging of the MSE, which is still suboptimal in Fourier space. In contrast, our surfGAN setup (Figure 5(c)) achieves the best results. The spectrum of the prediction very closely matches that of the ground truth. The GAN-based method seems to be able to implicitly learn the underlying frequency distribution. Finally, we repeat the evaluation with an unpaired surfGAN setup. Despite the fundamentally more challenging learning setup, the network manages to recover the missing frequencies, as shown in Figure 5(d). Interestingly, the discriminator feedback causes the generator to slightly overshoot in terms of high-frequency content.
For our 3D results we have collected the values in Table 2. We compare our method with tempoGAN, whose performance suffers mainly from the missing recursion loop. It is clearly visible that surfGAN has the smallest error in the spatial frequencies, which correlates with the level of spatial detail. The temporal frequency error, on the other hand, is almost equal to that of the network with the MSE loss. This is probably due to the fact that the temporal behavior is relatively smooth." }, { "heading": "3.2 QUALITATIVE RESULTS", "text": "As a qualitative evaluation, we ran our network on test data and visualize the results. Figure 6 shows visual examples of the 2D frequency evaluation. While a version based on MSE generates very smooth images on the random data set, the surfGAN versions are able to reconstruct the jagged edges. 
In direct comparison with the ground-truth data there are differences, but this is due to the randomness of the data. Therefore, the exact solution cannot be reconstructed, but the surfGAN is still able to generate a very plausible solution.
For the 3D case we focus on the paired surfGAN setup. Figure 7(a) shows that our prediction is able to reconstruct most of the details, even if they are not present in the input. We have deliberately chosen frames after a long run-time (190-220 time-steps) to show that details can persist and that they are the result of complex, physically-based behavior over multiple frames. Again, we compared our method with a simpler MSE-based generator and with tempoGAN. With MSE, the results become comparatively smooth, as can be seen in Figure 7(b). Figure 7(c), however, shows the results of tempoGAN, where details are generated quite randomly and display a rather chaotic behavior over time. Due to the lack of recurrent processing, the network cannot build an accurate internal state, which has the effect that the details are generated unnaturally." }, { "heading": "4 CONCLUSION", "text": "We have presented a learning-based method to infer spatio-temporal detail for complex physical simulations. Our method puts special emphasis on high-frequency content, and we present an approach for assessing GAN-based outputs in terms of the generated spatial and temporal frequencies. Interestingly, our proposed surfGAN performs better than direct supervision in frequency space. Our method provides a first step towards the evaluation and synthesis of physical space-time processes, and could be employed for other phenomena such as turbulence (Ling et al., 2016) or weather (Zaytar & El Amrani, 2016). Furthermore, it will be interesting to employ it in conjunction with other frequency-based representations (Sitzmann et al., 2020)." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS
The main part of the generator consists of four blocks, each containing two convolution layers and a residual connection. The feature counts of the convolution layers per block are as follows: [[8, 32], [64, 64], [32, 8], [1, 1]]. The kernel size is 5 × 5 × 5 for all layers. For the discriminators we use only convolution layers and no residual blocks. Both discriminators, spatial and temporal, have the same number of features per layer: [8, 16, 32, 32]. The fully-connected layers at the end consist of 64 and 1 neurons. The kernel size is 4 × 4 × 4 for all layers. It is important to mention that for the first convolution of every network we do not use zero padding as usual, but mirror padding. This is because we work with tiles from the training data and not with the complete frame, and zero padding falsifies the values at the transitions between the tiles.
A.2 TRAINING DETAILS
For the training, we implemented our network with the TensorFlow framework. We use an Adam optimizer with a learning rate of 0.00001 and a batch size of 16 for the 2D tests and 4 for the 3D tests, for 50k iterations. All weights are initialized with the respective standard initializers of TensorFlow version 2.1. The weighting factors $\alpha$ and $\beta$ of Equation 4 are set to 1.0 and 10.0, respectively.
A.3 SYNTHETIC DATA SET
We use a synthetic data set to test and evaluate different aspects of our approach. The data set is designed to have a simple, clearly defined behavior, such that the frequency spectrum of the surface can be evaluated reliably. 
Therefore, we use a horizontal wavy surface based on a 1D sine function $s_t(x)$ with randomly varying frequency $f_l \in (0, M/2)$:
$s_0(x) = \sin\left(\frac{\pi f_l(x) \, x}{M}\right), \quad x \in [0, M), \quad (7)$
where $M$ is the resolution of the low-resolution data.
To make a time sequence out of this, we use a simple wave equation:
$\frac{\partial^2 s}{\partial t^2} = \frac{\partial^2 s}{\partial x^2}. \quad (8)$
Discretizing this, we can calculate the vertical velocity of our surface as follows:
$v_t(x) = v_{t-1}(x) + \frac{2 s_t(x) - s_t(x - \Delta x) + s_t(x + \Delta x)}{\Delta x}, \quad (9)$
where $v_0(x) = 0$. Given the velocity, we can now calculate the next frame as follows:
$s_t(x) = s_{t - \Delta t}(x) + \Delta t \, v_t(x). \quad (10)$
For the high-resolution data set, i.e., the targets to be learned, we use the same low-resolution wave as a base, and modulate it with a high-frequency component (Figure 8):
$t_0(x) = \sin\left(\frac{\pi f_l(x) \, x}{kM}\right) + \sin\left(\frac{\pi f_h(x) \, x}{kM}\right), \quad x \in [0, kM), \quad (11)$
where $k$ is the chosen up-sampling factor and the frequency $f_h(x)$ is chosen so that it cannot be represented by the low-resolution version. According to the Nyquist-Shannon sampling theorem, the frequency should be higher than $M/2$ and below $kM/2$ (Figure 3(a)). The generation of a sequence is done in the same way as for the low-resolution data.
Based on this setup, we generate two different high-resolution data sets: one where we modulate with a fixed high frequency, $f_h(x) = \text{const}$, and a second version where we vary this high-frequency component (Figure 3(b)). Thus, the first version represents a deterministic up-sampling, which a generator should be able to reconstruct perfectly, whereas the second version is ill-posed, i.e., several solutions are possible, and hence poses a much more difficult learning target. Both data sets consist of 10000 sequences with 30 frames each. While the deterministic version only serves as a sanity check, which we only discuss in the appendix (A.5), the second, randomized version shows how well the method can approximate the ground-truth distribution for ill-posed tasks like the actual super-resolution problem for physical simulation data.
A.4 SIMULATION DATA
For the generation of simulation data, we use an SPH solver of the SPlisHSPlasH framework (Bender, 2017). There are different materials to choose from. With materials that exhibit high-frequency physical behavior, chaotic behavior occurs in some cases, such as splashes in water, which are typically very difficult to reconstruct. With more viscous materials, such as gel, details are mainly distinguished by folds and fine waves on the surface. Chaotic behaviour is very difficult to reconstruct, and it is often hard to judge how correct the reconstructed behaviour is. For these reasons we focus on materials like gel. With gel, fine wrinkles can form on the surface, which allows a good evaluation of the method. Another special feature is that details are persistent over time. Methods such as Xie et al. (2018) cannot represent such persistent details because they lack a feedback loop in the network, i.e., they have no memory. Therefore, we use a plastic material with high viscosity, simulated with an advanced viscosity solver (Weiler et al., 2018).
For the training we generate 60 different scenes with 300 frames per simulation. The time step corresponds to 100 frames per second. The scenes consist of randomly generated shapes that fall into a pool from different heights at random times. This creates interesting waves and folds on the surface. The ground-truth resolution is $160^3$. 
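Returning to the synthetic data set of Appendix A.3, its generation (Equations 7-11) is compact enough to sketch directly. The version below makes several simplifying assumptions: a single random frequency per sequence instead of the spatially varying f_l(x) and f_h(x), the standard three-point Laplacian for the discretized wave equation, and periodic boundaries, so constants and details are illustrative.

```python
import numpy as np

def make_sequence(M=32, k=4, T=30, dt=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    f_l = rng.uniform(0.0, M / 2)            # low frequency, Equation 7
    f_h = rng.uniform(M / 2, k * M / 2)      # high frequency below Nyquist, Eq. 11
    s = np.sin(np.pi * f_l * np.arange(M) / M)                      # s_0
    t = (np.sin(np.pi * f_l * np.arange(k * M) / (k * M))
         + np.sin(np.pi * f_h * np.arange(k * M) / (k * M)))        # t_0
    v_lo, v_hi = np.zeros(M), np.zeros(k * M)                       # v_0 = 0

    def step(surface, v):
        # Discretized wave equation update (cf. Equations 9-10),
        # periodic boundaries via np.roll.
        lap = np.roll(surface, 1) - 2 * surface + np.roll(surface, -1)
        v = v + dt * lap
        return surface + dt * v, v

    lo, hi = [s], [t]
    for _ in range(T - 1):
        s, v_lo = step(s, v_lo)
        t, v_hi = step(t, v_hi)
        lo.append(s)
        hi.append(t)
    return np.stack(lo), np.stack(hi)      # (T, M) and (T, kM) sequences
```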
For the generation of the low-resolution data, we distinguish between the two training setups. For the supervised setup, the ground-truth data is scaled down by the desired up-sampling factor $k$ and then smoothed with a Gaussian blur. This results in synchronous data pairs that can be used in a supervised setup. In the unsupervised setup, however, the low-resolution data is generated in the same way as the high-resolution data, with the correspondingly lower resolution. From the position and velocity data of the particles, we then generate our SDF and velocity grids, which we use for training. Before the training, the data is normalized over the whole data set, so that the values lie in the range between -1 and 1. For a sharper edge in the SDF grid, which prevents unwanted noise in the generated data, we additionally modify the normalized values with a hyperbolic tangent function:
$f(x) = \tanh(cx), \quad (12)$
where $c$ can be freely selected, depending on the desired sharpness of the transition. We found 5 to be a good value. Finally, we take advantage of the locality of our problem and only use tiles extracted from the training data frames during training. On the one hand, this saves memory, as the full frames are sometimes very large and can exceed the GPU memory; on the other hand, it allows us to extract only the relevant parts of the data. In our case, we only consider regions that contain a surface. Finally, we can augment the data by extracting overlapping tiles, so that we obtain a lot of information from only a few frames.
A.5 ADDITIONAL RESULTS
As a sanity check we used a deterministic setup as described in A.3. With this setup we tested whether the network is able to modulate a static high frequency with a simple MSE loss. Figure 9a shows that this is the case. This serves as a baseline for what the network can achieve. In Table 9b we compare the frequency deviation with that of the MSE network trained with the non-deterministic setup. As expected, the error is much smaller with the deterministic setup.
In Figure 10 and Figure 11 we compare the frequency evaluation of two more data samples, analogous to Figure 4.
Finally, additional 3D results are shown in Figure 12 and Figure 13." } ]
2020
null
SP:96e4c8e540941178aa3a9d9c0f11a58128a87e26
[ "This paper investigates „lookahead dynamics of smooth games“. By this the authors mean discrete-time dynamical systems generating from a given algorithm by adding a relaxation step in the updates. The main aim of the paper is to solve smooth games. Under sufficient convexity assumptions Nash equilibria for such games can be identified as solutions to a Variational Inequality with a monotone and operator. This is in particular the case for convex-concave min-max problems. The main conclusion of this paper is that a combination of relaxation and lookahead effects stabilizes the learning dynamics and can lead to acceleration over the base algorithm." ]
As multi-agent systems proliferate in machine learning research, games have attracted much attention as a framework to understand optimization of multiple interacting objectives. However, a key challenge in game optimization is that, in general, there is no guarantee for usual gradient-based methods to converge to a local solution of the game. The latest work by Chavdarova et al. (2020) reports that the Lookahead optimizer (Zhang et al., 2019) significantly improves the performance of Generative Adversarial Networks (GANs) and reduces the rotational force of bilinear games. While promising, their observations were purely empirical, and Lookahead optimization of smooth games still lacks theoretical understanding. In this paper, we fill this gap by theoretically characterizing the Lookahead dynamics of smooth games. We provide an intuitive geometric explanation of how and when Lookahead can improve game dynamics in terms of stability and convergence. Furthermore, we present sufficient conditions under which Lookahead optimization of bilinear games provably stabilizes or accelerates convergence to a Nash equilibrium of the game. Finally, we show that the Lookahead optimizer preserves locally asymptotically stable equilibria of the base dynamics, and can either stabilize or accelerate the local convergence to a given equilibrium under proper assumptions. We verify our theoretical predictions by conducting numerical experiments on two-player zero-sum (non-linear) games.
[]
[ { "authors": [ "Waiss Azizian", "Damien Scieur", "Ioannis Mitliagkas", "S. Lacoste-Julien", "Gauthier Gidel" ], "title": "Accelerating smooth games by manipulating spectral shapes", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "D. Balduzzi", "Sébastien Racanière", "J. Martens", "Jakob N. Foerster", "K. Tuyls", "T. Graepel" ], "title": "The Mechanics of n-Player Differentiable Games", "venue": null, "year": 2018 }, { "authors": [ "Trapit Bansal", "Jakub W. Pachocki", "S. Sidor", "Ilya Sutskever", "Igor Mordatch" ], "title": "Emergent Complexity via Multi-Agent Competition", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "D.P. Bertsekas" ], "title": "Nonlinear Programming", "venue": "Athena Scientific,", "year": 1999 }, { "authors": [ "A. Brock", "J. Donahue", "K. Simonyan" ], "title": "Large Scale GAN Training for High Fidelity Natural Image Synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Tatjana Chavdarova", "Matteo Pagliardini", "M. Jaggi", "Franois Fleuret" ], "title": "Taming GANs with Lookahead", "venue": "ArXiv, abs/2006.14567,", "year": 2020 }, { "authors": [ "C. Chen" ], "title": "Linear System Theory and Design", "venue": null, "year": 1995 }, { "authors": [ "X. Chen", "X. Deng", "S. Teng" ], "title": "Settling the Complexity of Computing Two-player Nash Equilibria", "venue": "J. ACM,", "year": 2009 }, { "authors": [ "C. Daskalakis", "Ioannis Panageas" ], "title": "The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "C. Daskalakis", "P. Goldberg", "C. Papadimitriou" ], "title": "The Complexity of Computing a Nash Equilibrium", "venue": "Electron. Colloquium Comput. Complex.,", "year": 2006 }, { "authors": [ "C. Daskalakis", "Andrew Ilyas", "Vasilis Syrgkanis", "Haoyang Zeng" ], "title": "Training GANs with Optimism", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "J. Donahue", "K. Simonyan" ], "title": "Large Scale Adversarial Representation Learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "F. Facchinei", "J. Pang" ], "title": "Finite-Dimensional Variational Inequalities and Complementarity Problems", "venue": null, "year": 2003 }, { "authors": [ "Justin Fu", "Katie Luo", "S. Levine" ], "title": "Learning Robust Rewards with Adversarial Inverse Reinforcement Learning", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Gauthier Gidel", "Hugo Berard", "Pascal Vincent", "S. Lacoste-Julien" ], "title": "A Variational Inequality Perspective on Generative Adversarial Nets", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Gauthier Gidel", "Reyhane Askari Hemmat", "M. Pezeshki", "Gabriel Huang", "Rémi Le Priol", "S. LacosteJulien", "Ioannis Mitliagkas" ], "title": "Negative Momentum for Improved Game Dynamics", "venue": "AISTAT,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "M. Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "CoRR, abs/1412.6572,", "year": 2015 }, { "authors": [ "Ishaan Gulrajani", "F. Ahmed", "Martı́n Arjovsky", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Improved training of wasserstein", "venue": "gans. 
ArXiv,", "year": 2017 }, { "authors": [ "Ya-Ping Hsieh", "Panayotis Mertikopoulos", "Volkan Cevher" ], "title": "The Limits of Min-Max Optimization Algorithms: Convergence to Spurious Non-critical", "venue": "Sets. ArXiv,", "year": 2020 }, { "authors": [ "Tero Karras", "S. Laine", "Timo Aila" ], "title": "A Style-Based Generator Architecture for Generative Adversarial Networks", "venue": null, "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "M. G" ], "title": "Korpelevich. The Extragradient Method for Finding Saddle Points and Other Problems", "venue": "Ekonomika i Matematicheskie Metody,", "year": 1976 }, { "authors": [ "Marc Lanctot", "V. Zambaldi", "A. Gruslys", "A. Lazaridou", "K. Tuyls", "Julien Pérolat", "D. Silver", "T. Graepel" ], "title": "A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "J. Lee", "Ioannis Panageas", "G. Piliouras", "Max Simchowitz", "Michael I. Jordan", "B. Recht" ], "title": "First-order methods almost always avoid saddle points", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Tengyuan Liang", "J. Stokes" ], "title": "Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "N. Loizou", "Hugo Berard", "Alexia Jolicoeur-Martineau", "Pascal Vincent", "S. Lacoste-Julien", "Ioannis Mitliagkas" ], "title": "Stochastic Hamiltonian Gradient Methods for Smooth Games", "venue": "In ICML,", "year": 2020 }, { "authors": [ "A. Madry", "Aleksandar Makelov", "L. Schmidt", "D. Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "B. Martinet" ], "title": "Brève Communication. Régularisation d’inéquations Variationnelles par Approximations Successives. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et", "venue": "Analyse Numérique,", "year": 1970 }, { "authors": [ "Eric V. Mazumdar", "Michael I. Jordan", "S. Sastry" ], "title": "On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum", "venue": "Games. ArXiv,", "year": 2019 }, { "authors": [ "P. Mertikopoulos", "C. Papadimitriou", "G. Piliouras" ], "title": "Cycles in Adversarial Regularized Learning", "venue": "In SODA,", "year": 2018 }, { "authors": [ "P. Mertikopoulos", "Houssam Zenati", "Bruno Lecouat", "Chuan-Sheng Foo", "V. Chandrasekhar", "G. Piliouras" ], "title": "Mirror descent in saddle-point problems: Going the extra (gradient", "venue": "mile. ArXiv,", "year": 2019 }, { "authors": [ "Lars M. Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "The Numerics of GANs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Lars M. Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which Training Methods for GANs do actually Converge", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Aryan Mokhtari", "A. Ozdaglar", "Sarath Pattathil" ], "title": "A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Cristinel Mortici", "Hary M Srivastava" ], "title": "Estimates for the Arctangent Function Related to Shafer’s Inequality", "venue": "Colloq. Math,", "year": 2014 }, { "authors": [ "A. 
Nemirovski" ], "title": "Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems", "venue": "SIAM J. Optim.,", "year": 2004 }, { "authors": [ "L. Popov" ], "title": "A modification of the arrow-hurwicz method for search of saddle points", "venue": "Mathematical notes of the Academy of Sciences of the USSR,", "year": 1980 }, { "authors": [ "R.T. Rockafellar" ], "title": "Monotone Operators and the Proximal Point Algorithm", "venue": "Siam Journal on Control and Optimization,", "year": 1976 }, { "authors": [ "Tim Salimans", "Ian J. Goodfellow", "W. Zaremba", "Vicki Cheung", "A. Radford", "Xi Chen" ], "title": "Improved techniques for training", "venue": "gans. ArXiv,", "year": 2016 }, { "authors": [ "Florian Schäfer", "Anima Anandkumar" ], "title": "Competitive Gradient Descent", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "D. Silver", "T. Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "A. Guez", "Marc Lanctot", "L. Sifre", "D. Kumaran", "T. Graepel", "T. Lillicrap", "K. Simonyan", "Demis Hassabis" ], "title": "A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-play", "venue": null, "year": 2018 }, { "authors": [ "P. Tseng" ], "title": "On Linear Convergence of Iterative Methods for the Variational Inequality Problem", "venue": "Journal of Computational and Applied Mathematics,", "year": 1995 }, { "authors": [ "Katrina McKinney", "O. Smith", "T. Schaul", "T. Lillicrap", "K. Kavukcuoglu", "Demis Hassabis", "Chris Apps", "D. Silver" ], "title": "Grandmaster Level in StarCraft II using Multi-agent", "venue": "Reinforcement Learning. Nature,", "year": 2019 }, { "authors": [ "Jianyu Wang", "Vinayak Tantia", "Nicolas Ballas", "Michael Rabbat" ], "title": "Lookahead Converges to Stationary Points of Smooth Non-convex Functions", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Yasin Yazici", "C.S. Foo", "S. Winkler", "Kim-Hui Yap", "G. Piliouras", "V. Chandrasekhar" ], "title": "The Unusual Effectiveness of Averaging in GAN Training", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Guojun Zhang", "Y. Yu" ], "title": "Convergence of Gradient Methods on Bilinear Zero-Sum Games", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "M. Zhang", "J. Lucas", "Geoffrey E. Hinton", "Jimmy Ba" ], "title": "Lookahead Optimizer: k Steps Forward, 1 Step Back", "venue": "In NeurIPS,", "year": 2019 } ]
[ { "heading": null, "text": "As multi-agent systems proliferate in machine learning research, games have attracted much attention as a framework to understand optimization of multiple interacting objectives. However, a key challenge in game optimization is that, in general, there is no guarantee for usual gradient-based methods to converge to a local solution of the game. The latest work by Chavdarova et al. (2020) report that Lookahead optimizer (Zhang et al., 2019) significantly improves the performance of Generative Adversarial Networks (GANs) and reduces the rotational force of bilinear games. While promising, their observations were purely empirical, and Lookahead optimization of smooth games still lacks theoretical understanding. In this paper, we fill this gap by theoretically characterizing Lookahead dynamics of smooth games. We provide an intuitive geometric explanation on how and when Lookahead can improve game dynamics in terms of stability and convergence. Furthermore, we present sufficient conditions under which Lookahead optimization of bilinear games provably stabilizes or accelerates convergence to a Nash equilibrium of the game. Finally, we show that Lookahead optimizer preserves locally asymptotically stable equilibria of base dynamics, and can either stabilize or accelerate the local convergence to a given equilibrium with proper assumptions. We verify our theoretical predictions by conducting numerical experiments on two-player zero-sum (non-linear) games." }, { "heading": "1 INTRODUCTION", "text": "Recently, a plethora of learning problems have been formulated as games between multiple interacting agents, including Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Brock et al., 2019; Karras et al., 2019), adversarial training (Goodfellow et al., 2015; Madry et al., 2018), self-play (Silver et al., 2018; Bansal et al., 2018), inverse reinforcement learning (RL) (Fu et al., 2018) and multi-agent RL (Lanctot et al., 2017; Vinyals et al., 2019). However, the optimization of interdependent objectives is a non-trivial problem, in terms of both computational complexity (Daskalakis et al., 2006; Chen et al., 2009) and convergence to an equilibrium (Goodfellow, 2017; Mertikopoulos et al., 2018; Mescheder et al., 2018; Hsieh et al., 2020). In particular, gradient-based optimization methods often fail to converge and oscillate around a (local) Nash equilibrium of the game even in a very simple setting (Mescheder et al., 2018; Daskalakis et al., 2018; Mertikopoulos et al., 2019; Gidel et al., 2019b;a). To tackle such non-convergent game dynamics, a huge effort has been devoted to developing efficient optimization methods with nice convergence guarantees in smooth games (Mescheder et al., 2017; 2018; Daskalakis et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019b;a; Schäfer & Anandkumar, 2019; Yazici et al., 2019; Loizou et al., 2020).\nMeanwhile, Chavdarova et al. (2020) have recently reported that the Lookahead optimizer (Zhang et al., 2019) significantly improves the empirical performance of GANs and reduces the rotational force of a bilinear game dynamics. Specifically, they demonstrate that class-unconditional GANs trained by a Lookahead optimizer can outperform class-conditional BigGAN (Brock et al., 2019) trained by Adam (Kingma & Ba, 2015) even with a model of 1/30 parameters and negligible computation overheads. 
They also show that Lookahead optimization of a stochastic bilinear game tends to be more robust against large gradient variance than other popular first-order methods, and converges to a Nash equilibrium of the game where other methods fail.
Despite its great promise, the study of Chavdarova et al. (2020) relied on purely empirical observations, and the dynamics of Lookahead game optimization still lacks theoretical understanding. Specifically, many open questions, such as the convergence properties of Lookahead dynamics and the impact of its hyperparameters on the convergence, remain unexplained. In this work, we fill this gap by theoretically characterizing the Lookahead dynamics of smooth games. Our contributions are summarized as follows:
• We provide an intuitive geometric explanation of how and when Lookahead can improve game dynamics in terms of stability and convergence to an equilibrium.
• We analyze the convergence of Lookahead dynamics in bilinear games and present sufficient conditions under which the base dynamics can be either stabilized or accelerated.
• We characterize the limit points of Lookahead dynamics in terms of their stability and local convergence rates. Specifically, we show that Lookahead (i) preserves locally asymptotically stable equilibria of the base dynamics and (ii) can either stabilize or accelerate the local convergence to a given equilibrium by carefully choosing its hyperparameters.
• Each of our theoretical predictions is verified with numerical experiments on two-player zero-sum (non-linear) smooth games." }, { "heading": "2 PRELIMINARIES", "text": "We briefly review the objective of smooth game optimization, first-order game dynamics, and the Lookahead optimizer. Finally, we discuss previous work on game optimization. We summarize the notation used throughout this paper in Table A.1." }, { "heading": "2.1 SMOOTH GAMES", "text": "Following Balduzzi et al. (2018), a smooth game between players $i = 1, \ldots, n$ can be defined as a set of smooth scalar functions $\{f_i\}_{i=1}^n$ with $f_i : \mathbb{R}^d \rightarrow \mathbb{R}$ such that $d = \sum_{i=1}^n d_i$. Each $f_i$ represents the cost of player $i$'s strategy $x_i \in \mathbb{R}^{d_i}$ with respect to the other players' strategies $x_{-i}$. The goal of game optimization is to find a (local) Nash equilibrium of the game (Nash, 1951), which is a strategy profile where no player has a unilateral incentive to change its own strategy. Definition 1 (Nash equilibrium). Let $\{f_i\}_{i=1}^n$ be a smooth game with strategy spaces $\{\mathbb{R}^{d_i}\}_{i=1}^n$ such that $d = \sum_{i=1}^n d_i$. Then $x^\ast \in \mathbb{R}^d$ is a local Nash equilibrium of the game if, for each $i = 1, \ldots, n$, there is a neighborhood $U_i$ of $x^\ast_i$ such that $f_i(x_i, x^\ast_{-i}) \geq f_i(x^\ast)$ holds for any $x_i \in U_i$. Such $x^\ast$ is said to be a global Nash equilibrium of the game when $U_i = \mathbb{R}^{d_i}$ for each $i = 1, \ldots, n$.
A straightforward computational approach to finding a (local) Nash equilibrium of a smooth game is to carefully design a gradient-based strategy update rule for each player. Such update rules that define iterative plays between players are referred to as a dynamics of the game. Definition 2 (Dynamics of a game). A dynamics of a smooth game $\{f_i\}_{i=1}^n$ is a differentiable operator $F : \mathbb{R}^d \rightarrow \mathbb{R}^d$ that describes players' iterative strategy updates as $x^{(t+1)} = F(x^{(t)})$.
One might expect that a simple myopic game dynamics, such as gradient descent, would suffice to find a (local) Nash equilibrium of a game, as in traditional minimization problems. 
However, in general, gradient descent optimization of smooth games often fails to converge and instead oscillates around an equilibrium of the game (Daskalakis et al., 2018; Gidel et al., 2019b;a; Letcher et al., 2019). Such non-convergent behavior of game dynamics is mainly due to the (non-cooperative) interaction between multiple cost functions, and is considered a key challenge in game optimization (Mescheder et al., 2017; 2018; Mazumdar et al., 2019; Hsieh et al., 2020)." }, { "heading": "2.2 FIRST-ORDER METHODS FOR SMOOTH GAME OPTIMIZATION", "text": "We introduce well-known first-order methods for smooth game optimization. To ease the notation, we use $\nabla_x f(\cdot)$ to denote the concatenated partial derivatives $(\nabla_{x_1} f_1(\cdot), \ldots, \nabla_{x_n} f_n(\cdot))$ of a smooth game $\{f_i\}_{i=1}^n$, where $\nabla_{x_i} f_i(\cdot)$ is the partial derivative of player $i$'s cost function with respect to its own strategy.\nGradient Descent (GD) minimizes the cost function of each player using gradient descent. Its simultaneous dynamics $F^{\mathrm{GD}}_{\mathrm{Sim}}$ of a smooth game $\{f_i\}_{i=1}^n$ with a learning rate $\eta > 0$ is given by\n$x^{(t+1)} = F^{\mathrm{GD}}_{\mathrm{Sim}}(x^{(t)}) := x^{(t)} - \eta \nabla_x f(x^{(t)})$. (1)\nOn the other hand, its alternating dynamics $F^{\mathrm{GD}}_{\mathrm{Alt}}$ is described by\n$x^{(t+1)} = F^{\mathrm{GD}}_{\mathrm{Alt}}(x^{(t)}) := F_1 \circ \cdots \circ F_n(x^{(t)})$, where (2)\n$F_i(x) := (\ldots, x_{i-1},\, x_i - \eta \nabla_{x_i} f_i(x),\, x_{i+1}, \ldots)$. (3)\nProximal Point (PP) (Martinet, 1970) computes an update by solving a proximal problem at each iteration. Its simultaneous dynamics $F^{\mathrm{PP}}_{\mathrm{Sim}}$ of a smooth game $\{f_i\}_{i=1}^n$ with a learning rate $\eta > 0$ is\n$x^{(t+1)} = F^{\mathrm{PP}}_{\mathrm{Sim}}(x^{(t)}) := x^{(t)} - \eta \nabla_x f(x^{(t+1)})$. (4)\nNote that this update rule is implicit in the sense that $x^{(t+1)}$ appears on both sides of the equation; hence it requires solving the proximal subproblem for $x^{(t+1)}$ at each iteration.\nExtra Gradient (EG) (Korpelevich, 1976) computes an update by using an extrapolated gradient. Its simultaneous dynamics $F^{\mathrm{EG}}_{\mathrm{Sim}}$ of a smooth game $\{f_i\}_{i=1}^n$ with a learning rate $\eta > 0$ is\n$x^{(t+1)} = F^{\mathrm{EG}}_{\mathrm{Sim}}(x^{(t)}) := x^{(t)} - \eta \nabla_x f(x^{(t+1/2)})$, where (5)\n$x^{(t+1/2)} := x^{(t)} - \eta \nabla_x f(x^{(t)})$. (6)" }, { "heading": "2.3 LOOKAHEAD OPTIMIZER", "text": "Lookahead (Zhang et al., 2019) is a recently proposed optimizer that wraps around a base optimizer and takes a backward synchronization step after every $k$ forward steps. Given a dynamics $F_A$ induced by a base optimization method $A$, the Lookahead dynamics $G^{\mathrm{LA}\text{-}A}$ with a synchronization period $k \in \mathbb{N}$ and a rate $\alpha \in (0, 1)$ is\n$x^{(t+1)} = G^{\mathrm{LA}\text{-}A}(x^{(t)}) := (1 - \alpha) x^{(t)} + \alpha F_A^k(x^{(t)})$. (7)" }, { "heading": "2.4 RELATED WORK", "text": "The convergence analysis of first-order smooth game dynamics dates back several decades and was first established in the context of saddle-point problems (Rockafellar, 1976; Korpelevich, 1976; Tseng, 1995), which are a special case of zero-sum games. For example, Rockafellar (1976) showed the linear convergence of PP in bilinear and strongly-convex-strongly-concave (SCSC) saddle-point problems. Tseng (1995) and Facchinei & Pang (2003) proved the linear convergence of EG in the same problems, and Nemirovski (2004) did so in the convex-concave problem over compact sets.\nAs many learning problems have been formulated as games in recent years (Goodfellow et al., 2014; Madry et al., 2018; Silver et al., 2018; Fu et al., 2018; Vinyals et al., 2019), game optimization has regained considerable attention from the research community. Optimistic gradient descent (OGD) (Popov, 1980), which can be seen as an efficient approximation of EG, was recently rediscovered in the context of GAN training (Daskalakis et al., 2018). 
Recent work of Liang & Stokes (2019) and Gidel et al. (2019a) proved linear convergence of OGD in bilinear and SCSC games. Mokhtari et al. (2020) established a unifying theoretical framework for analyzing PP, EG and OGD dynamics. Zhang & Yu (2020) presented exact and optimal conditions for PP, EG and OGD dynamics to converge in bilinear games. While there has been growing interest in incorporating second-order information into game dynamics (Mescheder et al., 2017; Balduzzi et al., 2018; Mazumdar et al., 2019; Schäfer & Anandkumar, 2019; Loizou et al., 2020) to remedy non-convergent behaviors, first-order optimization still dominates in practice (Brock et al., 2019; Donahue & Simonyan, 2019) due to the computational and memory cost of second-order methods.\nLately, Chavdarova et al. (2020) reported that the recently developed Lookahead optimizer (Zhang et al., 2019) significantly improves the empirical performance of GANs and reduces the rotational force of bilinear game dynamics. However, this study relied on purely empirical observations and lacked a theoretical understanding of Lookahead optimization of smooth games. Although Wang et al. (2020) proved that the Lookahead optimizer globally converges to a stationary point in minimization problems, its convergence in smooth games still remains an open question." }, { "heading": "3 SPECTRAL CONTRACTION EFFECT OF LOOKAHEAD IN BILINEAR GAMES", "text": "In this section, we show that Lookahead can either stabilize or accelerate the convergence of its base dynamics by reducing the spectral radius of its underlying Jacobian matrix. We highlight this spectral contraction effect by analyzing the convergence of Lookahead dynamics in a simple bilinear game (Section 3.1), and extend the results to general bilinear games (Section 3.2)." }, { "heading": "3.1 LOOKAHEAD DYNAMICS OF A SIMPLE BILINEAR GAME", "text": "We begin with a simple exemplar bilinear game that has a unique Nash equilibrium $(0, 0)$:\n$\min_{x_1 \in \mathbb{R}} \max_{x_2 \in \mathbb{R}} x_1 \cdot x_2$. (8)\nThis game has been extensively studied as a representative toy example of game optimization by Gidel et al. (2019a) due to its oscillating dynamics. The following proposition demonstrates the stabilization effect of Lookahead on Equation 8.\nProposition 1. The simultaneous GD dynamics $F^{\mathrm{GD}}_{\mathrm{Sim}}$ with a learning rate $\eta > 0$ diverges from the Nash equilibrium of Equation 8. However, its Lookahead dynamics $G^{\mathrm{LA\text{-}GD}}_{\mathrm{Sim}}$ with a synchronization period $k \in \mathbb{N}$ and a rate $\alpha \in (0, 1)$ globally converges to the Nash equilibrium if $\Re((1 + i\eta)^k) < 1$ and $\alpha$ is small enough.\nProposition 1 shows that the Lookahead optimizer can stabilize the divergent dynamics of Equation 8. However, such a stabilization effect raises a natural question: is there any advantage to using Lookahead when its base dynamics is already stable? Proposition 2 analyzes the well-known convergent PP dynamics of Equation 8 and presents an affirmative answer. Specifically, it shows that the Lookahead dynamics (i) preserves the convergence of its base dynamics, and (ii) can further accelerate the convergence with proper hyperparameter choices.\nProposition 2. The simultaneous PP Lookahead dynamics $G^{\mathrm{LA\text{-}PP}}_{\mathrm{Sim}}$ with a learning rate $\eta \in (0, 1)$, a synchronization period $k \in \mathbb{N}$ and a rate $\alpha \in (0, 1)$ globally converges to the Nash equilibrium of Equation 8. Furthermore, the rate of convergence is improved upon its base dynamics $F^{\mathrm{PP}}_{\mathrm{Sim}}$ if $\Re((1 + i\eta)^k) < (1 + \eta^2)^k$ and $\alpha$ is large enough.\nWe provide a geometric interpretation of the Lookahead procedure in Figure 1. 
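As a concrete illustration of Proposition 1, the following is a minimal numerical sketch (ours, not part of the original experiments; all hyperparameter values are illustrative, chosen so that $\Re((1+i\eta)^k) < 1$ holds) that runs plain simultaneous GD and its Lookahead wrapper (Equation 7) on Equation 8:

import numpy as np

def gd_sim_step(x, eta):
    # Simultaneous GD on min_{x1} max_{x2} x1 * x2:
    # player 1 descends in x1, player 2 ascends in x2.
    return np.array([x[0] - eta * x[1], x[1] + eta * x[0]])

def lookahead_step(x, k, alpha, eta):
    # One Lookahead update (Equation 7): k forward steps of the base
    # dynamics, then a backward interpolation toward the starting point.
    y = x.copy()
    for _ in range(k):
        y = gd_sim_step(y, eta)
    return (1 - alpha) * x + alpha * y

eta, k, alpha = 0.1, 10, 0.5   # illustrative; Re((1 + 0.1i)**10) ~ 0.57 < 1
x_gd = np.array([1.0, 1.0])
x_la = np.array([1.0, 1.0])
for _ in range(100):
    x_gd = gd_sim_step(x_gd, eta)
    x_la = lookahead_step(x_la, k, alpha, eta)
print(np.linalg.norm(x_gd), np.linalg.norm(x_la))

With these values the plain GD iterate spirals away from the origin while the Lookahead iterate contracts toward the Nash equilibrium $(0, 0)$, matching Proposition 1.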
Intuitively, Lookahead optimizer either stabilizes or accelerates its base dynamics by pulling the eigenvalues of the dynamics’ Jacobian matrix into a circle with a small radius. Specifically, k forward steps of a Lookahead procedure first rotate the eigenvalues, and a synchronization backward step pulls them into a circle with a radius smaller than their maximal modulus. This results in a reduction of the spectral radius of the dynamics’ Jacobian matrix, which is known to be crucial for stability (Slotine & Li, 1991). Such spectral contraction effect of Lookahead dynamics is captured by the following lemma.\nLemma 3 (Spectral contraction effect of Lookahead). Let k ∈ N, α ∈ (0, 1) and define a function f : Rm×m → Rm×m by f(X) = (1 − α)I + αXk. Define θ(λ) def= Arg(λk − 1) and φ(λ) def= arcsin sin(θ(λ))\nρ(X)k . Then, the following statements hold:\n• For ρ(X) = 1, ρ(f(X)) < 1 if λk 6= 1, ∀λ ∈ λmax(X).\n• For ρ(X) > 1, ρ(f(X)) < 1 if <(λk) < 1, α < 2 cos(π−θ(λ))|λk−1| , ∀λ ∈ λ≥1(X).\n• For ρ(X) < 1, ρ(f(X)) < ρ(X)k if <(λk) < ρ(X)2k, α > 1− 2ρ(X) k cos(π−φ(λi)) |λki−1|\n, ∀λ ∈ λmax(X), ∀λi ∈ λ(X).\nIn short, Lemma 3 suggests that Lookahead can reduce the spectral radius of a matrix by choosing a proper α and a k such that the entire radius-supporting eigenvalues (e.g., λ≥1(X), λmax(X)) are rotated to left enough. However, such k may not exist, for example, especially when such eigenvalues are not tightly clustered together. To help understanding when Lookahead can actually reduce the spectral radius, we present Lemma 4 as a sufficient condition for a set of eigenvalues to admit the existence of k that rotates them to the left half-plane. Lemma 4 (Left-rotatable eigenvalues). Let X, J ∈ Rm×m be such that X = I− ηJ for some η > 0 and let S ⊆ λ(X). Assume that each element of S has its conjugate pair in S. Then we have <(λk) < 0 for each λ ∈ S if k ∈ ( π\n2θmin(S) , 3π2θmax(S)\n) and every element of S has non-zero\nimaginary part. Existence of such k ∈ N is guaranteed for a small enough η when =max(S)=min(S) < 3.\nNote that the Jacobian matrix of most well-known gradient-based dynamics can be written in the form of I− ηJ, where η > 0 is a learning rate and J is the underlying Jacobian matrix of the game. Intuitively, Lemma 4 suggests that for a small enough learning rate, any subset of the eigenvalues of a dynamics with imaginary conditioning less than 3 admits the existence of k that rotates them left enough. For such k, Lookahead can reduce the spectral radius of the dynamics by choosing a proper α, as stated in Lemma 3. This joint usage of Lemma 3-4 plays a central role for the proofs of our main results in Section 3.2 and Section 4. To summarize, Lemma 3-4 together highlight when Lookahead can actually improve the game dynamics and show that the imaginary conditioning of the radius-supporting eigenvalues is crucial for determining whether the dynamics is improvable." }, { "heading": "3.2 LOOKAHEAD DYNAMICS OF GENERAL BILINEAR GAMES", "text": "In this section, we extend the analysis of Lookahead dynamics to a general bilinear game\nmin x1∈Rm max x2∈Rn\nxT1 Ax2 − bT1 x1 − bT2 x2 (9)\nfor some A ∈ Rm×n and b1 ∈ Rm,b2 ∈ Rn such that there exists x∗1 ∈ Rm, x∗2 ∈ Rn with AT x∗1 = b2 and Ax∗2 = b1. The existence of x∗1, x∗2 allows us to rewrite the game as\nmin x1∈Rm max x2∈Rn (x1 − x∗1)TU [ Σr 0 0 0 ] VT (x2 − x∗2), (10)\nwhere U,Σr,V is the SVD of A with r def = rank(A). 
Therefore, we can analyze the dynamics of Equation 9 by inspecting a rather simpler problem\nmin x1∈Rr max x2∈Rr\nxT1 Σrx2, (11)\nas they are equivalent up to some rotations and translations. This reduction is a well-known technique and has been used by Gidel et al. (2019b;a) and Zhang & Yu (2020) for simplifying the analysis of Equation 9.\nNow we present sufficient conditions for Lookahead hyperparameters under which convergence of each first-order base dynamics, namely GDAlt, GDSim, PPSim and EGSim, is either stabilized or accelerated. The following first two theorems show that Lookahead can provably stabilize nonconvergent GD dynamics of general bilinear games.\nTheorem 5 (Convergence of GLA-GDAlt ). Lookahead dynamics GLA-GDAlt with a learning rate η ∈( 0, 2σmax ) , a synchronization period k ∈ N and a rate α ∈ (0, 1) converges to a Nash equilibrium of Equation 9 if k arccos(1− η 2σ2i 2 ) mod 2π 6= 0 for any σi ∈ σ(A). Theorem 6 (Convergence ofGLA-GDSim ). Lookahead dynamicsGLA-GDSim with a learning rate η > 0, a synchronization period k ∈ N and a rate α ∈ (0, 1) converges to a Nash equilibrium of Equation 9 if k ∈ ( π\n2 arctan ησmin , 3π2 arctan ησmax\n) and α is small enough.\nRoughly, Theorem 5 suggests that almost any configurations of Lookahead can make GDAlt convergent to a Nash equilibrium of the bilinear games. On the other hand, the existence of k that satisfies the condition of Theorem 6 is guaranteed for a small enough η if σmaxσmin < 3 holds. This highlights a limitation of the convergence guarantee for GDSim that it holds only for well-conditioned games.\nThe next two theorems show that Lookahead preserves the convergence of PPSim and EGSim in the bilinear games, and can further accelerate their convergence under proper hyperparameter choices. Theorem 7 (Acceleration of GLA-PPSim ). Lookahead dynamics GLA-PPSim with a learning rate η > 0, a synchronization period k ∈ N and a rate α ∈ (0, 1) converges to a Nash equilibrium of Equation 9. Furthermore, the rate of convergence is accelerated upon its base dynamics F PPSim if\nk ∈ (\nπ 2 arctan ησmin , 3π2 arctan ησmin\n) and α is large enough.\nTheorem 8 (Acceleration of GLA-EGSim ). Lookahead dynamics GLA-EGSim with a learning rate η ∈( 0, 1σmax ) , a synchronization period k ∈ N and a rate α ∈ (0, 1) converges to a Nash equilibrium of Equation 9. Furthermore, the rate of convergence is accelerated upon its base dynamics F EGSim if\nη ∈ (\n0, 12σmax\n) , k ∈ ( π\n2 arctan ησmin\n1−ησmin\n, 3π 2 arctan\nησmin 1−ησmin\n) and α is large enough.\nNote that the existence of k that satisfies the acceleration conditions of Theorem 7-8 is always guaranteed for a small enough η. This contrasts Theorem 7-8 with Theorem 6, which only applies to well-conditioned games, and suggests that they can be applied for a wide range of bilinear games, including the ill-conditioned ones." }, { "heading": "4 THE LIMIT POINTS OF LOOKAHEAD DYNAMICS", "text": "In this section, we characterize the limit points of Lookahead dynamics and reveal the connections between their stability and the hyperparameters of Lookahead. We start by defining a few stability concepts which are standard in the dynamical system theory (Slotine & Li, 1991). Definition 3 (Lyapunov stability). Let F be a smooth vector field on Rn. Then x ∈ Rn is Lyapunov stable in F if for any > 0, there exists δ > 0 such that for any y ∈ Rn, ‖x− y‖ < δ implies ‖F t(x)− F t(y)‖ < for all t ∈ N. Definition 4 (Asymptotic stability). 
A Lyapunov stable equilibrium x∗ ∈ Rn of a smooth vector field F is said to be asymptotically stable if there exists δ > 0 such that ‖x − x∗‖ < δ implies lim t→∞ ‖F t(x)− x∗‖ = 0. Such x∗ is said to be locally asymptotically stable if δ <∞.\nWe show that any Lyapunov stable equilibrium (SE) of a dynamics is a locally asymptotically stable equilibrium (LASE) of a Lookahead dynamics. Furthermore, we show that Lookahead can either stabilize or accelerate the local convergence to an equilibrium when the radius-supporting eigenvalues of the equilibrium satisfy certain assumptions on their imaginary parts. Theorem 9 (SEA ⊆ LASELA-A). Let x∗ ∈ Rn be a Lyapunov stable equilibrium of a dynamics F . Then, x∗ is a LASE of its Lookahead dynamics G with a synchronization period k ∈ N and a rate α ∈ (0, 1) if λki 6= 1 for each λi ∈ λ(∇xF (x∗)). Theorem 10 (One-point local stabilization). Let x∗ ∈ Rn be an equilibrium of a dynamics F with ρ(∇xF (x∗)) > 1. Assume that every element of λ≥1(∇xF (x∗)) has non-zero imaginary part. Then, x∗ is a LASE of its Lookahead dynamicsGwith a synchronization period k ∈ N and a rate α ∈ (0, 1) if k ∈ ( π\n2θmin(λ≥1(∇xF (x∗))) , 3π2θmax(λ≥1(∇xF (x∗)))\n) and α is small enough.\nTheorem 11 (One-point local acceleration). Let x∗ ∈ Rn be an equilibrium of a dynamics F with ρ(∇xF (x∗)) < 1. Assume that every element of λmax(∇xF (x∗)) has non-zero imaginary part. Then, the local convergence rate to x∗ in a Lookahead dynamics G with a synchronization period k ∈ N and a rate α ∈ (0, 1) is accelerated upon F if k ∈ ( π\n2θmin(λmax(∇xF (x∗))) , 3π2θmax(λmax(∇xF (x∗)))\n) and\nα is large enough.\nIntuitively, Theorem 9 shows that Lookahead preserves stability of its base dynamics, and Theorem 10-11 suggest that Lookahead can either stabilize or accelerate the local convergence to an equilibrium. Note that the stabilization and acceleration can be guaranteed when λ≥1(∇xF (x∗)) and λmax(∇xF (x∗)) contain no real eigenvalues and have imaginary conditioning less than 3; otherwise, k that satisfies the conditions of Theorem 10-11 may not exist (see Appendix E.10-E.11).\nAn additional, but important consequence of Theorem 10 is that the inclusion relationship implied by Theorem 9 is strict in general. In the context of Nash equilibrium (NE) computation, such stabilization effect of Lookahead can be helpful when unstable NE are stabilized (e.g., bilinear games). However, the stabilization effect also carries a possibility for introducing non-Nash LASE, which is bad for the NE computation (Mazumdar et al., 2019). Hence, the overall impact of Theorem 10 on the computation of NE depends on the global structure of the game and base dynamics.\nNote that Theorem 10-11 require radius-supporting eigenvalues to have non-zero imaginary parts and therefore does not apply to fully-cooperative (FC) games (i.e., minimization problems), which exhibit real eigenvalues only. To give an understanding of Lookahead dynamics in FC games, we present Proposition 12-13, together which imply that the iterates of Lookahead dynamics almost surely avoids unstable equilibria of its base dynamics in FC games (e.g., avoids local maxima). Proposition 12 (Avoids unstable points). Let F be a L-Lipschitz smooth dynamics for some L > 0 and let G be its Lookahead dynamics with a synchronization period k ∈ N and a rate α ∈(\n0, 1 1+Lk\n) . 
Then the random-initialized iterates of G almost surely avoids its equilibrium x∗ with\nρ(∇xG(x∗)) > 1 if ρ(∇xG(x0)) 6= 1 holds for any equilibrium x0 of G.\nProposition 13 (Preserves unstable points in FC games). Let x∗ ∈ Rn be an equilibrium of a dynamics F with ρ(∇xF (x∗)) > 1, and assume that ∇xF (x∗) is a symmetric matrix with positive eigenvalues. Then, ρ(∇xG(x∗)) > 1 holds for a Lookahead dynamics G with a synchronization period k ∈ N and a rate α ∈ (0, 1)." }, { "heading": "5 EXPERIMENTS", "text": "Bilinear game. We test our theoretical predictions in Section 3.2 (Theorem 5–8) on a bilinear game\nmin x1∈Rn max x2∈Rn\nxT1 Ax2 (12)\nwith A def= In + · En, where each element of En ∈ Mn×n is sampled from N (0, 1). We report our results using n = 10 and = 0.05, which gives a sample of A with σmax = 1.195 and σmin = 0.852, hence σmaxσmin = 1.401 < 3. For a fixed η = 0.1, we use Theorem 5–8 to derive a range of k\nand an approximate scale of α that guarantee stabilization and acceleration of convergence to a Nash equilibrium (NE) of Equation 12. We provide the derivations of theoretically recommended values and actual configurations used for the experiment in Appendix D. Figure 2 (a) shows that the hyperparameters predicted by our theorems, denoted by LA-GDAlt/Sim+ and LA-EGSim+, actually stabilize and accelerates the convergence to a NE. We also test the hyperparameters that are chosen against our theorems and denote as LA-GDAlt/Sim− and LA-EGSim−. Specifically, we choose a k smaller than the lower bound predicted by our theorems and use large α for unstable base dynamics and small α for stable base dynamics. The result in Figure 2 (a) suggests that Lookahead can fail to stabilize, or even worse, slow down the convergence when hyperparameters are configured badly.\nNonlinear game. We verify our theoretical predictions in Section 4 (Theorem 10 and 11) on the non-linear game proposed by Hsieh et al. (2020):\nmin x1∈R max x2∈R\nx1 · x2 + φ(x2), (13)\nwhere φ(x) def= 12x 2 − 14x 4 with > 0. This game has an unstable critical point (0, 0) surrounded by an attractive internally chain-transitive (ICT) set, which may contain arbitrarily long trajectories. Hsieh et al. (2020) demonstrate that most first-order methods fail to converge in this game due to the instability of the equilibrium and the existence of the ICT set. For a fixed = 0.01 and η = 0.05, we use Theorem 10 and 11 to derive a range of k and an approximate scale of α that guarantee local stabilization and acceleration to the equilibrium of Equation 13. We provide the detailed derivations of the theoretically recommended values and the configurations in Appendix D. Figure 2 (b) and Figure 3 (a) shows that the hyperparameters predicted by our theorems, denoted by LA-GDAlt/Sim+ and LA-EGSim+, actually stabilize and accelerates the convergence to the equilibrium. In contrast, hyperparameters chosen against our theorems, denoted by LA-GDAlt/Sim− and LA-EGSim− in Figure 2 (b) and Figure 3 (b), neither success to stabilize nor accelerate the convergence to the equilibrium." }, { "heading": "6 CONCLUSION", "text": "In this work, we derived the theoretic results for convergence guarantee and acceleration of Lookahead dynamics in smooth games for the first time. Specifically, we derived sufficient conditions for hyperparameters of Lookahead optimizer under which the convergence of bilinear games is either stabilized or accelerated. 
Furthermore, we proved that the Lookahead optimizer preserves locally asymptotically stable equilibria of smooth games. Finally, we showed that Lookahead can either stabilize or accelerate the local convergence to a given equilibrium under proper assumptions.\nOur results point to several future research directions. Lemma 4 suggests that the imaginary conditioning of the radius-supporting eigenvalues is crucial for the performance gain in Lookahead. Therefore, developing an optimizer that exhibits a small imaginary conditioning could improve the convergence of its Lookahead dynamics. Another interesting application of our theoretic results would be designing an adaptive mechanism for the Lookahead hyperparameters by applying our theorems on local bilinear approximation (Schäfer & Anandkumar, 2019) of the game for each step." }, { "heading": "A NOTATION", "text": "" }, { "heading": "B USEFUL FACTS", "text": "" }, { "heading": "B.1 STANDARD RESULTS ON CONVERGENCE", "text": "Lemma 14 (Bertsekas (1999)). Let F : Rm → Rm be continuously differentiable, and let x∗ ∈ Rm be such that F (x∗) = x∗. Assume that ρ(∇xF (x∗)) < 1. Then, there is an open neighborhood Ux∗ of x∗ such that for any x ∈ Ux∗ , ‖F t(x)− x∗‖2 ∈ O(ρ(∇xF (x∗))t) for t→∞.\nLemma 15 (Gidel et al. (2019b)). Let M ∈ Rm×m and u(t) be a sequence of iterates such that, u(t+1) = Mu(t), then we have three cases of interest for the spectral radius ρ(M):\n• If ρ(M) < 1 and M is diagonalizable 1, then ∥∥u(t)∥∥\n2 ∈ O(ρ(M)t ∥∥u(0)∥∥ 2 ).\n• If ρ(M) > 1, then there exists u(0) such that ∥∥u(t)∥∥\n2 ∈ Ω(ρ(M)t ∥∥u(0)∥∥ 2 ).\n• If |λi| = 1,∀λi ∈ λ(M), and M is diagonalizable, then ∥∥u(t)∥∥\n2 ∈ Θ( ∥∥u(0)∥∥ 2 )." }, { "heading": "B.2 CHARACTERISTIC EQUATIONS OF FIRST-ORDER DYNAMICS IN BILINEAR GAMES", "text": "Latest work of Zhang & Yu (2020) provides the exact and optimal conditions for popular first-order methods to converge in zero-sum bilinear games, if possible. Besides from the exact conditions and the choice of optimal hyperparameters, they also derive the characteristic equation of each firstorder dynamics in the zero-sum bilinear games. Since our proofs of theorems in Section 3.2 heavily\n1Actually, M does not have to be diagonalizable; see Theorem 5.4 and Theorem 5.D4 in Chen (1995).\nrely on these characteristic equations, we restate somewhat simplified version of the equations for Equation 9 using our notations.\nGDAlt : (λi − 1)2 + η2σ2i λi = 0. (14) GDSim : (λi − 1)2 + η2σ2i = 0. (15) PPSim : (1/λi − 1)2 + η2σ2i = 0. (16) EGAlt : (λi − 1)2 + (η2 + 2η)σ2i (λi − 1) + (η2σ2i + η2σ4i ) = 0. (17) EGSim : (λi − 1)2 + 2ησ2i (λi − 1) + η2σ2i + η2σ4i = 0. (18)\nWe denote the singular values of matrix A in Equation 9 by σi. The eigenvalues of each dynamics’ Jacobian matrix are denoted by λi. Note that Zhang & Yu (2020) also derive characteristic equations of memory-augmented first-order methods, such as OGD (Popov, 1980) and the momentum method, which we do not cover in this paper." }, { "heading": "C OMITTED RESULTS", "text": "Proposition 16. Alternating GD dynamics FGDAlt with a learning rate η ∈ (0, 2) fails to converge and oscillates around the Nash equilibrium of the game in Equation 8. However, its Lookahead dynamics GLA-GDAlt with a synchronization period k ∈ N and a rate α ∈ (0, 1) globally converges to\nthe Nash equilibrium if ( 1− η 2 2 + i √ 4−η2 2 )k 6= 1.\nProof. One can easily check from Equation 2 that the dynamics FGDAlt can be written as\nFGDAlt(x (t) 1 , x (t) 2 ) = [ 1 −η η 1− η2 ][ x (t) 1\nx (t) 2\n] . 
(19)\nDefining M def= [ 1 −η η 1− η2 ] , the Lookahead dynamics GLA-GDAlt can be written as\nGLA-GDAlt(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k)\n[ x (t) 1\nx (t) 2\n] . (20)\nIt follows that the eigenvalues of ∇xGLA-GDAlt can be written as 1− α+ αλk± with λ± def = 1− η\n2\n2 ± i √\n4−η2 2 ∈ λ(M) for any η ∈ (0, 2). However, 1−α+αλ k ± is an interpolation between two distinct points on S1 since |λ±| = 1 and λk± 6= 1 , implying |1 − α + αλk±| < 1. Therefore, we conclude from Lemma 14 that the iterates of GLA-GDAlt converge to the Nash equilibrium (0, 0) of the game with convergence rate O(|1 − α + αλk±|t/k), assuming the amortization of its computation over k forward steps. The proof for oscillation of FGDAlt follows from Lemma 15 and can be found in Gidel et al. (2019a).\nProposition 17. Simultaneous EG Lookahead dynamics GLA-EGSim with a learning rate η ∈ (0, 1), a synchronization period k ∈ N and a rate α ∈ (0, 1) globally converges to the Nash equilibrium of Equation 8. Furthermore, the rate of convergence is improved upon its base dynamics F EGSim if <((1− η2 + iη)k) < (1− η2 + η4)k and α is large enough.\nProof. Using simple algebra on Equation 5, the dynamics F EGSim can be written as\nF EGSim(x (t) 1 , x (t) 2 ) = [ 1− η2 −η η 1− η2 ] [ x (t) 1\nx (t) 2\n] . (21)\nDefining M def= 11+η [ 1− η2 −η η 1− η2 ] , its Lookahead dynamics GLA-EGSim with a synchronization\nperiod k ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-EGSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k)\n[ x (t) 1\nx (t) 2\n] . (22)\nIt follows that the eigenvalues of ∇xGLA-EGSim are 1 − α + αλk± with λ± def = 1 − η2 ± iη ∈ λ(M). However, 1−α+αλk± is an interpolation between two distinct points on/inside S1 since |λ±|k < 1 for any η ∈ (0, 1). It follows that |1−α+αλk±| < 1, from which we conclude from Lemma 14 that the iterates of GLA-EGSim converge to the Nash equilibrium (0, 0) of the game with convergence rate O(|1− α+ αλk±|t/k), assuming the amortization of its computation over k forward steps.\nNow we show that the convergence is accelerated upon its base dynamics F EGSim if <((1 − η2 + iη)k) < (1− η2 + η4)2k and α is large enough. Figure 1 (c) intuitively shows that the line segment between (1, 0) and λk± contains a line segment inside S|λ±|k when k is such that <(λk±) < |λ2k± |. Therefore, for a large enough α, the interpolation 1− α+ αλk± lies inside S|λ±|k . This implies that the convergence rate O(|1− α+ αλk±|t/k) of GLA-EGSim is accelerated upon the rate O(|λ±|t) of its base dynamics.\nProposition 18 (Equilibrium of Lookahead dynamics). Let F be a dynamics and G be its associated Lookahead dynamics with a synchronization period k ∈ N. Then any equilibrium of F is an equilibrium of G and any equilibrium of G is a periodic point of F .\nProof. Let k ∈ N and α ∈ (0, 1) be the synchronization period and synchronization rate of G, respectively. It is trivial to see that G(x∗) = ((1 − α)id + αF k)(x∗) = (1 − α)x∗ + αx∗ = x∗ if F (x∗) = x∗. Conversely, one can easily check that G(x∗) = (1 − α)x∗ + αF k(x∗) = x∗ implies F k(x∗) = x∗." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "We report the actual hyperparameters used for the experiments of Section 5 in TableD.2 and D.3. Furthermore, we also provide the detailed derivations of the theoretically recommended range of synchronization period k ∈ N.\nD.1 DERIVATION OF THEORETICALLY RECOMMENDED RANGE OF k IN EQUATION 12\nWe plug in σ(A) = {1.195, 1.163, 1.094, 1.083, 1.018, 0.999, 0.969, 0.888, 0.879, 0.852} with σmax = 1.195 and σmin = 0.852 to Theorem 5-8. 
Then we have\n• LA-GDAlt: {k ∈ N : k arccos(1− 0.1 2σ2i 2 ) mod π 6= 0,∀σi}, • LA-GDSim: ( π 2 arctan 0.08 , 3π 2 arctan 0.12 ) = (18.47, 39.62),\n• LA-EGSim: ( π2 arctan 0.09 , 3π 2 arctan 0.13 ) = (16.9, 34.93),\nwhich give ranges for k as in TableD.2.\nD.2 DERIVATION OF THEORETICALLY RECOMMENDED RANGE OF k IN EQUATION 13\nLA-GDAlt From Equation 2, the Jacobian of dynamics F LA-GDAlt of Equation 13 can be derived as ∇xFGDAlt(x1, x2) = [ 1 −η η 1− η2 + η (1− 3x22) ] [ x1 x2 ] , (23)\nand it is trivial to see that it has an equilibrium at (0, 0). By plugging in = 0.01 and η = 0.05, we obtain\n∇xFGDAlt(0, 0) = [ 1 −0.05 0.05 0.998 ] (24)\nwith eigenvalues λ± def = 0.99 ± 0.05i. Note that |λ±| = 1.0003 > 1 and ∇xFGDAlt(0, 0) has the imaginary conditioning of 1, which implies that the origin is an unstable equilibrium of GDAlt that can be locally stabilized by a Lookahead dynamics. By plugging in the eigenvalues and θmin(∇xFGDAlt(0, 0)) = θmax(∇xFGDAlt(0, 0)) = arctan 0.050.99 = 0.0504 to Theorem 10, we obtain the theoretically recommended range of k as (31.16, 93.49).\nLA-GDSim From Equation 1, the Jacobian of dynamics F LA-GDSim of Equation 13 can be derived as\n∇xFGDSim(x1, x2) = [ 1 −η η 1 + η (1− 3x22) ] [ x1 x2 ] , (25)\nand it is trivial to see that it has an equilibrium at (0, 0). By plugging in = 0.01 and η = 0.05, we obtain\n∇xFGDSim(0, 0) = [ 1 −0.05 0.05 1.005 ] (26)\nwith eigenvalues λ± def = 1.0025 ± 0.0499i. Note that |λ±| = 1.0037 > 1 and ∇xFGDSim(0, 0) has the imaginary conditioning of 1, which implies that the origin is an unstable equilibrium of GDAlt that can be locally stabilized by a Lookahead dynamics. By plugging in the eigenvalues and θmin(∇xFGDSim(0, 0)) = θmax(∇xFGDSim(0, 0)) = arctan 0.04991.0025 = 0.0497 to Theorem 10, we obtain the theoretically recommended range of k as (31.6, 94.81).\nLA-EGSim From Equation 5, the dynamics F LA-EGSim of Equation 13 can be derived as[ x′1 x′2 ] = F EGSim(x1, x2) = [ x1 − ηx̃2 x2 + η(x̃1 + (x̃2 − x̃32)) ] , where (27)[\nx̃1 x̃2\n] = [ x1 − ηx2\nx2 + η(x1 + (x2 − x32))\n] . (28)\nBy computing the derivatives with x1 = 0, x2 = 0 and = 0.01, η = 0.05, we obtain ∇xF EGSim(0, 0) = [ 0.9975 −0.05 0.05 0.9005 ] (29)\nwith eigenvalues λ± = 0.949 ± 0.0122i. Note that |λ±| = 0.949 < 1 and ∇xF EGSim(0, 0) has the imaginary conditioning of 1, which implies that the origin is an stable equilibrium of EGSim whose local convergence can be accelerated by a Lookahead dynamics. By plugging in the eigenvalues and θmin(∇xF EGSim(0, 0)) = θmax(∇xF EGSim(0, 0)) = arctan 0.01220.949 = 0.0129 to Theorem 11, we obtain the theoretically recommended range of k as (121.76, 365.30)." }, { "heading": "E PROOFS", "text": "" }, { "heading": "E.1 PROOF OF PROPOSITION 1", "text": "Proof. One can easily check from Equation 1 that the dynamics FGDSim can be written as\nFGDSim(x (t) 1 , x (t) 2 ) = [ 1 −η η 1 ][ x (t) 1\nx (t) 2\n] . (30)\nDefining M def= [ 1 −η η 1 ] , its Lookahead dynamics GLA-GDSim can be written as\nGLA-GDSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k)\n[ x (t) 1\nx (t) 2\n] . (31)\nIt follows that the eigenvalues of ∇xGLA-GDSim can be written as 1− α+ αλk± with λ± def = 1± iη ∈ λ(M). Assuming <((1+ iη)k) < 1, the line segment between (1, 0) and λk± contains a line segment inside S1 as in Figure 1 (b). Therefore, for a small enough α, the interpolation1 − α + αλk± lies inside S1, implying |1 − α + αλk±| < 1. We thus conclude from Lemma 15 that the iterates of GLA-GDSim converge to the Nash equilibrium (0, 0) of the game. 
The proof for divergence of FGDSim follows from Lemma 15 and can be found in Gidel et al. (2019a)." }, { "heading": "E.2 PROOF OF PROPOSITION 2", "text": "Proof. Using simple algebra on Equation 4, the dynamics F PPSim can be written as\nF PPSim(x (t) 1 , x (t) 2 ) =\n1\n1 + η\n[ 1 −η η 1 ][ x (t) 1\nx (t) 2\n] . (32)\nDefining M def= 11+η [ 1 −η η 1 ] , its Lookahead dynamics GLA-PPSim with a synchronization period\nk ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-PPSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k)\n[ x (t) 1\nx (t) 2\n] . (33)\nIt follows that the eigenvalues of∇xGLA-PPSim are 1−α+αλk± with λ± def = 1±iη1+η2 ∈ λ(M). We know that 1 − α + αλk± is an interpolation between two distinct points on/inside S1 since |λ±|k < 1 for any η ∈ (0, 1). It follows that |1 − α + αλk±| < 1, from which we conclude from Lemma 14 that the iterates of GLA-PPSim converge to the Nash equilibrium (0, 0) of the game with convergence rate O(|1− α+ αλk±|t/k), assuming the amortization of its computation over k forward steps.\nNow we show that the convergence is accelerated upon the base dynamics F PPSim if <((1 + iη)k) < (1 + η2)k and α is large enough. Figure 1 (c) intuitively shows that the line segment between (1, 0) and λk± contains a line segment inside S|λ±|k when k is such that <(λk±) < |λ2k± |. Therefore, the interpolation 1−α+αλk± lies inside S|λ±|k for a large enough α. This implies that the convergence rateO(|1−α+αλk±|t/k) ofGLA-PPSim is accelerated upon the rateO(|λ±|t) of its base dynamics." }, { "heading": "E.3 PROOF OF LEMMA 3", "text": "Proof. We prove each of the cases in their order.\nCase ρ(X) = 1. Assume that λmaxk 6= 1 for any λmax ∈ λmax(X). Then we can immediately conclude ρ(f(X)) < 1 since 1−α+αλki ∈ λ(f(X)) is an interpolation between two distinct points (1, 0) and λki on/inside S1 for any λi ∈ λ(X).\nCase ρ(X) > 1. Assume that <(λk) < 1 for any λ ∈ λ≥1(X). Then for each λ ∈ λ≥1, λk can be visualized as point B in Figure 4 (a), where the existence of point D is guaranteed by <(λ) < 1. It is easy to see from the figure that∥∥AC∥∥ = α|λk − 1| < 2 cos(π − θ(λ)) = ∥∥AD∥∥ (34) is sufficient to place 1 − α + αλk inside S1. Furthermore, for any λ ∈ λ(X) such that |λ| < 1, 1−α+αλk lies inside S1 since 1−α+αλk is an interpolation between two distinct points on/inside S1. Therefore we conclude ρ(f(X)) < 1.\nCase ρ(X) < 1. Assume that <(λk) < ρ(X)2k for any λ ∈ λmax(X). Then for any λi ∈ λ(X), λki can be visualized as point B in Figure 4 (b) since the existence of point D is guaranteed by <(λk) < ρ(X)2k and sin(φ(λi)) = sin(θ(λi))/ρ(X)k follows from the law of sines. Therefore we can intuitively see from the figure that∥∥BC∥∥ = (1− α)|λki − 1| < 2ρ(X)k cos(π − φ(λi)) = ∥∥BD∥∥ (35) is sufficient to place 1− α+ αλki inside Sρ(X)k , concluding the proof." }, { "heading": "E.4 PROOF OF LEMMA 4", "text": "Proof. Let us denote θmin def = θmin(S), θmax def = θmax(S) for brevity and let k ∈ N be such that k ∈ (\nπ 2θmin , 3π2θmax\n) . Then we have kθmin ∈ ( π 2 , 3πθmin 2θmax ) ⊆ ( π 2 , 3π 2 ) and kθmax ∈ ( πθmax 2θmin , 3π2 ) ⊆(\nπ 2 , 3π 2 ) , which implies <(λki ) < 0 for any λi ∈ S such that =(λi) > 0. Since every element of S\nhas its conjugate pair in S by the assumption, we conclude <(λki ) < 0 for any λi ∈ S. Now we show that the existence of k ∈ N such that k ∈ (\nπ 2θmin , 3π2θmax\n) is guaranteed for a small\nenough η > 0 when =max(S)=min(S) < 3. 
Using simple algebra, we can see that θmax < f(θmin) for f : R → R defined by f(x) = 3πxπ+2x is equivalent to π 2θmin − 3π2θmax > 1, implying nonempty\nN ∩ (\nπ 2θmin , 3π2θmax\n) . Therefore it suffices to show that θmax < f(θmin) holds for a small enough\nη > 0 when =max(S)=min(S) < 3.\nLet us define a function H : R→ R given by\nH(η) def = ( 1 + 2θmax +\nπ )( 1 + η<max(S) 1 + η<min(S) )( 1 + 2 sec θmin − 1 + 2 sec θmax+ + b ) , (36)\nwhere θmin− def = arctan η=min(S) 1+η<max(S) , θmax + def= arctan η=max(S)1+η<min(S) and b def= (1+2 sec θmin\n−) tan4 θmax+\n540 .\nWe show that the inequality\n=max(S) =min(S) < 3 H(η) (37)\nimplies θmax < f(θmin) and conclude the proof by showing that there exists a small enough η > 0 such that satisfies Equation 37 when =max(S)=min(S) < 3.\nNote that the inequalities θmin− ≤ θmin and θmax ≤ θmax+ directly follow from the definitions of θmin\n− and θmax+. Furthermore, using the Shafer-type double inequalities (Mortici & Srivastava, 2014) for arctan(·), we obtain\nθmin − ≥ 3 tan θmin\n− 1 + 2 √ 1 + tan2 θmin− = 3η=min (1 + η<max)(1 + 2 sec θmin−) , (38)\nθmax + ≤ 3 tan θmax\n+ 1 + 2 √ 1 + tan2 θmax+ + 1 180 tan5 θmax + (39)\n= 3η=max (1 + η<min)(1 + 2 sec θmax+) + η=maxtan4θmax+ 180(1 + η<min) , (40)\nfrom which follows that\nθmax θmin ≤ θmax + θmin− = =max(S) =min(S) ( 1 + η<max(S) 1 + η<min(S) )( 1 + 2 sec θmin − 1 + 2 sec θmax+ + b ) . (41)\nHowever, assuming inequality 37, we can derive\n=max(S) =min(S) ( 1 + η<max(S) 1 + η<min(S) )( 1 + 2 sec θmin − 1 + 2 sec θmax+ + b ) <\n3π π + 2θmax+ = f(θmax\n+)\nθmax+ . (42)\nFurthermore, since f ′(x) = 3π 2\n(π+2x)2 , we know that f is both concave and monotonically increasing. Hence it follows that\nf(θmax +)\nθmax+ < f(θmin) θmin , (43)\nfrom which we obtain θmax < f(θmin) by combining Equation 41-43.\nFinally, we prove that Equation 37 holds for a small enough η > 0 when =max(S)=min(S) < 3. Assume =max(S) =min(S) < 3 and let def= 3 − =max(S)=min(S) > 0. By the continuity of 3 H(·) at η = 0 and the fact that H(0) = 1, there exists δ > 0 such that |3− 3H(η) | < holds for any η ∈ (0, δ). Therefore we have =max(S) =min(S) = 3− < 3H(η) for any η ∈ (0, δ), concluding the proof." }, { "heading": "E.5 PROOF OF THEOREM 5", "text": "Proof. From Equation 2, the dynamics FGDAlt of Equation 11 can be derived as\nFGDAlt(x (t) 1 , x (t) 2 ) = [ Ir −ηΣr ηΣr Ir − η2Σ2r ] [ x(t)1 x(t)2 ] . (44)\nDefining M def= [\nIr −ηΣr ηΣr Ir − η2Σ2r\n] , its Lookahead dynamics GLA-GDAlt with a synchronization pe-\nriod k ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-GDAlt(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k) [ x(t)1 x(t)2 ] . (45)\nTogether with Equation 14, we can see that eigenvalues of∇xGLA-GDAlt can be written as 1−α+αλk±i with λ±i def = 1 − η 2σ2i 2 ± iησi √ 1− η 2σ2i 4 ∈ λ(M) for any η ∈ ( 0, 2σmax ) . In the meanwhile, simple calculation gives us |λ±i| = 1, which implies ρ(M) = 1. Now assume k ∈ N is such that k arccos(1 − η\n2σ2i 2 ) mod 2π 6= 0 for any σi. Then it follows that λ k ±i 6= 1 for any λ±i ∈ λ(M),\nfrom which we obtain ρ(∇xGLA-GDAlt) < 1 from Lemma 3. It follows from Lemma 15 that the iterates converge to the origin, and we conclude the proof by observing that the transformations x1 7→ U [x1; 0m−r]+x∗1 and x2 7→ V [x2; 0n−r]+x∗2 of (0, 0) ∈ Rr×Rr gives (x∗1, x∗2) ∈ Rm×Rn, which is a Nash equilibrium of Equation 9." }, { "heading": "E.6 PROOF OF THEOREM 6", "text": "Proof. From Equation 1, the dynamics FGDSim of Equation 11 can be derived as\nFGDSim(x (t) 1 , x (t) 2 ) = [ Ir −ηΣr ηΣr Ir ] [ x(t)1 x(t)2 ] . 
(46)\nLet us define J def= [\n0r Σr −Σr 0r\n] and M def= I − ηJ. Then its Lookahead dynamics GLA-GDSim with a\nsynchronization period k ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-GDSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k) [ x(t)1 x(t)2 ] . (47)\nTogether with Equation 15, we can see that the eigenvalues of∇xGLA-GDSim can be written as 1−α+ αλk±i with λ±i def = 1± iησi ∈ λ(M). In the meanwhile, one can easily see that |λ±i| > 1, implying\nρ(M) > 1. Now assume that k ∈ (\nπ 2 arctan ησmin , 3π2 arctan ησmax\n) . Then since tan θmin(λ(M)) =\nησmin and tan θmax(λ(M)) = ησmax, we have k ∈ (\nπ 2θmin(λ(M)) , 3π2θmax(λ(M))\n) . It follows from\nLemma 4 that <(λk±i) < 0 for any λ±i ∈ λ(M), and the existence of k is guaranteed for a small enough η when =max(λ(M))=min(λ(M)) = σmax σmin\n< 3. Then it follows from Lemma 3 that ρ(∇xGLA-GDSim) < 1 holds for a small enough α. Therefore, by Lemma 15, the iterates converge to the origin, and we conclude the proof by observing that the transformations x1 7→ U [x1; 0m−r] + x∗1 and x2 7→ V [x2; 0n−r] + x∗2 of (0, 0) ∈ Rr × Rr gives (x∗1, x∗2) ∈ Rm × Rn, which is a Nash equilibrium of Equation 9." }, { "heading": "E.7 PROOF OF THEOREM 7", "text": "Proof. From Equation 4, the dynamics F PPSim of Equation 11 can be derived as\nF PPSim(x (t) 1 , x (t) 2 ) = [ Ir ηΣr −ηΣr Ir ]−1 [x(t)1 x(t)2 ] . (48)\nLet us define J def= [\n0r Σr −Σr 0r\n] and M def= (I + ηJ)−1. Then its Lookahead dynamics GLA-GDSim with\na synchronization period k ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-PPSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k) [ x(t)1 x(t)2 ] . (49)\nTogether with Equation 16, we can see that the eigenvalues of ∇xGLA-EGSim can be written as 1 − α + αλk±i with λ±i def = 1+iησi\n1+η2σ2i ∈ λ(M). In the meanwhile, we can easily see that |λ±i| < 1\nholds for any η > 0. Therefore, 1 − α + αλk±i is an interpolation between two distinct points (1, 0) and λk±i on/inside S1, implying ρ(∇xGLA-PPSim) < 1. Hence it follows from Lemma 15 that the iterates converges to the origin. However, the transformations x1 7→ U [x1; 0m−r] + x∗1 and\nx2 7→ V [x2; 0n−r]+x∗2 of (0, 0) ∈ Rr×Rr gives (x∗1, x∗2) ∈ Rm×Rn, which is a Nash equilibrium of Equation 9.\nNow we show that GLA-PPSim can accelerate the convergence upon its base dynamics F PPSim . Assume k ∈ (\nπ 2 arctan ησmin , 3π2 arctan ησmin\n) . Note that M−1 shares the same eigenvalues with the Jacobian of\nFGDSim . Therefore we have tan θmin(λmax(M −1)) = tan θmax(λmax(M−1)) = ησmin , which implies k ∈ (\nπ 2θmin(λmax(M −1)) , 3π 2θmax(λmax(M−1))\n) . Then it follows from Lemma 4 that <(λ−k±i ) < 0 for\nany λ−1±i ∈ λmax(M −1), and the existence of k ∈ N is guaranteed for a small enough η. Then we have <(λk±i) < 0 for any λ±i ∈ λ(M) since the reciprocal of a complex number preserves the sign of the real part. Hence it follows from Lemma 3 that ρ(∇xGLA-PPSim) < ρ(M)k holds for a large enough α. We conclude the proof by noting that the convergence rate O(ρ(∇xGLA-PPSim) t k ) of GLA-PPSim provided by Lemma 15 is faster than the rate O(ρ(M)t) of F PPSim , assuming amortization of computations over k forward steps." }, { "heading": "E.8 PROOF OF THEOREM 8", "text": "Proof. From Equation 5, the dynamics F EGSim of Equation 11 can be derived as\nF EGSim(x (t) 1 , x (t) 2 ) = [ Ir − ηΣ2r −ηΣr ηΣr Ir − ηΣ2r ] [ x(t)1 x(t)2 ] . (50)\nLet us define J def= [\nΣ2r Σr −Σr Σ2r\n] and M def= I − ηJ. 
Then its Lookahead dynamics GLA-EGSim with a\nsynchronization period k ∈ N and a rate α ∈ (0, 1) can be written as\nGLA-EGSim(x (t) 1 , x (t) 2 ) = ((1− α)I + αM k) [ x(t)1 x(t)2 ] . (51)\nTogether with Equation 18, we can see that the eigenvalues of∇xGLA-EGSim can be written as 1−α+ αλk±i with λ±i def = 1−ησi±iησi ∈ λ(M). In the meanwhile, we can easily see that |λ±i| < 1 for any\nη ∈ (\n0, 1σmax ) , implying ρ(M) < 1. Therefore, 1−α+αλk±i is an interpolation between two distinct\npoints (1, 0) and λk±i on/inside S1, implying ρ(∇xGLA-EGSim) < 1. Hence it follows from Lemma 15 that the iterates converges to the origin. However, the transformations x1 7→ U [x1; 0m−r] + x∗1 and x2 7→ V [x2; 0n−r]+x∗2 on (0, 0) ∈ Rr×Rr gives (x∗1, x∗2) ∈ Rm×Rn, which is a Nash equilibrium of Equation 9.\nNow we show thatGLA-EGSim can accelerate the convergence upon its base dynamics F EGSim . Assume\nk ∈\n( π\n2 arctan ησmin\n1−ησmin\n, 3π 2 arctan\nησmin 1−ησmin\n) and η ∈ ( 0, 12σmax ) . Note that |λi|2 = 2η2(σi− 12η ) 2 + 12\nholds for each λ±i ∈ λ(M). This implies λmax(M) = {1−ησmin± iησmin} for any η ∈ (\n0, 12σmax\n) ,\nhence k ∈ (\nπ 2θmin(λmax(M)) , 3π2θmax(λmax(M))\n) . It follows from Lemma 4 that <(λk±i) < 0 holds for\nany λ±i ∈ λ(M), and the existence of k is guaranteed for a small enough η. Then by Lemma 3 we have ρ(∇xGLA-EGSim) < ρ(M)k for a large enough α. We conclude the proof by noting that the convergence rate O(ρ(∇xGLA-EGSim) t k ) of GLA-EGSim provided by Lemma 15 is faster than the rate O(ρ(M)t) of F EGSim , assuming amortization of computations over k forward steps." }, { "heading": "E.9 PROOF OF THEOREM 9", "text": "Proof. From Equation 7, the Jacobian of G evaluated at x∗ can written as ∇xG(x∗) = ∇x ( (1− α)id + αF k ) (x∗) = (1− α)I + α∇xF k(x∗) (52)\n= (1− α)I + α k∏ i=1 ∇xF (F i−1(x∗)) = (1− α)I + α(∇xF (x∗))k, (53)\nwhere the chain rule is used in third equality with a slight abuse of notation F 0 def= id. We use the fact that x∗ is an equilibrium of dynamics F for the last equality. It is easy to see from Equation 53 that eigenvalues of∇xG(x∗) can be written as 1− α+ αλki for each λi ∈ λ(∇xF (x∗)).\nHowever, λki is either on/inside S1 since |λi| ≤ 1 for each i due to the Lyapunov stability of x∗ in F . Therefore, 1− α+ αλki is an interpolation between two points (1, 0) ∈ S1 and λki either on/inside S1; hence |1 − α + αλki | ≤ 1. By assumption that λki 6= (1, 0) for each λi ∈ λ(∇xF (x∗)), the inequality is strict, i.e. |1 − α + αλki | < 1, implying the local asymptotic stability of x∗ in G by Proposition 14." }, { "heading": "E.10 PROOF OF THEOREM 10", "text": "Proof. From Equation 7, the Jacobian of G evaluated at x∗ can written as ∇xG(x∗) = ∇x ( (1− α)id + αF k ) (x∗) = (1− α)I + α∇xF k(x∗) (54)\n= (1− α)I + α k∏ i=1 ∇xF (F i−1(x∗)) = (1− α)I + α(∇xF (x∗))k, (55)\nwhere the chain rule is used in third equality with a slight abuse of notation F 0 def= id. We use the fact that x∗ is an equilibrium of dynamics F for the last equality. It is easy to see from Equation 55 that the eigenvalues of∇xG(x∗) can be written as 1− α+ αλki for each λi ∈ λ(∇xF (x∗)). Now assume that every element of λ≥1(∇xF (x∗)) has non-zero imaginary part, and let k ∈(\nπ 2θmin(λ≥1(∇xF (x∗))) , 3π2θmax(λ≥1(∇xF (x∗)))\n) . Let η > 0, J ∈ Rn×n be such that∇xF (x∗) = I− ηJ.\nThen by Lemma 4, <(λki ) < 0 holds for any λi ∈ λ≥1(∇xF (x∗)), and the existence of such k ∈ N is guaranteed for a small enough η when =max(λ≥1(∇xF (x ∗)))\n=min(λ≥1(∇xF (x∗))) < 3. 
Then it follows from the second\ncase of Theorem 3 that ρ(∇xG(x∗)) < 1 holds for a small enough α. By Proposition 14, this implies local asymptotic stability of x∗ in G, concluding the proof." }, { "heading": "E.11 PROOF OF THEOREM 11", "text": "Proof. From Equation 7, the Jacobian of G evaluated at x∗ can written as ∇xG(x∗) = ∇x ( (1− α)id + αF k ) (x∗) = (1− α)I + α∇xF k(x∗) (56)\n= (1− α)I + α k∏ i=1 ∇xF (F i−1(x∗)) = (1− α)I + α(∇xF (x∗))k, (57)\nwhere the chain rule is used in third equality with a slight abuse of notation F 0 def= id. We use the fact that x∗ is an equilibrium of dynamics F for the last equality. It is easy to see from Equation 57 that the eigenvalues of∇xG(x∗) can be written as 1− α+ αλki for each λi ∈ λ(∇xF (x∗)). Now assume that every element of λmax(∇xF (x∗)) has non-zero imaginary part, and let k ∈(\nπ 2θmin(λmax(∇xF (x∗))) , 3π2θmax(λmax(∇xF (x∗)))\n) . Let η > 0, J ∈ Rn×n be such that∇xF (x∗) = I−ηJ.\nThen by Lemma 4, <(λki ) < 0 holds for any λi ∈ λmax(∇xF (x∗)), and the existence of such k ∈ N is guaranteed for a small enough η when =max(λmax(∇xF (x\n∗))) =min(λmax(∇xF (x∗))) < 3. Then it follows from the third\ncase of Theorem 3 that ρ(∇xG(x∗)) < ρ(∇xF (x∗))k holds for a small enough α. We conclude the proof by noting that this implies the upper bound O(ρ(∇xG(x∗) t k ) on the rate of local convergence provided by Proposition 14 is faster than O(∇xF (x∗)t)." }, { "heading": "E.12 PROOF OF PROPOSITION 12", "text": "Proof. We directly follow the proofs of Lemma 2.1 and Lemma 3.1 in Daskalakis & Panageas (2018) and show that α ∈ (0, 1\n1+Lk ) guarantees locally diffeomorphic Lookahead dynamics, i.e., it\nis locally invertible at any given points.\nNote from Equation 7 that the Jacobian of G evaluated at x can written as ∇xG(x) = ∇x ( (1− α)id + αF k ) (x) = (1− α)I + α∇xF k(x) (58)\n= (1− α)I + α k∏ i=1 ∇xF (F i−1(x)), (59)\nwhere we have used the chain rule in the last equality with a slight abuse of notation F 0 def= id. Now assume that α ∈ (\n0, 1 1+Lk\n) and consider the following inequalities\nρ( k∏ i=1 ∇xF (F i−1(x))) ≤ ∥∥∥∥∥ k∏ i=1 ∇xF (F i−1(x)) ∥∥∥∥∥ ≤ k∏ i=1 ∥∥∇xF (F i−1(x))∥∥ ≤ Lk, (60) where the first and second inequalities hold for any operator norms and the last inequality is due to L-Lipschitzness of F . Then it follows from the assumption that\nρ( k∏ i=1 ∇xF (F i−1(x∗))) ≤ Lk < 1− α α . (61)\nTherefore, we conclude that G locally diffeomorphic, since ρ( ∏k i=1∇xF (F i−1(x∗))) < 1−α α implies 0 /∈ λ(∇xG(x)).\nNow let us define the set of unstable equilibria of G as U def= {x∗ : G(x∗) = x∗, ρ(∇xG(x∗)) > 1}. Then it directly follows from the locally diffeomorphic G and the arguments of Lee et al. (2019); Daskalakis & Panageas (2018) that the set {x(0) : lim\nt→∞ Gt(x(0)) ∈ U} is of measure zero, which\nconcludes the proof. We refer the readers to Appendix A of Daskalakis et al. (2018) for the detailed derivation of measure-zero arguments." }, { "heading": "E.13 PROOF OF PROPOSITION 13", "text": "Proof. From Equation 7, the Jacobian of G evaluated at x∗ can written as ∇xG(x∗) = ∇x ( (1− α)id + αF k ) (x∗) = (1− α)I + α∇xF k(x∗) (62)\n= (1− α)I + α k∏ i=1 ∇xF (F i−1(x∗)) = (1− α)I + α(∇xF (x∗))k, (63)\nwhere the chain rule is used in third equality with a slight abuse of notation F 0 def= id. We use the fact that x∗ is an equilibrium of dynamics F for the last equality. It is easy to see from Equation 63 that the eigenvalues of∇xG(x∗) can be written as 1−α+αλki for each λi ∈ λ(∇xF (x∗)). 
However, by the assumption, there exists a λ ∈ λ(∇xF (x∗)) such that |λ| > 1. Since λ is a positive real number, we have |1− α+ αλk| > 1, concluding the proof." }, { "heading": "F ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "F.1 EIGENVALUES OF GAN DYNAMICS", "text": "Theorem 10-11 assumes the radius-supporting eigenvalues, namely λ≥1(∇xF ) and λmax(∇xF )), to have non-zero imaginary parts and imaginary conditioning less than 3; otherwise, the existence of k that satisfies the sufficient conditions of Theorem 10-11 may not exist. We verify whether such assumptions are realistic in practical settings. Specifically, we train GANs on MNIST dataset with two different loss functions, non-saturating (Goodfellow et al., 2014) and WGAN-GP (Gulrajani et al., 2017), and visualize the top 20 eigenvalues of∇xFGDSim for each loss function in Figure 5. Figure 5 suggests most of the radius-supporting eigenvalues of ∇xFGDSim at well-performing point (Inception Score (IS) (Salimans et al., 2016) u 9) are distributed along the imaginary axis, and have non-zero imaginary part with imaginary conditioning less than 3. This suggests that our assumptions on the eigenvalues is not unrealistic and Theorem 10-11 can be applied for a practical non-linear game like GANs.\nF.2 ILL-CONDITIONED BILINEAR GAMES AND MOMENTUM METHODS\nWe test the convergence and acceleration of Lookahead dynamics in an ill-conditioned bilinear game, and see if Lookahead can accelerate momentum-based dynamics in such game. Specifically, we test convergence of each dynamics in the game given by Equation 12 with n = 20 and = 1, which gives a sample of A with σmax = 8.81 and σmin = 0.11. Note that this game has a significantly larger conditioning number σmaxσmin = 76.4 than the bilinear game of a conditioning 1.401 we used in Section 5.\nWe fix η = 0.05 throughout the experiments, and use Theorem 8 to derive theoretically recommended (+) hyperparameters k = 300, α = 0.9 for LA-EGSim dynamics. We use k = 50, α = 0.1 to represent hyperparameters of LA-EGSim chosen against (−) the theorem. For LA-GDAlt, we use k = 300, α = 0.1. We use the momentum factor β = −0.1 for negative (NM) and β = 0.1 for positive (PM) momentum methods.\nFigure 6 (a) shows that Theorem 5 and 8 indeed hold even for an ill conditioned game. Furthermore, Figure 6 (b) suggests that Lookahead can significantly accelerate the convergence of momentum methods that provably perform well on bilinear games, including the gradient descent with negative momentum (Gidel et al., 2019b) and extragradient with momentum (Azizian et al., 2020)." } ]
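As a quick sanity check on the hyperparameter recipes above, the following is a small sketch (ours, not part of the paper; plain Python) that recomputes the theoretically recommended ranges of the synchronization period k reported in Appendix D.1 for the bilinear experiment of Section 5, from the stated values σmin = 0.852, σmax = 1.195 and η = 0.1:

import math

sigma_min, sigma_max, eta = 0.852, 1.195, 0.1

# Theorem 6 (LA-GDSim): k in (pi / (2 arctan(eta * sigma_min)),
#                              3 pi / (2 arctan(eta * sigma_max)))
lo = math.pi / (2 * math.atan(eta * sigma_min))
hi = 3 * math.pi / (2 * math.atan(eta * sigma_max))
print("LA-GDSim:", round(lo, 2), round(hi, 2))   # ~ (18.47, 39.62), as in D.1

# LA-EGSim, following the computation in Appendix D.1, which uses
# arctan(eta * sigma / (1 - eta * sigma)) with sigma_min in the lower
# bound and sigma_max in the upper bound.
lo = math.pi / (2 * math.atan(eta * sigma_min / (1 - eta * sigma_min)))
hi = 3 * math.pi / (2 * math.atan(eta * sigma_max / (1 - eta * sigma_max)))
print("LA-EGSim:", round(lo, 2), round(hi, 2))   # ~ (16.9, 34.93), as in D.1

Any integer k inside the printed interval is a valid choice under the corresponding theorem.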
2020
null
SP:8ebdf09acf96ca3dc86a413fdcd2f524d2a54cb7
[ "This paper analytically considers two flavours of adversarial training in a Gaussian mixture model. The first uses regular adversarial examples, and the second uses examples drawn from a generative model. The authors show that the adversarial perturbations generated in the two cases differ in a cleanly-characterisable way: in the first case the perturbations differ from real data in a direction aligned with the smallest eigenvalues of the data covariance. In the latter case the perturbations are in a direction aligned with the largest eigenvalues. Experimental results on MNIST and CIFAR are presented to illustrate how the analysis transfers to real datasets." ]
Using generative models (GAN or VAE) to craft adversarial examples, i.e. generative adversarial examples, has received increasing attention in recent years. Previous studies showed that generative adversarial examples behave differently from regular adversarial examples in many aspects, such as attack rates, perceptibility, and generalization. However, the reasons for the differences between regular and generative adversarial examples remain unclear. In this work, we study the theoretical properties of the attacking mechanisms of the two kinds of adversarial examples in the Gaussian mixture model. We prove that adversarial robustness can be disentangled in directions of the data manifold. Specifically, we find that: 1. Regular adversarial examples attack in directions of small variance of the data manifold, while generative adversarial examples attack in directions of large variance. 2. Standard adversarial training increases model robustness by extending the data manifold boundary in directions of small variance, while, on the contrary, adversarial training with generative adversarial examples increases model robustness by extending the data manifold boundary in directions of large variance. In experiments, we demonstrate that these phenomena also exist on real datasets. Finally, we study the robustness trade-off between generative and regular adversarial examples. We show that the conflict between regular and generative adversarial examples is much smaller than the conflict between regular adversarial examples of different norms.
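To make the abstract's central claim concrete in the simplest possible setting, here is a minimal sketch (ours, for illustration only; the model and all values are assumptions, not the paper's construction). For a two-class Gaussian model x ~ N(y·μ, Σ) with labels y ∈ {−1, +1}, the Bayes-optimal linear classifier direction is w = Σ⁻¹μ, so a gradient-based (regular) adversarial perturbation of this classifier points along Σ⁻¹μ and is dominated by the small-variance directions of Σ:

import numpy as np

mu = np.array([1.0, 1.0])
Sigma = np.diag([4.0, 0.04])   # large variance along e1, small along e2

# Bayes-optimal linear direction for x ~ N(y * mu, Sigma): w = Sigma^{-1} mu.
w = np.linalg.solve(Sigma, mu)
delta = w / np.linalg.norm(w)  # direction of a gradient-based perturbation

print(abs(delta[0]))  # alignment with the large-variance direction e1: ~0.01
print(abs(delta[1]))  # alignment with the small-variance direction e2: ~1.0

A generative attack, by contrast, perturbs within the range of the generator, which for a linear generative model fit to this data is spanned by the top principal direction e1; this matches the large-variance behavior of generative adversarial examples described in the abstract.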
[]
[ { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Chaowei Xiao", "Bo Li", "Jun-Yan Zhu", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Generating adversarial examples with adversarial networks", "venue": "arXiv preprint arXiv:1801.02610,", "year": 2018 }, { "authors": [ "Zhengli Zhao", "Dheeru Dua", "Sameer Singh" ], "title": "Generating natural adversarial examples", "venue": "arXiv preprint arXiv:1710.11342,", "year": 2017 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Generative adversarial examples", "venue": "arXiv preprint arXiv:1805.07894,", "year": 2018 }, { "authors": [ "Jernej Kos", "Ian Fischer", "Dawn Song" ], "title": "Adversarial examples for generative models", "venue": "IEEE Security and Privacy Workshops (SPW),", "year": 2018 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Disentangling adversarial robustness and generalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John C Duchi", "Percy Liang" ], "title": "Adversarial training can hurt generalization", "venue": null, "year": 1906 }, { "authors": [ "Battista Biggio", "Fabio Roli" ], "title": "Wild patterns: Ten years after the rise of adversarial machine learning", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European 
symposium on security and privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Prior convictions: Black-box adversarial attacks with bandits and priors", "venue": "arXiv preprint arXiv:1807.07978,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Yandong Li", "Lijun Li", "Liqiang Wang", "Tong Zhang", "Boqing Gong" ], "title": "Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks", "venue": null, "year": 1905 }, { "authors": [ "Jianbo Chen", "Michael I Jordan", "Martin J Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": "In 2020 ieee symposium on security and privacy (sp),", "year": 2020 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": "arXiv preprint arXiv:1805.09190,", "year": 2018 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "arXiv preprint arXiv:1711.00117,", "year": 2017 }, { "authors": [ "Vishaal Munusamy Kabilan", "Brandon Morris", "Hoang-Phuong Nguyen", "Anh Nguyen" ], "title": "Vectordefense: Vectorization as a defense to adversarial examples", "venue": "In Soft Computing for Biomedical Applications and Related Topics,", "year": 2018 }, { "authors": [ "Aaditya Prakash", "Nick Moran", "Solomon Garber", "Antonella DiLillo", "James Storer" ], "title": "Deflecting adversarial attacks with pixel deflection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Guneet S Dhillon", 
"Kamyar Azizzadenesheli", "Zachary C Lipton", "Jeremy Bernstein", "Jean Kossaifi", "Aran Khanna", "Anima Anandkumar" ], "title": "Stochastic activation pruning for robust adversarial defense", "venue": "arXiv preprint arXiv:1803.01442,", "year": 2018 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "arXiv preprint arXiv:1711.01991,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Ajil Jalal", "Andrew Ilyas", "Constantinos Daskalakis", "Alexandros G Dimakis" ], "title": "The robust manifold defense: Adversarial training using generative models", "venue": "arXiv preprint arXiv:1712.09196,", "year": 2017 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michael E Tipping", "Christopher M Bishop" ], "title": "Probabilistic principal component analysis", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 1999 }, { "authors": [ "Soheil Feizi", "Farzan Farnia", "Tony Ginart", "David Tse" ], "title": "Understanding gans: the lqg setting", "venue": "arXiv preprint arXiv:1710.10793,", "year": 2017 }, { "authors": [ "Soheil Feizi", "Farzan Farnia", "Tony Ginart", "David Tse" ], "title": "Understanding gans in the lqg setting: Formulation, generalization and stability", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Bin Dai", "Yu Wang", "John Aston", "Gang Hua", "David Wipf" ], "title": "Hidden talents of the variational autoencoder", "venue": "arXiv preprint arXiv:1706.05148,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning 
applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "Pin-Yu Chen", "Yupeng Gao" ], "title": "Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Constantinos Daskalakis", "Themis Gouleakis", "Chistos Tzamos", "Manolis Zampetakis" ], "title": "Efficient statistics, in high dimensions, from truncated samples", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep neural networks (DNNs) (Krizhevsky et al. (2012); Hochreiter and Schmidhuber (1997)) have become popular and successful in many machine learning tasks. They have been used in different problems with great success. But DNNs are shown to be vulnerable to adversarial examples (Szegedy et al. (2013); Goodfellow et al. (2014a)). A well-trained model can be easily attacked by adding a small perturbation to the image. An effective way to solve this issue is to train the robust model using training data augmented with adversarial examples, i.e. adversarial training.\nWith the growing success of generative models, researchers have tried to use generative adversarial networks (GAN) (Goodfellow et al. (2014b)) and variational autoencoder (VAE) (Kingma and Welling (2013)) to generate adversarial examples (Xiao et al. (2018); Zhao et al. (2017); Song et al. (2018a); Kos et al. (2018); Song et al. (2018b)) to fool the classification model with great success. They found that standard adversarial training cannot defend these new attacks. Unlike the regular adversarial examples, these new adversarial examples are perceptible by humans but they preserve the semantic information of the original data. A good DNN should be robust to such semantic attacks. Since the GAN and VAE are approximations of the true data distribution, these adversarial examples will stay in the data manifold. Hence they are called on-manifold adversarial examples by (Stutz et al. (2019)). On the other hand, experimental evidences support that regular adversarial examples leave the data manifold (Song et al. (2017)). We call the regular adversarial examples as off-manifold adversarial examples.\nThe concepts of on-manifold and off-manifold adversarial examples are important. Because they can help us to understand the issue of conflict between adversarial robustness and generalization (Stutz et al. (2019); Raghunathan et al. (2019)), which is still an open problem. In this paper, we study the attacking mechanisms of these two types of examples, as well as the corresponding adversarial\ntraining methods. This study, as far as we know, has not been done before. Specifically, we consider a generative attack method that adds a small perturbation in the latent space of the generative models. Since standard adversarial training cannot defend this attack, we consider the training methods that use training data augmented with these on-manifold adversarial examples, which we call latent space adversarial training. Then we compare it to standard adversarial training (training with off-manifold adversarial examples).\nContributions: We study the theoretical properties of latent space adversarial training and standard adversarial training in the Gaussian mixture model with a linear generator. We give the excess risk analysis and saddle point analysis in this model. Based on this case study, we claim that:\n• Regular adversarial examples attack in directions of small variance of the data manifold and leave the data manifold.\n• Standard adversarial training increases the model robustness by amplifying the small variance. Hence, it extends the boundary of the data manifold in directions of small variance.\n• Generative adversarial examples attack in directions of large variance of the data manifold and stay in the data manifold.\n• Latent space adversarial training increases the model robustness by amplifying the large variance. 
We provide experiments on MNIST and CIFAR-10 and show that the above phenomena also exist in real datasets. This gives us a new perspective for understanding the behavior of on-manifold and off-manifold adversarial examples. Finally, we study the robustness trade-off between generative and regular adversarial examples. On MNIST, a robustness trade-off is unavoidable, but the conflict between generative and regular adversarial examples is much smaller than the conflict between regular adversarial examples of different norms. On CIFAR-10, there is nearly no robustness trade-off between generative and regular adversarial examples." }, { "heading": "2 RELATED WORK", "text": "Our work is related to attack and defense methods. Specifically, we care about attacks and defenses with generative models.
Attack Adversarial examples for deep neural networks were first introduced in Szegedy et al. (2013), although adversarial (or robust) machine learning has been studied for a long time (Biggio and Roli (2018)). In the white-box setting (Kurakin et al. (2016); Papernot et al. (2016); Moosavi-Dezfooli et al. (2016); Carlini and Wagner (2017)), the attacker has full access to the model (weights, gradients, etc.). In the black-box setting (Chen et al. (2017); Su et al. (2019); Ilyas et al. (2018)), the attacker has limited access to the model. First-order optimization methods, which use gradient information to craft adversarial examples, such as PGD (Madry et al. (2017)), are widely used for white-box attacks, while zeroth-order optimization methods (Chen et al. (2017)) are used in the black-box setting. Li et al. (2019) improved the query efficiency of black-box attacks, and HopSkipJumpAttack (Chen et al. (2020)) is another query-efficient attack method.
Generative adversarial examples Recently, generative models have been used to craft adversarial examples (Xiao et al. (2018); Song et al. (2018b); Kos et al. (2018); Schott et al. (2018)). These adversarial examples are more natural (Zhao et al. (2017)); they lie on the data manifold and are therefore called on-manifold adversarial examples.
Defense Training algorithms against adversarial attacks can be subdivided into the following categories. Adversarial training: the training data is augmented with adversarial examples to make the model more robust (Madry et al. (2017); Szegedy et al. (2013); Tramèr et al. (2017)). Preprocessing: inputs or hidden layers are quantized, projected onto different sets, or otherwise preprocessed (Buckman et al. (2018); Guo et al. (2017); Kabilan et al. (2018)). Stochasticity: inputs or hidden activations are randomized (Prakash et al. (2018); Dhillon et al. (2018); Xie et al. (2017)). However, some of these have been shown to be ineffective defenses due to obfuscated gradients (Athalye et al. (2018)). Adaptive attacks (Tramer et al. (2020)) are used for evaluating defenses against adversarial examples.
Defense with generative models Using generative models to design defense algorithms has been studied extensively. With a GAN, one can project adversarial examples back onto the data manifold (Jalal et al. (2017); Samangouei et al. (2018)). A VAE can also be used to train a robust model (Schott et al. (2018))."
}, { "heading": "3 PROBLEM DESCRIPTION", "text": "Original space adversarial training: Consider the classification problem of training a classifer fθ to map the data points x ∈ X ⊂ Rd to the labels y ∈ Y , where X and Y are the input data space and the label space. The classifier fθ is parameterized by θ. We assume that the data pairs (x, y) are sampled from the distribution P (X,Y ) over X × Y . Standard training is to find the solution of minθ E(x,y)∼P `(fθ(x), y), where `(·, ·) is the loss function. The goal of dversarial training is to solve the minimax problem\nmin θ E(x,y)∼P max ‖x−x′‖≤ε\n`(fθ(x ′), y), (1)\nwhere ε is the threshold of perturbation. Here we can use `1, `2 or `∞-norm (Madry et al. (2017)). The inner maximization problem is to find the adversarial examples x′ to attack the given classifier fθ. The outer minimization problem is to train the classifier to defend the given adversarial examples x′. we refer to these attacks as regular attacks. We refer to these minimax problems as standard adversarial training or original space adversarial training.\nLatent space adversarial training: We assume that the data lie in a low dimensional manifold of Rd. Furthermore, we assume the true distribution D is a pushforward from a prior Guassian distribution z ∼ N (0, I) using G(z), where G : Z → X is a mapping from the latent space Z to the original space X . This is a basic assumption of GAN or VAE. Let I : X → Z be the inverse mapping of G(z). The goal of latent space adversarial training is to solve the following minimax problem\nmin θ E(x,y)∼P max ‖z′−I(x)‖≤ε\n`(fθ(G(z ′)), y). (2)\nUnlike the regular attacks, the distance between the original examples and adversarial examples can be large. To preserve the label of the data, we use the conditional generative models (e.g. C-GAN (Mirza and Osindero (2014)) and C-VAE (Sohn et al. (2015))), i.e. the generator Gy(z) and inverse mapping Iy(x) are conditioned on the label y, for adversarial training. We refer to these attacks as generative attack, and these adversarial training as latent space adversarial training.\nRegular attack algorithms Two widely used gradient-based attack algorithms for the inner maximization problem in equation (1) are fast gradient sign method (FGSM) (Goodfellow et al. (2014a)) and projected gradient descend (PGD) (Madry et al. (2017)). Using FGSM, the adversarial examples are calculated by x′ = x+ εsgn(∇x`(fθ(x), y)), where ∇x denotes the gradient with respect to x. PGD attempts to find a near optimal adversarial example for the inner maximization problem (1) in multiple steps. In the tth step,\nxt+1 = Πx+S [x t + α∇x`(fθ(xt), y)/‖∇x`(fθ(xt), y)‖],\nwhere α is the step size, Πx+S [·] is the projection operator to project the given vector to the constraint x+ S = {x′|‖x− x′‖ ≤ ε}. In the whole paper, we refer to these as FGSM-attack and PGD-attack, and the corresponding original space adversarial training as FGSM-adv and PGD-adv. FGSM-attack is a weak attack and PGD-attack is a stronger attack. In section 5, we use them to show that a strong original space adversarial training, PGD-adv, does not work well against a weak attack, FGSM-attack in the latent space. Conversely, latent space adversarial training cannot defend a simple FGSM-attack in the original space.\nGenerative attack algorithm In our experiments, we use FGSM in the latent space for the inner maximization problem in equation (2)\nz′ = I(x) + εsgn(∇z`(fθ(G(z)), y)). Because of the mode collapse issue of GAN (Salimans et al. (2016); Gulrajani et al. 
" }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we study the difference between adversarial training in the latent space and in the original space. We study the simple binary classification setting proposed by Ilyas et al. (2019). The main reason for using this model is that we can find the optimal closed-form solution, which gives us insight into adversarial training; for a more complex model, we could only solve the problem numerically. We defer all proofs of the lemmas and theorems to Appendix A." }, { "heading": "4.1 THEORETICAL MODEL SETUP", "text": "Gaussian mixture (GM) model Assume that data points $(x, y)$ are sampled by drawing $y \in \{-1, 1\}$ uniformly and $x \sim \mathcal{N}(y\mu_*, \Sigma_*)$, where $\mu_*$ and $\Sigma_*$ denote the true mean and covariance matrix of the data distribution. For the data in class $y = -1$, we replace $x$ by $-x$; we can then view the whole dataset as sampled from $\mathcal{D} = \mathcal{N}(\mu_*, \Sigma_*)$.
Classifier The goal of standard training is to learn the parameters $\Theta = (\mu, \Sigma)$ such that
$$\Theta = \arg\min_{\mu,\Sigma} L(\mu,\Sigma) = \arg\min_{\mu,\Sigma} \mathbb{E}_{x\sim\mathcal{D}}[\ell(x;\mu,\Sigma)], \qquad (3)$$
where $\ell(\cdot)$ denotes the negative log-likelihood. The goal of adversarial training is to find
$$\Theta_r = \arg\min_{\mu,\Sigma} L_r(\mu,\Sigma) = \arg\min_{\mu,\Sigma} \mathbb{E}_{x\sim\mathcal{D}}\big[\max_{\|x-x'\|\le\varepsilon} \ell(x';\mu,\Sigma)\big]. \qquad (4)$$
We use $L$ and $L_r$ to denote the standard loss and the adversarial loss. After training, we classify a new data point $x$ into the class $\mathrm{sgn}(\mu^T\Sigma^{-1}x)$.
Generative model In our theoretical study, we use a linear generative model, namely probabilistic principal components analysis (P-PCA) (Tipping and Bishop (1999)). P-PCA can be viewed as a linear GAN (Feizi et al. (2017); Feizi et al. (2020)) or a linear VAE (Dai et al. (2017)).
Given a dataset $\{x_i\}_{i=1}^n \subset \mathbb{R}^d$, let $\mu$ and $S$ be the sample mean and sample covariance matrix. With the eigenvalue decomposition $S = U\Lambda U^T$, the first $q$ eigenvectors project the data onto a low-dimensional space. P-PCA assumes that the data are generated by
$$x = Wz + \mu + \epsilon, \quad \text{where } z \sim \mathcal{N}(0, I), \ \epsilon \sim \mathcal{N}(0, \sigma^2 I),$$
with $z \in \mathbb{R}^q$ and $W \in \mathbb{R}^{d\times q}$. Then we have $x \sim \mathcal{N}(\mu, WW^T + \sigma^2 I)$, $x|z \sim \mathcal{N}(Wz + \mu, \sigma^2 I)$, and $z|x \sim \mathcal{N}(P^{-1}W^T(x-\mu), \sigma^2 P^{-1})$, where $P = W^TW + \sigma^2 I$. The maximum likelihood estimators of $W$ and $\sigma^2$ are
$$W_{\mathrm{ML}} = U_q(\Lambda_q - \sigma^2_{\mathrm{ML}} I)^{1/2} \quad \text{and} \quad \sigma^2_{\mathrm{ML}} = \frac{1}{d-q}\sum_{i=q+1}^d \lambda_i,$$
where $U_q$ consists of the first $q$ columns of $U$ and $\Lambda_q$ is the diagonal matrix of the first $q$ eigenvalues of $\Lambda$. In the following study, we assume that $n$ is large enough that we can learn the true $\mu_*$ and $\Sigma_*$; thus $S = \Sigma_*$, $U_q = U_{q*}$, and $\Lambda_q = \Lambda_{q*}$ for the generative model.
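A minimal NumPy sketch of the P-PCA maximum-likelihood fit above (an illustrative helper, not from the original implementation; the rows of `X` are assumed to be the data points):
```python
import numpy as np

def ppca_fit(X, q):
    # Maximum-likelihood P-PCA: sigma^2 is the mean of the discarded
    # eigenvalues, and W = U_q (Lambda_q - sigma^2 I)^{1/2}.
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    lam, U = np.linalg.eigh(S)
    lam, U = lam[::-1], U[:, ::-1]  # sort eigenvalues in descending order
    d = X.shape[1]
    sigma2 = lam[q:].mean() if q < d else 0.0
    W = U[:, :q] * np.sqrt(np.clip(lam[:q] - sigma2, 0.0, None))
    return mu, W, sigma2
```
Given this fit, the encoder used in Strategy 1 below is $z = P^{-1}W^T(x-\mu)$ with $P = W^TW + \sigma^2 I$.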
" }, { "heading": "4.2 MINIMAX PROBLEM OF LATENT SPACE ADVERSARIAL TRAINING", "text": "To perturb the data in the latent space, a data point goes through the encode-perturb-decode process $x \to z \to z + \Delta z \to x'$. Based on the probabilistic model, we may choose $z$ with the highest probability or sample it from the learned distribution, so we can have different strategies. We list three strategies below; Strategy 1 is the one used in practice, and the other two are alternative choices. Lemma 1 shows that these strategies are equivalent under the low-dimensional assumption, so we do not need to worry about the effect of the sampling strategy.
Strategy 1: Sample $x \sim \mathcal{D}$, encode $z = \arg\max q(z|x) = P^{-1}W^T(x - \mu_*)$, add a perturbation $\Delta z$, and finally decode $x_{\mathrm{adv}} = \arg\max p(x|z + \Delta z) = W(z + \Delta z) + \mu_*$.
Strategy 2: Sample $x \sim \mathcal{D}$, then sample $z \sim q(z|x)$, add a perturbation $\Delta z$, and finally sample $x_{\mathrm{adv}} \sim p(x|z + \Delta z)$.
Strategy 3: Sample $z \sim \mathcal{N}(0, I)$, add a perturbation $\Delta z$, and then sample $x_{\mathrm{adv}} \sim p(x|z + \Delta z)$. In this strategy, $x_{\mathrm{adv}}$ can be viewed as the adversarial example of $x = \arg\max_x q(z|x)$.
The following lemma shows that the adversarial examples can be unified in one formula, so the sampling strategy does not affect our analysis.
Lemma 1 (Adversarial examples perturbed in the latent space). Using these three strategies, the adversarial examples can be unified as
$$x_{\mathrm{adv}} = x' + W\Delta z \quad \text{with} \quad x' \sim \mathcal{D}'_j = \mathcal{N}(\mu_*, U_*\Lambda^{(j)}U_*^T), \quad j = 1, 2, 3, \qquad (5)$$
where
$$\Lambda^{(1)} = \begin{bmatrix} (\Lambda_q - \sigma^2 I)^2\Lambda_q^{-1} & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Lambda^{(3)} = \begin{bmatrix} \Lambda_q & 0 \\ 0 & \sigma^2 I \end{bmatrix},$$
$$\Lambda^{(2)} = \begin{bmatrix} (\Lambda_q - \sigma^2 I)^2\Lambda_q^{-1} + (\Lambda_q - \sigma^2 I)\Lambda_q^{-1}\sigma^2 + \sigma^2 I & 0 \\ 0 & \sigma^2 I \end{bmatrix}.$$
If the data lie in a $q$-dimensional subspace, i.e., the covariance matrix $\Sigma_*$ has rank $q$, we have $\Lambda^{(1)} = \Lambda^{(2)} = \Lambda^{(3)} = \Lambda_*$, and then $\mathcal{D}' = \mathcal{D}$.
In general, the adversarial example decomposes into two parts: the change of distribution $x' \sim \mathcal{D}'$ and the small perturbation $W\Delta z$. Therefore the adversarial expected risk can be written as the following minimax problem:
$$\min_{\mu,\Sigma} L_{ls}(\mu,\Sigma;\mathcal{D}'_j) = \min_{\mu,\Sigma}\mathbb{E}_{x'\sim\mathcal{D}'_j}\max_{\|\Delta z\|\le\varepsilon} \ell(x' + W\Delta z, \mu, \Sigma), \quad j = 1, 2, 3. \qquad (6)$$
We aim to analyze the different properties of the minimax problems in equations (4) and (6). We give the excess risk and optimal saddle point analysis in the following subsections." }, { "heading": "4.3 EXCESS RISK ANALYSIS", "text": "We consider the difference between $L_{ls}$ and $L$ given the true $\Theta_*$, i.e., $L_{ls}(\Theta_*;\mathcal{D}'_j) - L(\Theta_*;\mathcal{D})$. It characterizes the excess risk incurred by the optimal perturbation. To derive the expression of the excess risk, we decompose it into two parts:
$$L_{ls}(\Theta_*;\mathcal{D}'_j) - L(\Theta_*;\mathcal{D}) = \underbrace{L_{ls}(\Theta_*;\mathcal{D}'_j) - L(\Theta_*;\mathcal{D}'_j)}_{\text{perturbation}} + \underbrace{L(\Theta_*;\mathcal{D}'_j) - L(\Theta_*;\mathcal{D})}_{\text{change of distribution}}. \qquad (7)$$
To simplify the notation, we consider the Lagrange penalty form of the inner maximization problem in equation (6), i.e., $\max \ell(x' + W\Delta z, \mu, \Sigma) - L\|\Delta z\|^2/2$, where $L$ is the Lagrange multiplier. The following theorem gives the solution in the general case.
Theorem 2 (Excess risk). Let $L_{ls}$ and $L$ be the losses with and without perturbation in the latent space (equations (6) and (3), respectively). Given the non-robustly learned $\Theta_* = (\mu_*, \Sigma_*)$, the excess risk caused by the perturbation is
$$L_{ls}(\Theta_*,\mathcal{D}'_j) - L(\Theta_*,\mathcal{D}'_j) = \frac{1}{2}\sum_{i=1}^q\Big[\Big(1 + \frac{\lambda_i - \sigma^2}{(L-1)\lambda_i + \sigma^2}\Big)^2 - 1\Big]\frac{\lambda_i^{(j)}}{\lambda_i}, \quad j = 1, 2, 3,$$
and the excess risk caused by the change of distribution is
$$L(\Theta_*,\mathcal{D}'_j) - L(\Theta_*,\mathcal{D}) = \frac{1}{2}\log\Big[\frac{\prod_{i=1}^d\lambda_i^{(j)}}{\prod_{i=1}^d\lambda_i}\Big] + \frac{1}{2}\Big(\sum_{i=1}^d\frac{\lambda_i^{(j)}}{\lambda_i} - d\Big).$$
It is hard to see which part dominates the excess risk. If we further assume that the data lie on a $q$-dimensional manifold, the excess risk caused by the change of distribution becomes $0$ by Lemma 1, and we have the following corollary.
Corollary 3 (Excess risk). Let $L_{ls}$ and $L$ be the losses with and without perturbation in the latent space (equations (6) and (3), respectively). Given the non-robustly learned $\Theta_* = (\mu_*, \Sigma_*)$ with $\mathrm{rank}(\Sigma_*) = q$, the excess risk is
$$L_{ls}(\Theta_*,\mathcal{D}'_j) - L(\Theta_*,\mathcal{D}) = O(qL^{-2}).$$
The optimal perturbation in the latent space incurs an excess risk of $O(qL^{-2})$. The adversarial vulnerability depends on the dimension $q$ and the Lagrange multiplier $L$; it does not depend on the shape of the data manifold. This is because the perturbation constraint (the black block) aligns with the shape of the data manifold (the ellipse), as we demonstrate in Figure 1 (c). Thus generative attacks focus on the directions of the largest $q$ variances.
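As a numerical companion to Theorem 2, the following small NumPy sketch (illustrative only; `lam` and `lam_j` are the eigenvalue arrays $\lambda_i$ and $\lambda_i^{(j)}$, and `L` is the Lagrange multiplier) evaluates the two excess-risk terms:
```python
import numpy as np

def excess_risk_perturbation(lam, lam_j, sigma2, L, q):
    # First term of Theorem 2: risk incurred by the optimal latent perturbation.
    lam_q, lam_j_q = lam[:q], lam_j[:q]
    ratio = 1.0 + (lam_q - sigma2) / ((L - 1.0) * lam_q + sigma2)
    return 0.5 * np.sum((ratio ** 2 - 1.0) * lam_j_q / lam_q)

def excess_risk_distribution(lam, lam_j):
    # Second term of Theorem 2: risk incurred by the change of distribution.
    return 0.5 * np.sum(np.log(lam_j / lam)) + 0.5 * (np.sum(lam_j / lam) - len(lam))
```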
Then, we analyze the excess risk of original space adversarial training. Since the perturbation thresholds $\varepsilon$ are on different scales for the original space attack and the latent space attack, the corresponding Lagrange multipliers are different; we use $L'$ for original space adversarial training in the following theorem.
Theorem 4 (Excess risk of original space adversarial training). Let $L_r$ and $L$ be the losses with and without perturbation in the original space (equations (4) and (3), respectively), given the non-robustly learned $\Theta_* = (\mu_*, \Sigma_*)$. Let $\lambda_{\min}$ denote the smallest eigenvalue of $\Sigma_*$. The excess risk satisfies
$$\Omega((\lambda_{\min}L')^{-2}) \le L_r(\Theta_*,\mathcal{D}) - L(\Theta_*,\mathcal{D}) \le O(d(\lambda_{\min}L')^{-2}).$$
If the data lie on a low-dimensional manifold, i.e., $\lambda_{\min} = 0$, the excess risk is $+\infty$.
The optimal perturbation in the original space incurs an excess risk of $O(d(\lambda_{\min}L')^{-2})$. The adversarial vulnerability depends on the smallest eigenvalue $\lambda_{\min}$, the dimension $d$, and the Lagrange multiplier $L'$. The factor $\lambda_{\min}$ comes from the misalignment between the perturbation constraint (the black block) and the shape of the data manifold (the ellipse), as we demonstrate in Figure 1 (a). Notice that $\lambda_{\min}$ also appears in the lower bound; hence the excess risk is $+\infty$ when $\lambda_{\min} = 0$. Thus regular attacks focus on the directions of small variance; in particular, when $\lambda_{\min} = 0$, regular adversarial examples leave the data manifold." }, { "heading": "4.4 SADDLE POINT ANALYSIS", "text": "In this subsection we study the optimal solution of the optimization problem (6). Since it is not a standard minimax problem, we consider a modified problem:
$$\min_{\mu,\Sigma}\max_{\mathbb{E}_{x'}\|\Delta z\| = \varepsilon}\mathbb{E}_{x'\sim\mathcal{D}'_j}\,\ell(x' + W\Delta z, \mu, \Sigma), \quad j = 1, 2, 3. \qquad (8)$$
We explain the connection between the optimization problems in equations (6) and (8) in Appendix A. The following theorem is our main result; it gives the optimal solution of latent space adversarial training.
Theorem 5 (Main result: Optimal saddle point). The optimal solution of the modified problem in equation (8) is
$$\mu_{ls} = \mu_* \quad \text{and} \quad \Sigma_{ls} = U_*\Lambda^{ls}U_*^T,$$
where
$$\lambda_i^{ls} = \frac{1}{4}\Big[2\lambda_i^{(j)} + \frac{4(\lambda_i - \sigma^2)}{L} + 2\lambda_i^{(j)}\sqrt{1 + \frac{4(\lambda_i - \sigma^2)}{\lambda_i^{(j)}L}}\Big] \ \text{for } 1 \le i \le q, \qquad \lambda_i^{ls} = \lambda_i^{(j)} \ \text{for } i > q,$$
and $j = 1, 2, 3$ corresponds to strategies 1, 2, and 3.
We assume again that the data lie on a $q$-dimensional manifold. Then we have $\lambda_i^{ls}/\lambda_i = 1/2 + 1/L + \sqrt{1/4 + 1/L} \ge 1$ for $i \le q$ and $\lambda_i^{ls} = 0$ for $i > q$. Latent space adversarial training increases the model robustness by amplifying the large eigenvalues of the data manifold. The two-dimensional case is illustrated in Figure 1 (c) and (d).
In the same setting, the optimal solution of standard adversarial training (problem (4)) in the original space is given in Theorem 6 (which is Theorem 2 in Ilyas et al. (2019)).
Theorem 6 (Optimal saddle point, Ilyas et al. (2019)). The optimal solution of the problem in equation (4) is
$$\mu_r = \mu_* \quad \text{and} \quad \Sigma_r = \frac{1}{2}\Sigma_* + \frac{1}{L'}I + \sqrt{\frac{1}{L'}\Sigma_* + \frac{1}{4}\Sigma_*^2}.$$
Theorem 6 is for the problem in which the covariance matrix is restricted to be diagonal. Consider the ratio
$$\frac{\lambda_i^{(r)}}{\lambda_i} = \frac{1}{2} + \frac{1}{L'\lambda_i} + \sqrt{\frac{1}{L'\lambda_i} + \frac{1}{4}}.$$
For a small true eigenvalue $\lambda_i$, the ratio is large. Standard adversarial training increases the robustness by amplifying the small eigenvalues of the data manifold. The two-dimensional case is illustrated in Figure 1 (a) and (b).
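The closed forms in Theorems 5 and 6 are easy to inspect numerically. Below is a small NumPy sketch (illustrative only; for the latent case it assumes a rank-$q$ manifold, so $\sigma^2 = 0$ and $\lambda_i^{(j)} = \lambda_i$) that exposes the opposite amplification patterns:
```python
import numpy as np

def latent_robust_eigs(lam_top, L, sigma2=0.0, lam_j=None):
    # Theorem 5 (first q eigenvalues) for latent space adversarial training.
    lam_j = lam_top if lam_j is None else lam_j
    a = lam_top - sigma2
    return 0.25 * (2 * lam_j + 4 * a / L
                   + 2 * lam_j * np.sqrt(1 + 4 * a / (lam_j * L)))

def original_robust_eigs(lam, L):
    # Theorem 6 (eigenvalue-wise) for original space adversarial training.
    return 0.5 * lam + 1.0 / L + np.sqrt(lam / L + 0.25 * lam ** 2)

lam = np.array([10.0, 1.0, 0.1])  # a toy spectrum, large to small variance
L = 5.0
print(latent_robust_eigs(lam, L) / lam)    # constant amplification of the top-q directions
print(original_robust_eigs(lam, L) / lam)  # amplification grows as the eigenvalue shrinks
```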
" }, { "heading": "5 EXPERIMENTS", "text": "In this section we report our experimental results on training LeNet on MNIST (LeCun et al. (1998)) and ResNet (He et al. (2016)) on CIFAR-10 (Krizhevsky et al. (2009)) to confirm our theoretical findings. Both datasets contain 50,000 training samples and 10,000 test samples. Details of the hyperparameter settings are given in Appendix B." }, { "heading": "5.1 EIGENVALUES OF MNIST WITH AND WITHOUT ADVERSARIAL TRAINING", "text": "In this subsection, we show that the properties of the eigenvalues also hold on a real dataset, using MNIST as an example. First, for each image in the MNIST test set $\mathcal{D}_{test}$, we vectorize it into a 784-dimensional vector and compute the eigenvalue decomposition (EVD) of the covariance matrix of each class. We plot the 784 eigenvalues of classes 0 and 1 in Figure 2 as blue lines, as examples (the other 8 classes are shown in Appendix B.3). Second, we adversarially train a classifier $f_r(x)$ in the original space; for each point $x$ in $\mathcal{D}_{test}$, we construct $x_r$ by PGD-attack using $\nabla_x f_r(x)$, obtaining a robust test set $\mathcal{D}^r_{test}$, whose intra-class eigenvalues are plotted in Figure 2 as orange lines. Last, we adversarially train a classifier $f_{ls}(x)$ in the latent space and construct the latent space robust dataset $\mathcal{D}^{ls}_{test}$ by a generative attack against $f_{ls}(x)$ on $\mathcal{D}_{test}$; the eigenvalues of its covariance matrix are shown in Figure 2 as green lines. We show all the eigenvalues in the first column of Figure 2. After adding a small perturbation in the original space or in the latent space, the distributions of $\mathcal{D}^r_{test}$ and $\mathcal{D}^{ls}_{test}$ are close to $\mathcal{D}_{test}$. We show close-ups of the large eigenvalues (the first 30) in the second column and of the small eigenvalues (the last 754) in the last column.
Original space adversarial training focuses on the small-variance directions We adversarially train a robust classifier $f_r(x)$ in the original space. The robust test set $\mathcal{D}^r_{test}$ against $f_r(x)$ amplifies the small eigenvalues considerably: in the third column, the orange line is significantly higher than the other two lines, while in the second column the orange line is below the other two. This experiment shows how adversarial examples leave the data manifold. The original dataset $\mathcal{D}_{test}$ lies in a low-dimensional affine plane in $\mathbb{R}^{784}$; after adversarial training, the data move along the small-variance directions, and $\mathcal{D}^r_{test}$ ends up spanning a full-dimensional subspace.
Latent space adversarial training focuses on the large-variance directions We adversarially train a robust classifier $f_{ls}(x)$ in the latent space. The latent space robust test set $\mathcal{D}^{ls}_{test}$ against $f_{ls}(x)$ amplifies the large eigenvalues: in the second column, the green line lies above the other two lines, and the last column shows that adversarial training in the latent space does not affect the small eigenvalues. The adversarial examples in $\mathcal{D}^{ls}_{test}$ move along the large-variance directions; therefore, $\mathcal{D}^{ls}_{test}$ stays in the low-dimensional affine space.
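A minimal NumPy sketch of the per-class spectrum computation used in this subsection (an illustrative helper; `X` is assumed to hold the images and `y` the integer labels):
```python
import numpy as np

def class_eigenspectra(X, y, num_classes=10):
    # For each class, vectorize the images, form the intra-class
    # covariance matrix, and return its eigenvalues in descending order.
    spectra = {}
    for c in range(num_classes):
        Xc = X[y == c].reshape((y == c).sum(), -1)
        S = np.cov(Xc, rowvar=False)
        spectra[c] = np.sort(np.linalg.eigvalsh(S))[::-1]
    return spectra
```
Plotting `spectra[c]` for the clean, PGD-attacked, and VAE-attacked test sets gives the kind of comparison shown in Figure 2.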
" }, { "heading": "5.2 ROBUST TEST ACCURACIES", "text": "In this subsection, we compare the test accuracies under different attacks (FGSM, PGD, VAE) versus different defenses (adversarial training with FGSM, PGD, and VAE) to help explain our theoretical results. We explain the MNIST results in Table 1 as an example.
On-manifold and off-manifold adversarial examples The test accuracies of the standard-training model on PGD- and VAE-attacked data are 3.9% and 42.4%, respectively. First, they show that on-manifold adversarial examples exist; second, they show that on-manifold adversarial examples are harder to find.
Attack versus defense Our theory says that original and latent space adversarial training increase model robustness by amplifying, respectively, the small and the large eigenvalues, i.e., the variances of the distribution on the data manifold. In the experiments, the test accuracy of VAE-adv against PGD-attack is 1.23%, which shows that extending the manifold boundary in directions of large variance contributes nothing to defending against attacks in directions of small variance. Similarly, the test accuracy of PGD-adv against VAE-attack is 52.18%, which shows that extending the boundary in directions of small variance does not work well for defending against attacks in directions of large variance. Furthermore, latent space adversarial training does not work well against even a simple FGSM attack, and vice versa.
We see that PGD-adv increases the test accuracy under VAE-attack from 42% to 52% on MNIST and from 19.40% to 26.31% on CIFAR-10. A possible reason is that original space adversarial training amplifies the small eigenvalues among the first $q$ dimensions but fails to amplify the larger ones. As indicated in Figure 2, both original space and latent space adversarial training increase the eigenvalues of the covariance matrix that are small but nonzero (around the 100th eigenvalue)." }, { "heading": "5.3 ROBUSTNESS TRADE-OFF", "text": "Robustness trade-offs are common in practice (Su et al. (2018); Tramèr and Boneh (2019)); for example, $\ell_1$ and $\ell_\infty$ adversarial examples cannot be defended against simultaneously. The experiments on the robustness trade-off are shown in Table 2. On MNIST, the model jointly trained on both $\ell_1$ and $\ell_\infty$ adversarial examples decreases the robust test accuracy by 20% compared to the singly trained model. On CIFAR-10, the jointly trained model decreases the test accuracies by 6% under $\ell_1$-attack and by 10% under $\ell_\infty$-attack. A possible reason is that these are all original space attacks focusing on the same directions of small variance, and hence they conflict with each other. In this subsection, we study the robustness trade-off between regular and generative adversarial examples.
Our theory suggests that adversarial robustness can be disentangled across different directions; hence robustness against attacks in directions of small variance and in directions of large variance can be achieved simultaneously. On MNIST, the jointly trained model decreases the test accuracies by 6% compared to the singly trained model: the conflict between on-manifold and off-manifold adversarial examples is much smaller than the conflict between off-manifold adversarial examples of different norms. On CIFAR-10, the jointly trained model attains test accuracies of 74% under PGD-attack and 42% under VAE-attack, exhibiting nearly no robustness trade-off. Therefore, if the goal is to defend against a mixture of regular and generative attacks, the jointly trained model performs well (a minimal training-step sketch is given at the end of this subsection).
Discussion Under the $q$-dimensional manifold assumption, there is an overlap between the directions of the $q$ largest variances and the directions of small variance. This is supported by Theorem 5, Theorem 6, and the first experiment. A possible reason for the residual robustness trade-off between regular and generative attacks is that they conflict with each other in the overlapping directions. The conflict is not serious because they mainly focus on different directions.
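To illustrate the jointly trained model referred to above, here is a minimal PyTorch sketch of one joint adversarial training step. It is an illustration rather than the reference implementation: `joint_adv_step` is a hypothetical name, and `make_pgd` / `make_vae` stand for attack callables such as the PGD-attack and VAE-attack sketches in Section 3.
```python
import torch

def joint_adv_step(model, loss_fn, optimizer, x, y, make_pgd, make_vae):
    # Augment the clean batch with both off-manifold (PGD) and
    # on-manifold (VAE) adversarial examples, then take one update step.
    model.eval()
    x_pgd = make_pgd(x, y)
    x_vae = make_vae(x, y)
    model.train()
    inputs = torch.cat([x, x_pgd, x_vae], dim=0)
    targets = torch.cat([y, y, y], dim=0)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```
"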
}, { "heading": "6 CONCLUSION", "text": "In this paper, we show that adversarial robustness can be disentangled in directions of small variance and large variance of the data manifold. Theoretically, we study the excess risk and optimal saddle point of the minimax problem of latent space adversarial training. Experimentally, we show that these phenomena also exist in real datasets.\nFuture works One may design defense algorithms based on this property. We can generate adversarial examples based on the directions of the dataset without access to the model architecture. In white-box settings, we can use them for data augmentation to accelerate adversarial training or even increase the model robustness. However, we need to carefully compare the computational cost between calculating the EVD of the datasets and that of standard adversarial training. In black-box attacks, we can use them to query the target model. But the data manifolds should be transferred from some known datasets (since there is no information available for the training dataset of the target model). Theoretically, it is unclear whether we can find the closed-form solution under nonlinear models. How to analyze the nonlinear model is an open problem. Using linear models, we can further analyze the conflict between robustness and generalization." }, { "heading": "OVERVIEW", "text": "In appendix A, we provide the proof of the Theorems. In appendix B, we show the settings about the experiments. Appendix C is a further discussion of data augmentation using generative models. It is not closely related to our main paper." }, { "heading": "A PROOF OF THE THEOREMS", "text": "" }, { "heading": "A.1 PROBLEM DESCRIPTION", "text": "Lemma 1 (Adversarial examples perturbed in the latent space). Using these 3 strategies, the adversarial examples can be unified as\nxadv = x ′ +W∆z and x′ ∼ D′j = N (µ∗,U∗Λ(j)UT∗ ), j = 1, 2, 3,\nwhere\nΛ(1) =\n[ (Λq − σ2I)2Λ−1q 0\n0 0\n] Λ(3) = [ Λq 0 0 σ2I ]\nΛ(2) =\n[ (Λq − σ2I)2Λ−1q + (Λq − σ2I)Λ−1q σ2 + σ2I 0\n0 σ2I\n] .\nIf the data lie in a q dimensional subspace, i.e. the covariance matrix Σ∗ is rank q, we have Λ(1) = Λ(2) = Λ(3) = Λ∗. Then D′ = D.\nProof:\nRecall that x ∼ N (µ,WW T + σ2I), x|z ∼ N (W z + µ, σ2I) and z|x ∼ N (P−1W T (x − µ), σ2P−1) where P = W TW + σ2I . The maximum likelihood estimator ofW and σ2 are\nWML = Uq(Λq − σ2I)1/2 and σ2ML = 1\nd− q d∑ i=q+1 λi.\nStrategy 1 Sample x ∼ D, encode z = arg max q(z|x) = P−1W T (x− µ∗), add a perturbation ∆z, and finally, decode xadv = arg max p(x|z + ∆z) = W (z + ∆z) + µ∗. Then\nxadv =W (P −1W T (x− µ∗) + ∆z) + µ∗\n=WP−1W T (x− µ∗) + µ∗ +W∆z =x′ +W∆z.\nSince x ∼ (µ∗,Σ∗), we have x− µ∗ ∼ (0,Σ∗), Then\nx′ ∼ N (µ∗,WP−1W TΣ∗(WP−1W T )T ),\nWith WP−1W TΣ∗(WP −1W T )T\n=U∗\n[ Λq − σ2 0\n0 0 ]1/2 [ Λq 0 0 σ2I ]−1 [ Λq − σ2 0 0 0 ]1/2 Λ∗[\nΛq − σ2 0 0 0 ]1/2 [ Λq 0 0 σ2I ]−1 [ Λq − σ2 0 0 0 ]1/2 UT∗\n=U∗\n[ (Λq − σ2I)2Λ−1q 0\n0 0\n] UT∗\n=U∗Λ (j)UT∗ , j = 1.\nStrategy 2 Sample x ∼ D, then sample z ∼ q(z|x), add a perturbation ∆z, and finally, sample xadv ∼ p(x|z + ∆z). Then\nz ∼ N (0,P−1W TΣ∗(P−1W T )T + σ2P−1)\nand\nxadv ∼ N (µ∗ +W∆z,WP−1W TΣ∗(P−1W T )TW T +Wσ2P−1W T + σ2I),\nxadv = x ′ +W∆z,\nWith WP−1W TΣ∗(P −1W T )TW T +Wσ2P−1W T + σ2I\n=U∗\n[ (Λq − σ2I)2Λ−1q 0\n0 0\n] UT∗ +U∗ [ (Λq − σ2I)Λ−1q σ2 0\n0 0\n] UT∗ + σ 2I\n=U∗\n[ (Λq − σ2I)2Λ−1q + (Λq − σ2I)Λ−1q σ2 + σ2I 0\n0 σ2I\n] UT∗\n=U∗Λ (j)UT∗ , j = 2.\nStrategy 3 Sample z ∼ N (0, I), add a perturbation ∆z, and then sample xadv ∼ p(x|z + ∆z). 
In this strategy, xadv can be viewed as the adversarial example of x = arg maxx q(z|x).\nxadv ∼ N (µ∗ +W∆z,WW T + σ2I),\nWith WW T + σ2I\n=U∗ [ Λq 0 0 σ2I ] UT∗\n=U∗Λ (j)UT∗ , j = 3.\nIn these 3 strageties, the adversarial examples can be summerized as\nxadv = x ′ +W∆z and x′ ∼ D′j , j = 1, 2, 3,\nwhere j = 1, 2, 3 corresponding to strategy 1,2 and 3.\nIf the data lie in a low dimensional space, i.e. the covariance matrix Σ∗ is rank q. Then the maximum likelihood of σ2ML = ∑d i=q+1 λi/(d− q) = 0. Then\nΛ(1) = Λ(2) = Λ(3) = [ Λq 0 0 0 ] = Λ∗.\nThere is no difference among these 3 strategies and there is no change of distribution, i.e. D′ = D." }, { "heading": "A.1.1 EXCESS RISK ANALYSIS", "text": "Before we prove Theorem 2. We need to prove the following lemma first. Lemma 2 (optimal perturbation). Given Θ = (µ,Σ) the optimal solution of the inner max problem in equation 6 is ∆z∗ = W T (LΣ−WW T )−1(x′ − µ), where L is the lagrange multiplier satisfying ‖∆z∗‖ = ε.\nProof: Consider problem max ‖∆z‖≤ε `(x′ +W∆z,µ,Σ).\nThe Lagrangian function is\n`(x′ +W∆z,µ,Σ)− L 2 (‖∆z‖2 − ε2)\n= d\n2 log(2π) +\n1 2 log |Σ|+ 1 2 (x′ − µ+W∆z)TΣ−1(x′ − µ+W∆z)− L 2 (‖∆z‖2 − ε2).\nNotice that this quadratic objective function is concave when L is larger than the largest eigenvalue ofW TΣ−1W . Calculate the partial derivative with respect to ∆z and set it to be zero, we have\nW TΣ−1(x′ − µ+W∆z∗)− L∆z∗ = 0 ⇔(L−W TΣ−1W )∆z∗ = W TΣ−1(x′ − µ) ⇔∆z∗ = (L−W TΣ−1W )−1W TΣ−1(x′ − µ) ⇔∆z∗ = W T (LΣ−WW T )−1(x′ − µ).\nThe last equation comes from the Woodbury matrix inversion Lemma. We can obtain L by solving the equation ‖∆z∗‖ = ε. We don’t have a closed form solution of L but we can solve it numerically. L→∞ as ε→ 0. We only need to know L is a constant in our whole theory. Theorem 2 (Excess risk). Let Lls and L be the loss with or without perturbation in latent space (equation 6 and 3 respectively), given the non-robustly learned Θ∗ = (µ∗,Σ∗), The excess risk caused by perturbation is\nLls(Θ∗,D′j)− L(Θ∗,D′j) = 1\n2 q∑ i=1 [ (1 +\nλi − σ2\n(L− 1)λi + σ2 )2 − 1 ]λ(j)i λi , j = 1, 2, 3\nand the excess risk caused by changed of distribution is\nL(Θ∗,D′j)− L(Θ∗,D) = 1 2 log [∏d i=1 λ (j) i∏d\ni=1 λi\n] + 1\n2 ( d∑ i=1 λ (j) i λi − d ) .\nProof: Since x′ ∼ Dj = N (µ∗,Σj) = N (µ∗,U∗Λ(j)UT∗ ).\nDenote v = x′ − µ∗ ∼ N (0,U∗Λ(j)UT∗ ).\nAnd we have\nWW T = Uq(Λq − σ2I)UTq = U∗ [ Λq − σ2I 0\n0 0\n] UT∗ .\nThe excess risk caused by perturbation is\n2(Lls(Θ∗,D′j)− L(Θ∗,D′j)) =E(v +WW T (LΣ∗ −WW T )−1v)TΣ−1∗ (v +WW T (LΣ∗ −WW T )−1v)− EvTΣ−1∗ v =Tr [ (I +WW T (LΣ∗ −WW T )−1)TΣ−1∗ (I +WW T (LΣ∗ −WW T )−1)EvvT\n] −Tr [ Σ−1∗ EvvT\n] =Tr [ U∗ [ [I + (Λq − σ2I)((L− 1)Λq + σ2I)−1]2 0\n0 I\n] Λ−1∗ Λ (j)UT∗ ] − Tr [ Λ−1∗ Λ (j) ]\n=Tr [ [[I + (Λq − σ2I)((L− 1)Λq + σ2I)−1]2 0\n0 I\n] Λ−1∗ Λ (j) ] − Tr [ Λ−1∗ Λ (j) ]\n= q∑ i=1 [ (1 +\nλi − σ2\n(L− 1)λi + σ2 )2 − 1 ]λ(j)i λi , j = 1, 2, 3.\nand the excess risk caused by changed of distribution is\n2(L(Θ∗,D′j)− L(Θ∗,D)) = log |Σj | − log |Σ∗|+ Ex′(x′ − µ∗)TΣ−1∗ (x′ − µ∗)− Ex(x− µ∗)TΣ−1∗ (x− µ∗) = log |Σj | − log |Σ∗|+ Tr(Σ−1∗ Ex′(x′ − µ∗)(x′ − µ∗)T )− Tr(Σ−1∗ Ex(x− µ∗)(x− µ∗)T )\n= log [∏d i=1 λ (j) i∏d\ni=1 λi\n] + Tr(Λ−1∗ Λ (j))− Tr(Λ−1∗ Λ∗)\n= log [∏d i=1 λ (j) i∏d\ni=1 λi ] + ( d∑ i=1 λ (j) i λi − d ) .\nIt is hard to see which part dominates the excess risk. If we further assume that the data lie in a q dimension manifold. The excess risk caused by the change of distribution becomes 0. We have the following corollary.\nCorollary 3 (Excess risk). 
Let Lls and L be the loss with or without perturbation in latent space (equation (6) and (3) respectively), given the non-robustly learned Θ∗ = (µ∗,Σ∗), and rank(Σ∗) = q. The excess risk\nLls(Θ∗,D′j)− L(Θ∗,D) = O(qL−2).\nProof: By Lemma 1, we have σ2 = 0. λ(j)i = λi and D′j = D. Hence the excess risk caused by changed of distribution L(Θ∗,D′j)− L(Θ∗,D) = 0. The excess risk caused by perturbation is\n2(Lls(Θ∗,D′j)− L(Θ∗,D′j))\n= q∑ i=1 [ (1 +\nλi − σ2\n(L− 1)λi + σ2 )2 − 1 ]λ(j)i λi\n= q∑ i=1 [ (1 +\n1\n(L− 1) )2 − 1 ] =O(qL−2).\nTheorem 4 (Excess risk of orginal space adversarial training). Let Lr and L be the loss with or without perturbation in original space (equations (4) and (3) respectively), given the non-robustly learned Θ∗ = (µ∗,Σ∗). Denote λmin be the smallest eigenvalue of Σ∗. The excess risk\nΩ((λminL) −2) ≤ Lr(Θ∗,D)− L(Θ∗,D) ≤ O(d(λminL)−2).\nTheorem 4 can be viewed as a corollary of Theorem 1 in Ilyas et al. (2019). We give the prove here.\nProof: Consider the Lagrange multiplier form of the inner maximization problem in equation 4.\nmax ‖∆x‖≤ε `(x+ ∆x,µ,Σ).\nThe Lagrangian function is\n`(x+ ∆x,µ,Σ)− L 2 (‖∆x‖2 − ε2)\n= d\n2 log(2π) +\n1 2 log |Σ|+ 1 2 (x− µ+ ∆x)TΣ−1(x− µ+ ∆x)− L 2 (‖∆x‖2 − ε2).\nNotice that this quadratic objective function is concave when L is larger than the largest eigenvalue of Σ−1. Calculate the partial derivative with respect to ∆x and set it to be zero, we have\nΣ−1(x− µ+ ∆x∗)− L∆x∗ = 0 ⇔∆x∗ = (LΣ− I)−1(x− µ).\nThe excess risk is\n2(Lr(Θ∗,D)− L(Θ∗,D)) =E(v + (LΣ∗ − I)−1v)TΣ−1∗ (v + (LΣ∗ − I)−1v)− EvTΣ−1∗ v =Tr [ (I + (LΣ∗ − I)−1)TΣ−1∗ (I + (LΣ∗ − I)−1)EvvT ] − Tr [ Σ−1∗ EvvT\n] =\nd∑ i=1 [(1 + 1 Lλi − 1 )2 − 1].\nOn the one hand, d∑ i=1 [(1 + 1 Lλi − 1 )2 − 1]\n≥[(1 + 1 Lλmin − 1 )2 − 1] ≥Ω((Lλmin)−2).\nOn the other hand, d∑ i=1 [(1 + 1 Lλi − 1 )2 − 1]\n≤d[(1 + 1 Lλmin − 1 )2 − 1] ≤O(d(Lλmin)−2)." }, { "heading": "A.1.2 SADDLE POINT ANALYSIS", "text": "Theorem 5 (Main result: Optimal Saddle point). The optimal solution of the modified problem in equation (8) is\nµls = µ∗ and Σls = U∗ΛlsUT∗ ,\nwhere\nλlsi = 1\n4\n[ 2λ\n(j) i + 4(λi − σ2) L + 2λ (j) i\n√ 1 +\n4(λi − σ2) λ\n(j) i L\n] for i = 1 ≤ q and λlsi = λ (j) i for i > q.\nj = 1, 2, 3 corresponding to strategies 1,2 and 3.\nProblem 6 is not a standard minimax problem, consider the modified problem\nmin µ,Σ max Ex′‖∆z‖=ε\nEx′∼D′j `(x ′ +W∆z,µ,Σ), j = 1, 2, 3. (9)\nBy lemma 3, the optimal perturbation ∆z∗ is a matrix M times x− µ. Consider the problem\nmin µ,Σ max Ex′‖M(x′−µ)‖=ε\nEx′∼D′j `(x ′ +WM(x′ − µ),µ,Σ), j = 1, 2, 3. (10)\nLemma 6 (optimal perturbation). Given Θ = (µ,Σ) the optimal solution of the inner max problem of 10 is\nM∗ = W T (LΣ−WW T )−1.\nProof: Consider the problem\nmax E‖M(x′−µ)‖=ε\nE`(x′ +WM(x′ − µ),µ,Σ)).\nThe lagrangian function is\nE [ `(x′ +WM(x′ − µ),µ,Σ)− L\n2 (‖M(x′ − µ)‖2 − ε2)\n] .\nLet x′ − µ = v, Take the gradient with respect to M and set it to be zero, we have\n∂ ∂M E [ `(x′ +WM(x′ − µ),µ,Σ)− L 2 (‖M(x′ − µ)‖2 − ε2) ] =∇ME [ vTMW TΣ−1v + 1\n2 vTMW TΣ−1WMv − LvTMMv/2 ] = [ W TΣ−1 +W TΣ−1WM − LM ] E[vvT ]\n=0.\nThen we have M∗ =(L−W TΣ−1W )−1W TΣ−1\n=W T (LΣ−WW T )−1.\nThe last equality is the Woodbury matrix inversion Lemma. Notice that lemma 3 and Lemma 6 have the same form of solution. This is why we can use Problem 10 to approximate Problem 6. To solve the problem 10, we need to introduce Danskin’s Theorem.\nTheorem 7 (Danskin’s Theorem). Suppose φ(x, z) : X × Z → R is a continuous function of two arguments, where Z ⊂ Rm is compact. Define f(x) = maxz∈Z φ(x, z). 
Then, if for every z ∈ Z , φ(x, z) is convex and differentiable in x, and ∂φ/∂x is continuous: The subdifferential of f(x) is given by\n∂f(x) = conv {∂φ(x, z)\n∂x , z ∈ Z0(x)\n} ,\nwhere conv(·) is the convex hull, and Z0(x) is\nZ0(x) = {z̄ : φ(x, z̄) = maxφ(x, z)}.\nIf the outer minimization problem is convex and differentiable, we can use any maximizer for the inner maximization problem to find the saddle point. But the outer problem of problem 4 is not convex, we need to modify the problem again. Assume that we have already obtained the eigenvector U∗ from the ML estimator. The optimization variables of the outer minimization problem are µ and Λ. Then we have Σ = U∗ΛUT∗ in problem 10. Another reason to make this assumption is that we only need to consider the eigenvalue problem to compare with standard adversarial training (Theorem 2 of Andrew Ilyas et al., 2019).\nProof of Theorem 5:\nBy Lemma 6, we have\nM∗ =W T (LΣ−WW T )−1\n=\n[ Λq − σ2 0\n0 0\n]1/2 ( LΛ− [ Λq − σ2 0\n0 0\n])−1 UT∗ .\nWhich is a diagonal matrix ΛM times UT∗ . Let T = Λ −1, m = Λ−1UT∗ µ and x ′′ = UT∗ x ′. The optimization problem becomes\nmin m,T max ΛM\nEx′∼D′j `(x ′ +WΛMU T ∗ (x ′ − µ),m, T )\ns.t. Ex′‖ΛMU∗(x′ − µ)‖2 = ε2. (11)\nObviously, the inner constraint is compact (by Heine-Borel theorem), we only need to prove the convexity of the outer problem to use Danskin’s Theorem. For any x′ and ΛM ,\n`(x′ +WΛTMU∗(x ′ − µ),m, T )\n= d\n2 log(2π) +\n1 2 log |Σ|+ 1 2 (x′ − µ+WM(x′ − µ)TΣ−1(x′ − µ+WM(x′ − µ)." }, { "heading": "Let u = UT∗ (x", "text": "′ − µ), and A = (I +\n[ Λq − σ2 0\n0 0\n] ΛM ) 2, consider the third term, we have\n1 2 log |Σ|+ 1 2 (x′ − µ+WM(x′ − µ)TΣ−1(x′ − µ+WM(x′ − µ)\n= 1\n2 uTA2Tu.\nBy Daskalakis et al., 2018Daskalakis et al. (2018), The hessian matrix is H = Covz∼N (T−1m,(AT )−1) [(vec(− 12AzzT )\nz\n) , ( vec(− 12Azz\nT ) z\n)] 0.\nTherefore, this is a convex problem. By the same calculation in Lemma 6, a maximizer of the inner problem is\nΛ∗M =\n[ Λq − σ2 0\n0 0\n]1/2 ( LΛ− [ Λq − σ2 0\n0 0\n])−1 .\nThen\nA = [ I + [ Λq − σ2 0\n0 0\n]( LΛ− [ Λq − σ2 0\n0 0\n])−1]2 .\nThen the first order derivative (by Daskalakis et al. (2018)) is ∇[T,m]T ` = [ 1 2AΛ (j) − 12T −1\nAT−1m−AUT∗ µ∗\n] = 0.\nFrom the second equation, we directly have µls = µ∗. From the first equation, for i > q, we have\n(1 + 0)2λ (j) i = λ ls i .\nFor i ≤ q, we have (1 + (λi − σ2)/(Lλlsi − λi + σ2))2λ (j) i = λ ls i .\nIt equivalents to a second order equation of √ λlsi√\nλlsi\n2 − √ λ\n(j) i √ λlsi − λi − σ2\nL = 0.\nSolving this equation, we obtained\nλlsi = 1\n4\n[ 2λ\n(j) i + 4(λi − σ2) L + 2λ (j) i\n√ 1 +\n4(λi − σ2) λ\n(j) i L\n] for i = 1 ≤ q and λlsi = λ (j) i , for i > q." }, { "heading": "B EXPERIMENTS SETTINGS", "text": "" }, { "heading": "B.1 MNIST", "text": "For Mnist, we use LeNet5 for the classifier and 2 layers MLP (with hidden size 256 and 784) for the encoder and decoder of conditional VAE. For standard training of the classifier, we use 30 epochs, batch size 128, learning rate 10−3, and weight decay 5× 10−4. For the CVAE, we use 20 epochs, learning rate 10−3, batch size 64, and latent size 10.\nFor standard adversarial training, we use ε = 0.25 for FGSM and PGD. in PGD, we use 40 steps for the inner part. Adversarial training start after 10 epochs standard training.\nFor generative adversarial training, we use ε = 1 in the latent space with FGSM. 
Adversarial training starts after 10 epochs of standard training.
For the attacks, we use ε = 0.2 for the norm-based attacks and ε = 1 for the generative attack on the test set." }, { "heading": "B.2 CIFAR10", "text": "For CIFAR-10, we use ResNet32 for the classifier and 4-layer CNNs for the encoder and decoder of the conditional VAE. For standard training of the classifier, we use 200 epochs, batch size 128, learning rate $10^{-3}$, and weight decay $5\times 10^{-4}$. For the CVAE, we use 100 epochs, learning rate $10^{-3}$, batch size 64, and latent size 128.
For standard adversarial training, we use ε = 4/255 for FGSM and PGD; in PGD, we use 10 steps for the inner maximization. Adversarial training starts after 100 epochs of standard training.
For generative adversarial training, we use ε = 0.1 in the latent space with FGSM. Adversarial training starts after 100 epochs of standard training.
The Adversarial learned features of problem 10 under finite samples is µls = µ̂ and Σls = ÛΛlsÛT where Λls is of the same form in Theorem 5 except replacing λi by λ̂i under strategy 3.\nProof: Considering strategy 3. In this case\nW = Û(Λ̂q − σ2)1/2.\nand x′ ∼ (µ̂,WW T + σ2I).\nSince\nWW T + σ2I = Û [ Λ̂q 0 0 σ2I ] ÛT .\nIn words, the eigenvectors of the dsitribution Û is the same as the one in perturbationW∆z. The optimization problem is the same as the one we use in the proof of Theorem 5 if we replace λi by λ̂i.\nIf the data lie in a q dimensional subspace, we do not neet to make the assumption on strategy 3." } ]
2020
null
SP:000d80bbed580799f47117d2c65cb08f17b783e3
[ "The paper proposes a method for the sequential meta-learning problem. The author meta learn not only model parameters but also learning rate vectors for parameter blocks. To this end, the meta-learn model finds appropriate model parameters and adaptive learning rate vectors that capture task-general information. Overall experiments are performed on few-shot meta-learning settings with sequential domains (datasets)." ]
Meta-learning has made rapid progress in recent years, with extensions made to avoid catastrophic forgetting in the learning process, namely continual meta-learning. It is desirable to generalize the meta-learner's ability to continuously learn in sequential domains, which is largely unexplored to date. We found through extensive empirical verification that significant improvement is needed for current continual learning techniques to be applied in the sequential-domain meta-learning setting. To tackle the problem, we adapt existing dynamic learning rate adaptation techniques to meta-learn both model parameters and learning rates. Adaptation of the parameters ensures good generalization performance, while adaptation of the learning rates avoids catastrophic forgetting of past domains. Extensive experiments on a sequence of commonly used real-domain data demonstrate the effectiveness of our proposed method, which outperforms strong continual learning baselines. Our code is made publicly available online (anonymous): https://github.com/ICLR20210927/Sequential-domain-meta-learning.git.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W. Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando de Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Antreas Antoniou", "Amos J. Storkey" ], "title": "Learning to learn via self-critique", "venue": "ArXiv, abs/1905.10295,", "year": 2019 }, { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your maml", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Antreas Antoniou", "Massimiliano Patacchiola", "Mateusz Ochal", "Amos Storkey" ], "title": "Defining benchmarks for continual few-shot learning", "venue": null, "year": 2004 }, { "authors": [ "Atilim Gunes Baydin", "Robert Cornish", "David Martinez Rubio", "Mark Schmidt", "Frank Wood" ], "title": "Online learning rate adaptation with hypergradient descent", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Luca Bertinetto", "João F. Henriques", "Philip H.S. Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Massimo Caccia", "P. Rodrı́guez", "O. Ostapenko", "Fabrice Normandin", "Min Lin", "L. Caccia", "Issam H. Laradji", "I. Rish", "Alexande Lacoste", "D. Vázquez", "Laurent Charlin" ], "title": "Online fast adaptation and knowledge accumulation: a new approach to continual learning", "venue": "ArXiv, abs/2003.05856,", "year": 2020 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marcus Rohrbach", "Mohamed Elhoseiny", "Thalaiyasingam Ajanthan", "Puneet K. Dokania", "Philip H.S. Torr", "Marc’Aurelio Ranzato" ], "title": "Continual learning with tiny episodic memories. https://arxiv.org/abs/1902.10486, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Tristan Deleu", "Tobias Würfl", "Mandana Samiei", "Joseph Paul Cohen", "Yoshua Bengio" ], "title": "Torchmeta: A Meta-Learning library for PyTorch. 2019", "venue": "URL https://arxiv.org/abs/1909", "year": 1909 }, { "authors": [ "Michele Donini", "Luca Franceschi", "Orchid Majumder", "Massimiliano Pontil", "Paolo Frasconi" ], "title": "Marthe: Scheduling the learning rate via online hypergradients", "venue": "Proceedings of the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Uncertainty-guided continual learning with bayesian neural networks", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Uncertainty-guided continual learning with bayesian neural networks", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "H. Edwards", "A. 
Storkey" ], "title": "Towards a neural statistician", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A. Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": null, "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sebastian Flennerhag", "Andrei A. Rusu", "Razvan Pascanu", "Francesco Visin", "Hujun Yin", "Raia Hadsell" ], "title": "Meta-learning with warped gradient descent", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Luca Franceschi", "Michele Donini", "Paolo Frasconi", "Massimiliano Pontil" ], "title": "Forward and reverse gradient-based hyperparameter optimization", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": null, "year": 2018 }, { "authors": [ "E. Grant", "Chelsea Finn", "S. Levine", "Trevor Darrell", "T. Griffiths" ], "title": "Recasting gradient-based metalearning as hierarchical bayes", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "Gunshi Gupta", "Karmesh Yadav", "Liam Paull" ], "title": "La-maml: Look-ahead meta learning for continual learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Khurram Javed", "Martha White" ], "title": "Meta-learning representations for continual learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ghassen Jerfel", "Erin Grant", "Thomas L. Griffiths", "Katherine Heller" ], "title": "Reconciling meta-learning and continual learning with online mixtures of tasks", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A. Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "Conference of the Cognitive Science Society,", "year": 2011 }, { "authors": [ "Yoonho Lee", "Seungjin Choi" ], "title": "Gradient-based meta-learning with learned layerwise metric and subspace", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Timothée Lesort", "A. Gepperth", "A. 
Stoian", "David Filliat" ], "title": "Marginal replay vs conditional replay for continual learning", "venue": "In ICANN,", "year": 2019 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "F. Chen", "H. Li" ], "title": "Meta-sgd: Learning to learn quickly for few shot", "venue": "learning. ArXiv,", "year": 2017 }, { "authors": [ "Zhaojiang Lin", "Andrea Madotto", "Chien-Sheng Wu", "Pascale Fung" ], "title": "Personalizing dialogue agents via meta-learning", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Subhransu Maji", "Esa Rahtu", "Juho Kannala", "Matthew Blaschko", "Andrea Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": null, "year": 2013 }, { "authors": [ "Arun Mallya", "Svetlana Lazebnik" ], "title": "Packnet: Adding multiple tasks to a single network by iterative pruning", "venue": null, "year": 2017 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Fei Mi", "Liangwei Chen", "Mengjie Zhao", "Minlie Huang", "Boi Faltings" ], "title": "Continual learning for natural language generation in task-oriented dialog systems", "venue": "The 2020 Conference on Empirical Methods in Natural Language Processing,", "year": 2020 }, { "authors": [ "Vladimir Mikulik", "Grégoire Delétang", "Tom McGrath", "Tim Genewein", "Miljan Martic", "Shane Legg", "Pedro A. Ortega" ], "title": "Meta-trained agents implement bayes-optimal agents", "venue": "34th Conference on Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. Turner" ], "title": "Variational continual learning", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "S. Ravi", "H. Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Sachin Ravi", "Alex Beatson" ], "title": "Amortized bayesian meta-learning", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mengye Ren", "Renjie Liao", "Ethan Fetaya", "Richard S. 
Zemel" ], "title": "Incremental few-shot learning with attention attractor networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Matthew Riemer", "Ignacio Cases", "Robert Ajemian", "Miao Liu", "Irina Rish", "Yuhai Tu", "Gerald Tesauro" ], "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Metalearning with memory-augmented neural networks", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "J. Schmidhuber" ], "title": "A neural network that embeds its own meta-levels", "venue": "IEEE International Conference on Neural Networks,", "year": 1993 }, { "authors": [ "Joan Serrà", "Dı́dac Surı́s", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol", "Hugo Larochelle" ], "title": "Metadataset: A dataset of datasets for learning to learn from few examples", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hung-Yu Tseng", "Hsin-Ying Lee", "Jia-Bin Huang", "Ming-Hsuan Yang" ], "title": "Cross-domain few-shot classification via learned feature-wise transformation", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": null, "year": 2016 }, { "authors": [ "Risto Vuorio", "Shao-Hua Sun", "Hexiang Hu", "Joseph J. Lim" ], "title": "Multimodal model-agnostic meta", "venue": "hypernetworks", "year": 1906 }, { "authors": [ "A survey on few-shot learning." ], "title": "Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, and Changyou Chen", "venue": "Bayesian meta sampling for", "year": 2020 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically", "venue": "Birds", "year": 2010 }, { "authors": [ "2018b. Sung Whan Yoon", "Do-Yeon Kim", "Jun Seo", "Jaekyun Moon" ], "title": "Xtarnet: Learning to extract", "venue": null, "year": 2018 }, { "authors": [ "2020. Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": null, "year": 2020 }, { "authors": [ "Ba" ], "title": "2014), and the learning rate is 1e-3", "venue": "For continual learning method,", "year": 2014 }, { "authors": [ "Donini" ], "title": "2020) and continual learning (Ebrahimi et al., 2020b). Different from standard supervised learning whose goal is to minimize the generalization error, continual learning aims to avoid catastrophic forgetting while maintaining good performance on current domain. 
Inspired by standard gradient-based hyperparameter optimization algorithms such as (Baydin et al., 2018", "venue": "Franceschi et al.,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans have the ability to quickly learn new skills from a few examples, without erasing old skills. It is desirable for machine-learning models to adopt this capability when learning under changing contexts/domains, which are common scenarios for real-world problems. These tasks are easy for humans, yet pose challenges for current deep-learning models mainly due to the following two reasons: 1) Catastrophic forgetting is a well-known problem for neural networks, which are prone to drastically losing knowledge on old tasks when a domain is shifted (McCloskey & Cohen, 1989); 2) It has been a long-standing challenge to make neural networks generalize quickly from a limited amount of training data (Wang et al., 2020a). For example, the dialogue system can be trained on a sequence of domains, (hotel booking, insurance, restaurant, car services, etc) due to the sequential availability of dataset (Mi et al., 2020). For each domain, each task is defined as learning one customer-specific model (Lin et al., 2019). After finishing meta training, the model could be deployed to the previously trained domains, as the new (unseen) customers from previous domains may arrive later, they have their own (small) training data (support set) used for adapting the sequentially meta-learned models. After adaptation, the newly adapted model for the new customers can be deployed to make responses to the customers.\nWe formulate the above problem as sequential domain few-shot learning, where a model is required to make proper decisions based on only a few training examples while undergoing constantly changing contexts/domains. It is expected that adjustments to a new context/domain should not erase knowledge already learned from old ones. The problem consists of two key components that have been considered separately in previous research: the ability to learn from a limited amount of data, referred to as few-shot learning; and the ability to learn new tasks without forgetting old knowledge, known as continual learning. The two aspects have been proved to be particularly challenging for deep learning models, explored independently by extensive previous work (Finn et al., 2017; Snell et al., 2017; Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017). However, a more challenging yet useful perspective to jointly integrate the two aspects remains less explored.\nGenerally speaking, meta-learning targets learning from a large number of similar tasks with a limited number of training examples per class. Most existing works focus on developing the general-\nization ability under a single context/domain (Santoro et al., 2016; Finn et al., 2017; 2018; Snell et al., 2017; Ravi & Beatson, 2019). Recently, it has been shown that catastrophic forgetting often occurs when transferring a meta-learning model to a new context (Ren et al., 2019; Yoon et al., 2020). Continual learning aims to mitigate negative backward transfer effects on learned tasks when input distribution shift occurs during sequential context changes. Related techniques of which are currently applied mostly on standard classification problems (Serrà et al., 2018; Ebrahimi et al., 2020b). In this paper, we generalize it to the sequential domain meta-learning setting, which seeks good generalization on unseen tasks from all domains with only limited training resources from previous domains. We term the problem sequential domain meta learning. 
Note this setting is different from continual few-shot learning, which focuses on remembering previously learned low-resource tasks in a single domain. Our setting does not aim to remember a specific task, but rather to maintain good generalization to a large number of unseen few-shot tasks from previous domains without catastrophic forgetting. This setting is common and fits well in dynamic real-world scenarios such as recommendation systems and dialogue training systems.
The domain shift arising in this setting during meta learning poses new challenges to existing continual-learning techniques, mainly due to the high variability underlying a large number of dynamically formed few-shot tasks, which makes it infeasible for a model to explicitly remember each task. In our setting, a model is expected to remember patterns generic to a domain while neglecting the noise and variance of a specific few-shot task. This ability, termed remember to generalize, allows a model to capture general patterns of a domain that repeatedly occur in batches of tasks while avoiding being too sensitive to any specific few-shot task.
In this paper, we propose to address the aforementioned challenges by designing a dynamic learning-rate adaptation scheme for learning to remember previous domains. These techniques jointly consider gradients from multiple few-shot tasks to filter out task variance and remember only patterns that are generic in each domain. Our main idea is to meta learn both the model parameters and the learning rates by backpropagating a domain loss and a memory loss to adaptively update the model parameters and the learning rates, respectively. Specifically, our mechanism keeps a small memory of tasks from previous domains, which are then used to guide the dynamic and adaptive learning behaviors of different portions of the network parameters. The proposed mechanism is versatile and applicable to both the metric-based prototypical network (Snell et al., 2017) and the gradient-based ANIL (Raghu et al., 2020) meta-learning model.
Our contributions are summarized as follows:
• We propose a challenging benchmark that requires a meta learning model to sequentially learn on a sequence of domains, enduring domain shift without much forgetting of previous domains.
• We extend meta learning models with existing dynamic learning rate modeling techniques. This mitigates catastrophic forgetting by meta learning both model parameters and learning rates to dynamically control the network update process, and can be seamlessly integrated into both metric-based and gradient-based meta learning approaches.
• We conduct extensive experiments on multiple public datasets under different sequential domain few-shot learning scenarios. We further test the functionality of the dynamic learning-rate update mechanism for both metric-based and gradient-based meta-learning approaches. Comparisons are made against a wide range of representative continual-learning techniques and models. Results demonstrate that our method outperforms strong baselines by a large margin." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 META LEARNING", "text": "Meta learning (Schmidhuber, 1993), a.k.a. learning to learn, aims to rapidly adapt to a new task by reusing previous experience through training on a large number of tasks. 
Meta learning can be roughly classified into the following categories: 1) metric/embedding-based approaches such as (Vinyals et al., 2016; Snell et al., 2017; Edwards & Storkey, 2017), which map input data into embedding (feature) spaces with decisions made based on some distance metric in the feature space; 2) black-box learning methods such as (Andrychowicz et al., 2016; Graves et al., 2014; Mishra et al., 2018); 3) optimization-based methods such as (Finn et al., 2017; Ravi & Larochelle, 2017; Li et al., 2017; Antoniou & Storkey, 2019), which improve gradient-based optimization algorithms or learn to initialize network parameters; 4) Bayesian meta-learning methods such as (Ravi & Beatson, 2019; Finn et al., 2018; Yoon et al., 2018b; Grant et al., 2018; Wang et al., 2020b), which are used either to interpret and understand MAML (Grant et al., 2018) or to model the uncertainty of meta learning models (Yoon et al., 2018b; Finn et al., 2018; Wang et al., 2020b); and 5) memory-based meta learning (Santoro et al., 2016; Munkhdalai & Yu, 2017; Mikulik et al., 2020), which applies an additional memory component for meta learning. Online meta learning (Finn et al., 2019) is also related to our work. It focuses on forward transfer, i.e., achieving better performance on future tasks, and uses all the data from previous tasks for meta learning, while our setting is significantly different as we focus on mitigating catastrophic forgetting with only very limited access to previous domains.
Dynamically updating the learning rates of networks is not new and has been explored in several contexts. Meta-SGD (Li et al., 2017) learns per-parameter learning rates for meta learning to improve flexibility and performance. (Gupta et al., 2020) use dynamic learning rates to mitigate forgetting in online continual learning. T-net (Lee & Choi, 2018) learns a metric in activation space, which informs the update direction and step size for task-specific learning. Flennerhag et al. (2020) propose warped gradient descent, which meta-learns an efficiently parameterised preconditioning matrix to dynamically update the network. Our work extends dynamic learning rate techniques to the sequential domain meta learning setting to mitigate catastrophic forgetting." }, { "heading": "2.2 CONTINUAL LEARNING", "text": "Continual learning tackles the problem of maintaining knowledge when input distribution shift happens in sequentially arriving tasks. There are different methods to address this problem, including 1) retaining memory for future replay (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Chaudhry et al., 2019b); 2) designing tailored network architectures (Rusu et al., 2016; Fernando et al., 2017; Yoon et al., 2018a); 3) performing proper regularization during parameter updates (Kirkpatrick et al., 2017; Zenke et al., 2017; von Oswald et al., 2019); and 4) introducing Bayesian methods for model parameter inference (Nguyen et al., 2018; Ebrahimi et al., 2020a). Specifically, methods based on memory replay store representative samples from old tasks, and rehearsal is performed during training (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019). Recent research also utilizes generative models to memorize previously seen data (Lesort et al., 2019). 
Representatives of architecture-based methods include Progressive Neural Networks (Rusu et al., 2016), PathNet (Fernando et al., 2017), Dynamically Expandable Networks (Yoon et al., 2018a), Hard Attention Mask (HAT) (Serrà et al., 2018) and PackNet (Mallya & Lazebnik, 2017), among others. These models explicitly modify the network topology to preserve previous knowledge. The classic architecture-based approaches proposed in (Serrà et al., 2018) and (Mallya & Lazebnik, 2017) do not fit our setting, as they attempt to fully remember each historic task. Progressive Neural Networks (Rusu et al., 2016) guarantee zero forgetting, but at the cost of growing network architectures and rapidly increasing numbers of parameters, which is unaffordable in memory-constrained cases. Regularization-based methods constrain the updated parameters to avoid drastic changes to previously learned tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; von Oswald et al., 2019; Ebrahimi et al., 2020c). They can restrict the capacity for meta-learning on new domains and thus hurt performance on a new domain. Bayesian methods model parameters in a probabilistic way, and parameters are then updated either based on their posterior distributions (Nguyen et al., 2018) or on their uncertainty (Ebrahimi et al., 2020a). However, in the context of meta-learning, the uncertainty or posterior estimation could be highly inaccurate due to the small-data setting of each task, hindering performance. Recently, some works use meta learning to improve continual learning; for example, (Javed & White, 2019) propose to learn versatile representations by explicitly training towards minimizing forgetting." }, { "heading": "2.3 MULTI-DOMAIN META LEARNING", "text": "As a new research direction, multi-domain meta learning aims to achieve good generalization across multiple domains. (Triantafillou et al., 2020) release a dataset containing few-shot tasks from multiple domains. (Tseng et al., 2020) propose to use transformation layers to associate feature distributions across different domains. (Vuorio et al., 2019) propose a tailored initialization process of model parameters based on task embeddings to enable proper functionality in multiple domains. It is worth noting that the aforementioned multi-domain few-shot learning methods assume data from all domains are jointly available during training. We consider a more challenging setting where domain shift comes sequentially, so a model needs to remember knowledge from previous domains." }, { "heading": "2.4 INCREMENTAL FEW-SHOT LEARNING", "text": "Incremental few-shot learning (Gidaris & Komodakis, 2018; Ren et al., 2019; Yoon et al., 2020) aims to handle new categories with limited resources while preserving knowledge of old categories. The task requires building a generalized model while preventing catastrophic forgetting, with an implicit assumption of unlimited access to the base categories. This paper, by contrast, focuses on the case where only very limited access to previous domains is available, and generalization to unseen categories in previous domains is required. (Gidaris & Komodakis, 2018) introduce a novel cosine similarity function to relate incremental classes with base classes. (Ren et al., 2019) propose an attention attractor network to regularize the learning of new categories. (Yoon et al., 2020) propose to integrate two sets of features, one from a pretrained base module and the other from a meta-trained module, for a smooth adaptation to novel categories. 
For incremental few-shot learning, the incoming new classes are assumed to be in the same domain as the base classes, which differs from the changing domains in our setting." }, { "heading": "2.5 CONTINUAL FEW-SHOT LEARNING", "text": "Continual few-shot learning is a relatively new research topic. (Antoniou et al., 2020) propose a general framework for various continual few-shot learning settings, along with benchmarks for proper performance evaluation. (Caccia et al., 2020) introduce Continual-MAML for online fast adaptation to new tasks while accumulating knowledge of old tasks, with experiments conducted on three domains; our setting differs from theirs because they assume previous tasks can be revisited, whereas we assume only very limited access to previous domains. (Jerfel et al., 2019) consider task distribution shift within a single domain and propose a Dirichlet process mixture of hierarchical Bayesian models (DPM) to tackle the problem. Their ideas are interesting, but they assume a different set of parameters for each cluster, which is not efficient in memory-limited settings." }, { "heading": "3 THE PROPOSED METHOD", "text": "Problem setup: Our goal is to perform meta learning on a sequence of N domains, denoted as D_1, D_2, . . . , D_N. Each domain D_i consists of data divided into meta-training, meta-validation and meta-testing sets, denoted as {D^tr_i, D^val_i, D^test_i}. Only D^tr_i is used for training a meta-learning model; D^val_i is used for validation; and D^test_i is used for meta testing. Each task T ∈ D_i consists of K data examples, {(x_k, y_k)}_{k=1}^K. A task is divided into T^tr and T^test. Our goal is to sequentially meta-learn a model f_θ that maps inputs x to y for each arriving domain while not forgetting too much of all previous domains. To this end, our framework keeps a very limited memory M_d, which is used to store a small number of training tasks from each previous domain D_d. In domain D_q, the parameters are adopted from those of the previous domain, θ_{q−1}. The data available for training are thus D_q ∪ (⋃_{d=1}^{q−1} M_d). Our goal is to update the parameters to θ_q such that the model achieves good performance on domain D_q by transferring useful knowledge from previous domains without degrading performance on those domains. The final performance is evaluated on the meta-test sets of the current and all previous domains.
Basic idea and model architecture: For effective learning without catastrophic forgetting of previous domains, our basic ideas are 1) minimizing a loss on the current domain to achieve fast adaptation and good generalization, and 2) adjusting the adaptation by optimizing the learning rates to force the model to remember previous domains. In our implementation, we consider a convolutional neural network (CNN) based classifier, which evolves over the sequence of domains via meta-learning adaptation. We assume all domains share the same CNN-based structure for feature extraction, while the model should also have the flexibility to define domain-specific components. As a result, we divide the CNN classifier into two parts, corresponding to bottom convolutional layers for feature extraction and top convolutional layers for classification. The bottom layers of the CNN are defined as domain-shared layers and serve as a common feature extractor for all domains. The top layers of the CNN are defined as domain-specific layers, implemented as N parallel sub-CNNs, each accessible by only one domain. The network structure is illustrated in Figure 1. 
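To complement Figure 1, a minimal PyTorch sketch of this two-part design is given below; the module names, depth, and channel sizes are our own illustrative assumptions, not the authors' released code. A stack of shared convolutional blocks plays the role of θ^S, and N parallel domain-specific heads play the role of θ^D_{1:N}, of which only the head matching the current domain index is used in the forward pass.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 conv -> batch norm -> ReLU -> 2x2 max-pool, a common few-shot backbone block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class SequentialDomainNet(nn.Module):
    # Domain-shared bottom layers (theta^S) + N parallel domain-specific top layers (theta^D).
    def __init__(self, num_domains, in_ch=3, hidden=64, shared_depth=3):
        super().__init__()
        chans = [in_ch] + [hidden] * shared_depth
        self.shared = nn.Sequential(
            *[conv_block(chans[i], chans[i + 1]) for i in range(shared_depth)]
        )
        self.domain_heads = nn.ModuleList(
            [conv_block(hidden, hidden) for _ in range(num_domains)]
        )

    def forward(self, x, domain_idx):
        feat = self.shared(x)                       # common feature extractor for all domains
        feat = self.domain_heads[domain_idx](feat)  # only the current domain's head is used
        return feat.flatten(1)                      # embedding used, e.g., for prototypes

# Example: embed a batch of 84x84 images for domain q = 2 out of N = 5.
model = SequentialDomainNet(num_domains=5)
emb = model(torch.randn(8, 3, 84, 84), domain_idx=2)

Keeping the heads in an nn.ModuleList means each θ^D_q receives gradients only while its own domain is being trained, matching the description above.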
For more flexible adaptation, we group the parameters (the convolutional filters) in each domain-shared layer into several blocks and associate each block with one learning rate. In other words, different filters will be updated with different learning rates, which are also adaptively optimized to enforce the memorization of previous domains (specified in the following sections). This structure is also reflected in Figure 1. Note that the learning rates of the domain-specific layers are not optimized, as the layers of different domains are independent of each other, making the adaptive learning-rate scheme used in the domain-shared part inapplicable.
Problem formulation: Similar to (Javed & White, 2019), we treat the domain-shared and domain-specific parameters separately. The meta parameters θ^S (the parameters of the bottom convolutional layers) and their corresponding per-block learning rates λ^S are shared across all domains. We use θ^D_{1:T} to denote the domain-specific parameters and λ^D_{1:T} the domain-specific learning rates of domains 1, . . . , T, where each element θ^D_q and λ^D_q represents the parameters specific to domain D_q and their corresponding learning rates. All the learnable parameters for domain D_q are defined as θ_q = {θ^S, θ^D_q} and the learning rates as λ_q = {λ^S, λ^D_q}. λ^S is dynamically updated and λ^D_{1:T} is fixed during the meta-training process. During meta testing, neither λ^S nor λ^D_{1:T} is used or updated. We use θ_{q,j} to denote all the learnable parameters at the j-th iteration and λ^S_{q,j} to denote all learnable learning rates at the j-th iteration when meta training in domain D_q. Each element of λ^S_{q,j} is the learning rate of the parameters inside one block, each of which consists of several filters. Note that λ^S and λ^S_{q,j} are the same set of learning rates; the latter simply denotes their values at one specific iteration. M = ⋃_{d=1}^{q−1} M_d denotes all the memory tasks from the domains before domain D_q. L(T, θ(λ)) is the task-specific loss function with model parameters θ that depend on the learning rates λ and the data of task T. J_{D^tr_q}(θ_{q,j+k}(λ^S_{q,j})) (Equation 5) is the expected loss with respect to the training task distribution of domain D_q and is used for updating the model parameters. J_M(θ_{q,j+K}(λ^S_{q,j})) (Equation 2) is the expected loss with respect to the memory task distribution and is used for adjusting the learning rates. The detailed definitions are as follows:
min_{λ^S_{q,j}} J_M(θ_{q,j+K}(λ^S_{q,j}))   (1)
J_M(θ_{q,j+K}(λ^S_{q,j})) = E_{T_i ∈ M} L(T_i, θ_{q,j+K}(λ^S_{q,j}))   (2)
θ_{q,j+k+1}(λ^S_{q,j}) = φ(θ_{q,j+k}, λ_{q,j}; D^tr_q),  for k = 0, 1, 2, . . . , K − 1   (3)
φ(θ_{q,j+k}, λ_{q,j}; D^tr_q) = θ_{q,j+k} − λ_{q,j} ∇_{θ_q} J_{D^tr_q}(θ_{q,j+k}(λ_{q,j}))   (4)
J_{D^tr_q}(θ_{q,j+k}(λ^S_{q,j})) = E_{T_i ∈ D^tr_q} L(T_i, θ_{q,j+k}(λ^S_{q,j}))   (5)
Here θ_{q,j+k+1}(λ^S_{q,j}) is a function of λ^S_{q,j} rather than λ_{q,j}, since λ^D_q is fixed during training on D_q for the reasons stated above. K is the number of iterations in domain D_q after the current iteration; we use K = 1 for simplicity. φ(·, ·) denotes a gradient adaptation function. Similarly, for the learning rate adaptation in domain D_q, based on the objective in Equation 2, an adaptation process is defined as:
λ^S_{q,j+1} = λ^S_{q,j} − η ∇_{λ^S} J_M(θ_{q,j+K}(λ^S_{q,j})),   (6)
where η is the hyper learning rate of the learning-rate update rule.
To sum up, our method is a bi-level adaptation process: the lower level optimizes on the current domain, while the upper level optimizes on previous domains to update the block-wise learning rates. 
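The bi-level update can be made concrete with a small PyTorch sketch; this is our own illustration with toy stand-in losses, and the helper names and tensor shapes are assumptions rather than the released code. The inner step (Equations 3-5) adapts θ on the current domain with one learnable learning rate per filter block, keeping the autograd graph so that the adapted parameters remain a differentiable function of the learning rates; the outer step (Equation 6) then descends the memory loss with respect to those learning rates.

import torch

NUM_BLOCKS = 4

def blockwise_step(weight, grad, block_lrs):
    # theta <- theta - lambda * grad, with one learnable learning rate per filter block (Eqs. 3-4).
    w_blocks = torch.chunk(weight, NUM_BLOCKS, dim=0)
    g_blocks = torch.chunk(grad, NUM_BLOCKS, dim=0)
    return torch.cat([wb - lr * gb for wb, gb, lr in zip(w_blocks, g_blocks, block_lrs)], dim=0)

# Toy stand-in for one shared layer's filters and its per-block learning rates.
theta = torch.randn(8, 10, requires_grad=True)
block_lrs = [torch.tensor(0.01, requires_grad=True) for _ in range(NUM_BLOCKS)]

def domain_loss(w):   # stand-in for J_{D_q^tr} on a batch of current-domain tasks
    return (w.sum(dim=1) - 1.0).pow(2).mean()

def memory_loss(w):   # stand-in for J_M evaluated on tasks stored in the memory
    return (w ** 2).mean()

# Inner step (Eqs. 3-5): adapt theta on the current domain; create_graph=True keeps the
# updated parameters differentiable with respect to the block learning rates.
g = torch.autograd.grad(domain_loss(theta), theta, create_graph=True)[0]
theta_next = blockwise_step(theta, g, block_lrs)

# Outer step (Eq. 6): lambda <- lambda - eta * d J_M(theta_next) / d lambda.
eta = 1e-4
lr_grads = torch.autograd.grad(memory_loss(theta_next), block_lrs)
with torch.no_grad():
    for lr, lg in zip(block_lrs, lr_grads):
        lr -= eta * lg.clamp(-10.0, 10.0)  # learning-rate gradients clipped to [-10, 10]

# Commit the parameter update itself (Algorithm 1, line 7).
theta = theta_next.detach().requires_grad_(True)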
Detailed algorithms describing the meta-training and meta-testing phases are given in Algorithm 1 and Algorithm 2 (Appendix A), respectively.
Algorithm 1 Meta training.
Require: A sequence of training domain data D_1, D_2, . . . , D_N; learning rates λ^S and λ^D_{1:T} initialized to λ_0 and model parameters θ^S initialized to θ_0; λ^D_{1:T} is fixed during the training process.
Require: Maximum number of iterations M for each domain.
1: M = {}
2: for q = 1 to N do
3:   Randomly initialize θ^D_q
4:   for j = 1 to M do
5:     Sample a batch of tasks T from P(D_q), the distribution over tasks in domain D_q
6:     Evaluate J_{D^tr_q}(θ_{q,j+1}(λ^S_{q,j})) = E_{T ∈ D^tr_q} L(T, θ_{q,j+1}(λ^S_{q,j}))
7:     Perform adaptation: θ_{q,j+1} = θ_{q,j} − λ_{q,j} ∇_{θ_q} J_{D^tr_q}(θ_{q,j}, λ_{q,j})
8:     if q = 1 then
9:       λ^S_{q,j} = λ_0
10:    else
11:      λ^S_{q,j+1} = λ^S_{q,j} − η ∇_{λ^S} J_M(θ_{q,j}, λ^S_{q,j})
12:    end if
13:  end for
14:  M_q = {a small batch of tasks sampled from D_q}
15:  M = M ∪ M_q
16: end for
Domain-level meta loss computation: Our model contains a component that trains on previous tasks stored in the memory to optimize the learning rates. In practice, only a minibatch of data is used at each iteration. Note that each task is in the format of a training-validation pair from one domain. If one simply uses one task from one domain at each iteration, the model could overfit to this domain and may not generalize well to other domains. To alleviate this issue, we propose to compute the meta loss at two different levels, called the intra-domain and inter-domain meta losses. The intra-domain meta loss is defined over tasks in the same domain to ensure good generalization within one domain. The inter-domain meta loss, by contrast, is defined over tasks sampled across all available domains in the memory to encourage model adaptation across different domains. When calculating the inter-domain loss with one task from the memory, at each iteration we augment the data with examples from tasks of other domains. In this way, the model can be trained to better generalize to other domains. Figure 2 illustrates our proposed method. More details can be found in Appendix A." }, { "heading": "4 EXPERIMENT", "text": "We compare our method against several related strong baselines on a sequence of five domain datasets, which exhibit large domain shift and thus pose new challenges for existing methods. We evaluate the effectiveness of our method in both gradient-based and metric-based meta-learning frameworks. We also conduct ablation studies to verify the effectiveness of each component of our model. Our implementation is based on Torchmeta (Deleu et al., 2019). Results are reported in terms of mean and standard deviation over three independent runs.
Implementation details: For prototypical-network-based (Protonet) (Snell et al., 2017) baselines, we use a four-layer CNN with 64 filters of kernel size 3 as the shared feature extractor for all domains. The last layer is defined as domain-specific, with 64 filters. Following existing works, no fully connected layers are used. For gradient-based meta-learning algorithms (Raghu et al., 2020), following (Antoniou et al., 2019), we use a three-layer CNN with 48 filters of kernel size 3 as the shared feature extractor, and one convolutional layer with 48 filters plus one fully-connected layer for domain-specific learning. Such architectures for the Prototypical network and ANIL are widely adopted in existing works on continual learning to mitigate catastrophic forgetting (Serrà et al., 2018; Ebrahimi et al., 2020b). 750 evaluation tasks from each domain are used for meta testing. 
The hyper learning rate η is set to 1e-4 for Protonet and 5e-5 for ANIL. The gradients with respect to all the learnable learning rates are clipped to [-10, 10] for both Protonet and ANIL. Unless otherwise specified, the number of learning blocks for each CNN layer is 4, the number of memory tasks for each previous domain is 6, and the arrival order of the domain sequence is MiniImagenet, CIFARFS, Omniglot, Aircraft and CUB. More implementation details are given in Appendix A.
Datasets: The five datasets are MiniImagenet (Vinyals et al., 2016), CIFARFS (Bertinetto et al., 2019), Omniglot (Lake et al., 2011), CUB (Welinder et al., 2010) and AIRCRAFT (Maji et al., 2013). All images are resized to 84×84. We follow the splits in torchmeta (https://github.com/tristandeleu/pytorch-meta) for the MiniImagenet, CIFARFS, Omniglot and CUB datasets. For AIRCRAFT, we follow the split in (Vuorio et al., 2019). We compare all the baselines in the 5-way-1-shot and 5-way-5-shot learning settings.
Evaluation metrics: Following existing work on continual learning, we use the evaluation metrics ACC and BWT (Ebrahimi et al., 2020b). ACC is defined as the average meta-testing classification accuracy across all domains (the higher the better), and BWT measures the average forgetting on all previous domains evaluated at the end of the sequential meta learning task (the lower the better). Formally, ACC and BWT are defined as ACC = (1/N) Σ_{j=1}^{N} a_{N,j} and BWT = (1/(N−1)) Σ_{i=1}^{N−1} (a_{N,i} − a_{i,i}), where a_{n,i} is the meta-testing accuracy on domain i after meta training on domain n, and a_{N,i} − a_{i,i} measures the forgetting score on domain i after meta training on domain N.
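As a concrete reference for these definitions, the short NumPy snippet below (our own illustration) computes ACC and BWT from the matrix of accuracies a_{n,i}.

import numpy as np

def acc_bwt(a):
    # a[n, i] = meta-testing accuracy on domain i after meta training on domain n (0-indexed).
    N = a.shape[0]
    acc = a[N - 1].mean()                                         # average final accuracy
    bwt = np.mean([a[N - 1, i] - a[i, i] for i in range(N - 1)])  # average a_{N,i} - a_{i,i}
    return acc, bwt

# Toy example with N = 3 domains, showing slight forgetting on the first two domains.
a = np.array([[0.70, 0.00, 0.00],
              [0.65, 0.72, 0.00],
              [0.63, 0.70, 0.74]])
print(acc_bwt(a))  # ACC = 0.69, BWT = -0.045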
Baselines: For meta-learning methods, we compare our method with ANIL (Raghu et al., 2020) and the Prototypical Network (Snell et al., 2017); the former is a simplified version of MAML (Finn et al., 2017) with the inner loop conducted only on the final layer. For continual learning methods, we compare our method with related strong baselines: regularization-based methods such as Elastic Weight Consolidation (Kirkpatrick et al., 2017), architecture-based methods (Hard Attention Mask (HAT) (Serrà et al., 2018), with the publicly available code from the authors), Bayesian methods (Ebrahimi et al., 2020b) (with the authors' implementation), and memory-based methods (including A-GEM (Chaudhry et al., 2019a), ER-Ringbuffer (Chaudhry et al., 2019b) and Meta Experience Replay (MER) (Riemer et al., 2019)). We also learn all the domains jointly in a multi-domain meta-learning setting, whose performance can be considered an upper bound for our sequential learning setting. We additionally provide a simple baseline that trains sequentially without using any external memory, denoted "Sequential", whose performance serves as a reference for the catastrophic forgetting in our setting. It is worth noting that the above baselines have previously been used only for traditional continual learning; we adapt them to our setting to demonstrate the ineffectiveness of these existing techniques when generalized to a meta-learning setting.
Comparisons to baselines: Table 1 shows the comparisons to the baselines we constructed for this problem setting in terms of 5-way 5-shot accuracy; results for 5-way 1-shot classification are given in Appendix A. In the table, 'N/A' means the BWT is not available since the method does not learn sequentially. Our method significantly outperforms the baselines. In particular, with the Protonet-based model, the performance of our model almost matches that of the joint-training model, indicating excellent memorization of past domains. From the results of the "Sequential" baseline, we see that there is a significant performance drop if no external mechanism is introduced to prevent forgetting. Among the baselines, ER-Ringbuffer seems to perform worst overall. We believe this is because of the repeatedly trained memory, which leads to overfitting and makes recovery of previous knowledge difficult. Memory data in our method are not directly fitted by the network but are only used to guide the training of the network on current-domain data, thus avoiding overfitting. The inferior performance of UCB might be because the uncertainty estimation with limited data is inaccurate in the meta-learning setting. HAT is also worse because it simply remembers historic tasks but not domain-general information. Furthermore, A-GEM restricts gradient updates on new examples to the same direction as the gradients on memory tasks, potentially leading to wrong update directions. In MER, the fast weights of the tasks in memory from different domains can vary significantly, making the Reptile updates oscillate and become unstable, which hinders its performance.
Sensitivity to domain ordering: We study the sensitivity of all methods to the order of domain arrivals. We use a different domain-sequence order: CIFARFS, MiniImagenet, Aircraft, CUB and Omniglot. Results for 5-way-5-shot learning are summarized in Table 2; Appendix A details the results for the 5-way-1-shot setting. It can be seen that although there are some performance differences compared to the previous order, our method still outperforms the baselines.
We also use another domain-sequence order: Omniglot, Aircraft, CUB, CIFARFS and MiniImagenet. Since Omniglot is used as the first domain, the model cannot learn good representations for the following domains, which poses more challenges than the other two domain sequences. Table 3 shows the results, where the baseline Protonet-fixfirst freezes the model parameters after finishing training on the first domain; its small BWT is due to the variance of randomly sampled testing tasks at different times. In this case, the performance of all the baselines and of our method drops significantly compared to the sequences with CIFARFS or MiniImagenet as the first domain, indicating that this domain sequence is significantly more challenging than the other two.
Effect of domain-level meta loss: This set of experiments verifies the effectiveness of our proposed domain-level meta loss from Section 3. We compare the domain-level meta loss with the standard intra-domain meta loss, which only considers task data from one domain. The results are shown in Table 4. It is clear that our proposed domain-level meta loss consistently outperforms the intra-domain meta loss by obtaining better accuracy.
Effect of the number of learning blocks: Our method assigns a different learning rate to each block of the network parameters. A finer-grained division into learning blocks allows more flexible control, but leads to higher computation cost. In this experiment, we investigate the impact of the number of blocks on model performance. Specifically, we implement each layer with 2, 4 and 8 blocks. Results are shown in Table 5. 
It can be seen that when the block number is relatively large (4 in our case), the accuracies are relatively stable. Thus we simply set the block number to 4 in practice.
Effect of a small memory size: We compare performance differences when the number of tasks stored in the memory for each domain is small. Table 6 shows the results with 1 and 6 tasks from each domain. It is interesting to see that with a small memory storing only one task from each domain, the performance of all the memory-based baselines drops significantly, while our method maintains relatively stable performance. This might be because, with a smaller memory size, all memory-based baselines either overfit to the small memory or are impacted by unstable gradient direction updates, neither of which happens in our method." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a challenging benchmark that requires a meta learning model to sequentially meta learn on a sequence of domains with domain shift, but without much forgetting of previous domains. We then extend existing dynamic learning rate techniques to existing meta learning models to meta learn the learning rates of blocks of meta parameters, which can be seamlessly integrated into both metric-based and gradient-based meta learning approaches to mitigate catastrophic forgetting. The adaptation of the parameters maintains generalization performance on the current domain, while the adaptation of the learning rates serves to remember knowledge of past domains. Our proposed method significantly outperforms existing continual-learning techniques adapted to our problem, achieving substantially better performance than strong baselines. There are several possible directions to be pursued in future work, such as the case of imbalanced classes in each task, scaling to longer domain sequences, and fewer training tasks in each arriving domain." }, { "heading": "A APPENDIX", "text": "Joint-training details: We describe the details of ANIL and the Prototypical Network, the two base models tested in our framework.
For ANIL: The meta batch size for each domain is set to 3. For 1-shot and 5-shot learning, the number of training data points per task is set to 1 and 5, respectively, and the number of query data points is 15 in both cases. The number of inner-loop update steps is 5 with a learning rate of 1e-2. The outer-loop learning rate is 1e-3, and the number of meta training iterations is 20000 with the Adam optimizer (Kingma & Ba, 2014).
For Prototypical Network: The meta batch size for each domain is 3, and the number of meta training iterations is 40000. For 1-shot learning, the number of support data points per task is 1 and the number of query data points per task is 10. For 5-shot learning, the number of support data points per task is 5 and the number of query data points per task is 10. The meta training loss is optimized by Adam (Kingma & Ba, 2014), and the learning rate is 1e-3.
Sequential domain training details
For ANIL and Prototypical Network, we adopt the following hyperparameters.
• ANIL (Raghu et al., 2020): The meta batch size for each domain is 3. The total number of meta training iterations is 40000, with 8000 meta training iterations per domain. For 1-shot and 5-shot learning, the number of training data points per task is set to 1 and 5, respectively, and the number of query data points is 15 in both cases. 
The number of inner-loop update steps is 5, the inner-loop learning rate is 1e-2, the outer-loop learning rate is 1e-3, and the outer-loop meta training loss is optimized by Adam (Kingma & Ba, 2014).
• Prototypical Network (Snell et al., 2017): The meta batch size for each domain is 3, and the number of meta training iterations is 40000, with 8000 meta training iterations per domain. For 1-shot and 5-shot learning, the number of training data points per task is set to 1 and 5, respectively, and the number of query data points is 10 in both cases. The meta training loss is optimized by Adam (Kingma & Ba, 2014), and the learning rate is 1e-3.
For the continual learning methods, we adopt the following hyperparameters.
• HAT details (Serrà et al., 2018): This baseline is adapted from the authors' implementation†. The output h_l of the units in layer l is element-wise multiplied as h'_l = a^t_l ⊙ h_l, where a^t_l is the annealed version of the single-layer gated domain embedding e^t_l, defined as
a^t_l = σ(s e^t_l).   (7)
The other hyperparameters follow (Serrà et al., 2018); the stability (scaling) parameter s is annealed according to
s = 1/s_max + (s_max − 1/s_max) · (b − 1)/(B − 1),   (8)
where s_max = 400, b ∈ {1, . . . , B} is the batch index, and B is the total number of batches.
• EWC details (Kirkpatrick et al., 2017): We vary the weight penalty λ that prevents drastic changes in the parameters over λ ∈ {5, 1×10^1, 5×10^1, 1×10^2, 5×10^2, 1×10^3, 5×10^3} and select the best as our baseline.
• A-GEM details (Chaudhry et al., 2019a): The implementation is based on‡.
• MER details (Riemer et al., 2019): The implementation is based on§. The within-batch meta-learning rate β is set to 0.03, the across-batch meta-learning rate γ to 1.0, and the number of batches per example to 5.
• UCB details (Ebrahimi et al., 2020a): Following (Ebrahimi et al., 2020a), we scale the learning rates of µ and ρ for each parameter proportionally to its importance Ω, measured by its variance, to reduce changes in important parameters, and we adapt the authors' implementation¶. We use 10 Monte Carlo samples at each iteration, as in (Ebrahimi et al., 2020a).
• ER-Ringbuffer details (Chaudhry et al., 2019b): We use a fixed amount of memory from each previous domain to train jointly with the current domain.
All other meta-learning-related settings are the same as those in the sequential domain meta training described above.
Domain-level meta loss: In this part, we elaborate on the computation of the proposed domain-level meta loss. To alleviate the overfitting issue on a specific domain during the learning process, we propose to compute the meta loss at two separate levels, the intra-domain and inter-domain meta losses. The intra-domain meta loss is defined over tasks in the same domain to ensure good generalization within one domain. The inter-domain meta loss, by contrast, is defined over tasks sampled across all available domains in the memory to encourage model adaptation across different domains. When calculating the inter-domain loss with one task from the memory, at each iteration we augment the data with examples from tasks of other domains. In this way, the model can be trained to better generalize to other domains.
†https://github.com/joansj/hat ‡https://github.com/GMvandeVen/continual-learning §https://github.com/mattriemer/MER ¶https://github.com/SaynaEbrahimi/UCB
Suppose P -way K-shot learning is performed in both domain p and domain q. At each iteration, the two tasks from both domains are combined together, and the problem becomes 2P -way K-shot learning. That is, the network is used for 2P class classification problem for images in both domain p and domain q\nDefine the prototype for class u in a domain as\ncu = 1 |Su| ∑\n(xp,yp)∈Su fθ(xp), u ∈ {0, 1, · · · , 2P − 1} . (9)\nThis will encourage the network to differentiate the image classes across two different domains and help the network generalize across different domains.\nFor a given query data x, the goal is to model\npθ(y = u|x) = exp(−d(fθ(x), cu)) 2P−1∑ u′=0 exp(−d(fθ(x), cu′)) (10)\nThe memory loss is defined as minimizing the log probability J (θ) = − log pθ(y = u|x), where d is distance function.\nFor MAML (ANIL), let Dvalq ⋃ Dvalp denote the validation data from both domain q and p. The loss is defined as\narg min θ\nLmeta(θ,φ∗(θ),Dvalq ⋃ Dvalp ) (11)\nφ∗(θ) = arg min φ Ltask(θ,φ,Dtrq ) (12)\nBoth mechanisms for the prototypical network and MAML (ANIL) can avoid overfitting and generalize across different domains.\nOn learning-rate optimization Learning rate is one of the most important and effective hyperparameter in standard neural-network optimization (Baydin et al., 2018; Franceschi et al., 2017; Donini et al., 2020) and continual learning (Ebrahimi et al., 2020b). Different from standard supervised learning whose goal is to minimize the generalization error, continual learning aims to avoid catastrophic forgetting while maintaining good performance on current domain. Inspired by standard gradient-based hyperparameter optimization algorithms such as (Baydin et al., 2018; Franceschi et al., 2017; Donini et al., 2020), we develop a novel automatic and adaptive learningrate update scheme for newly arrival domains. Our goal is achieved by updating the learning rate for each parameter block, which is designed to adaptively update parts of the network to fit future domains while remembering previous domains.\nMore Results\nCompare to Baselines\n5-way 1-shot learning Table 7 show the comparisons to the baselines we constructed for this problem setting in terms of 5-way 1-shot accuracy. In the table, ’N/A’ means the BWT is not available since the method does not learn sequentially. Our method significantly outperform baselines. Especially, similar to the performance in 5-way 5-shot learning in Table 1, in the Protonet-based model, the performance of our model almost matches that of the joint-training model, indicating excellent memorization of past domains.\nCompare to Baselines with different domain ordering\n5-way 1-shot learning We study the sensitivity of all methods on the order of domain arrivals in the setting of 5-way 1-shot learning. We use a different domain-sequence order as: CIFARFS, MiniImagenet, Aircraft, CUB and Omniglot. Results are summarized in Table 8. Similar to the results in Table 2 , it can be seen that although there are some performance differences compared to those of the previous order, our method consistently outperforms the baselines.\nMeta testing of current and all previous domains Algorithm 2 shows the algorithm of meta testing the model on current and all the previous domains.\nAlgorithm 2 Meta testing. Require: A sequence of training domain data D1,D2, . . . ,DN ; Require: Learned meta parameter θ of the models after meta training on domain N . 
On learning-rate optimization: The learning rate is one of the most important and effective hyperparameters in standard neural-network optimization (Baydin et al., 2018; Franceschi et al., 2017; Donini et al., 2020) and in continual learning (Ebrahimi et al., 2020b). Different from standard supervised learning, whose goal is to minimize the generalization error, continual learning aims to avoid catastrophic forgetting while maintaining good performance on the current domain. Inspired by standard gradient-based hyperparameter optimization algorithms such as (Baydin et al., 2018; Franceschi et al., 2017; Donini et al., 2020), we develop a novel automatic and adaptive learning-rate update scheme for newly arriving domains. Our goal is achieved by updating the learning rate of each parameter block, which is designed to adaptively update parts of the network to fit future domains while remembering previous domains.
More Results
Comparison to baselines
5-way 1-shot learning: Table 7 shows the comparisons to the baselines we constructed for this problem setting in terms of 5-way 1-shot accuracy. In the table, 'N/A' means the BWT is not available since the method does not learn sequentially. Our method significantly outperforms the baselines. In particular, similar to the 5-way 5-shot results in Table 1, with the Protonet-based model, the performance of our model almost matches that of the joint-training model, indicating excellent memorization of past domains.
Comparison to baselines with different domain ordering
5-way 1-shot learning: We study the sensitivity of all methods to the order of domain arrivals in the 5-way 1-shot setting. We use a different domain-sequence order: CIFARFS, MiniImagenet, Aircraft, CUB and Omniglot. Results are summarized in Table 8. Similar to the results in Table 2, although there are some performance differences compared to the previous order, our method consistently outperforms the baselines.
Meta testing on the current and all previous domains: Algorithm 2 shows the procedure for meta testing the model on the current and all previous domains.
Algorithm 2 Meta testing.
Require: A sequence of domain data D_1, D_2, . . . , D_N.
Require: Learned meta parameters θ of the models after meta training on domain N.
1: for q = 1 to N do
2:   Sample T meta-testing tasks T^test_q from P(D_q), the distribution over tasks in domain D_q
3:   Evaluate the meta-testing accuracy and the measurements of forgetting for domain D_q using the meta-learned model f(T, θ^M_N), T ∈ T^test_q
4: end for
5: Evaluate the average meta-testing accuracy on all learned domains and the measurements of forgetting over all previous domains" } ]
2020
null
SP:e3330123a00c4e32e60792230c6a7a883e84aa98
[ "In this paper, the authors studied the problem of training neural networks under data poisoning, i.e., when a small fraction of the training data is corrupted by the adversary. They considered two data corruption settings, one allows both the data x and supervision y to be corrupted, which is called general corruption, and one with only supervision y corrupted. Their first algorithm, which removes the datapoints whose gradient norm is large when computing the average gradient, applies to the general supervision setting. They showed their algorithm has eps\\sqrt(d) error or eps*L error, which can be quite large for high-dimensional and deep neural nets learning settings. Their second algorithm applies to the setting where only supervision y is corrupted, and the algorithm works by removing the datapoints whose output layer gradient is large. Assuming the clean data has bounded gradient, and the dimension of y is p, their algorithm achieves error eps*sqrt(p). " ]
Training deep neural models in the presence of corrupted supervision is challenging, as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption, and that provides a unified framework for both classification and regression problems. Different from many existing approaches that quantify the quality of individual data points (e.g., loss values) and filter out data points accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the averaged gradient. Even when a corrupted data point fails to be excluded by the proposed algorithm, the data point will have very limited impact on the overall loss, compared with state-of-the-art methods that filter data points based on loss values. Extensive empirical results on multiple benchmark datasets have demonstrated the robustness of the proposed method under different types of corruption.
[]
[ { "authors": [ "Ahmad Ajalloeian", "Sebastian U Stich" ], "title": "Analysis of sgd with biased gradient estimators", "venue": "arXiv preprint arXiv:2008.00051,", "year": 2020 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Anima Anandkumar" ], "title": "signsgd: Compressed optimisation for non-convex problems", "venue": "arXiv preprint arXiv:1802.04434,", "year": 2018 }, { "authors": [ "Kush Bhatia", "Prateek Jain", "Purushottam Kar" ], "title": "Robust regression via hard thresholding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Kush Bhatia", "Prateek Jain", "Parameswaran Kamalaruban", "Purushottam Kar" ], "title": "Consistent robust regression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yu Cheng", "Ilias Diakonikolas", "Rong Ge", "Mahdi Soltanolkotabi" ], "title": "High-dimensional robust mean estimation via gradient descent", "venue": "arXiv preprint arXiv:2005.01378,", "year": 2020 }, { "authors": [ "Alexandre d’Aspremont" ], "title": "Smooth optimization with approximate gradient", "venue": "SIAM Journal on Optimization,", "year": 2008 }, { "authors": [ "Olivier Devolder", "François Glineur", "Yurii Nesterov" ], "title": "First-order methods of smooth convex optimization with inexact oracle", "venue": "Mathematical Programming,", "year": 2014 }, { "authors": [ "Ilias Diakonikolas", "Gautam Kamath", "Daniel M Kane", "Jerry Li", "Ankur Moitra", "Alistair Stewart" ], "title": "Robust estimators in high dimensions without the computational intractability", "venue": "IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2016 }, { "authors": [ "Ilias Diakonikolas", "Gautam Kamath", "Daniel Kane", "Jerry Li", "Jacob Steinhardt", "Alistair Stewart" ], "title": "Sever: A robust meta-algorithm for stochastic optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yihe Dong", "Samuel Hopkins", "Jerry Li" ], "title": "Quantum entropy scoring for fast robust mean estimation and improved outlier detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ian Goodfellow" ], "title": "Efficient per-example gradient computations", "venue": "arXiv preprint arXiv:1510.01799,", "year": 2015 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yifan Hu", "Siqi Zhang", "Xin Chen", "Niao He" ], "title": "Biased stochastic gradient descent for conditional stochastic optimization", "venue": "arXiv preprint arXiv:2002.10790,", "year": 2020 }, { "authors": [ "Peter J Huber" ], "title": "Robust estimation of a location parameter", "venue": "In Breakthroughs in statistics,", "year": 1992 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2018 
}, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Kevin A Lai", "Anup B Rao", "Santosh Vempala" ], "title": "Agnostic estimation of mean and covariance", "venue": "IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2016 }, { "authors": [ "Yuncheng Li", "Jianchao Yang", "Yale Song", "Liangliang Cao", "Jiebo Luo", "Li-Jia Li" ], "title": "Learning from noisy labels with distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Decoupling” when to update” from” how to update", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Aditya Krishna Menon", "Ankit Singh Rawat", "Sashank J Reddi", "Sanjiv Kumar" ], "title": "Can gradient clipping mitigate label noise", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jorge Nocedal", "Annick Sartenaer", "Ciyou Zhu" ], "title": "On the behavior of the gradient norm in the steepest descent method", "venue": "Computational Optimization and Applications,", "year": 2002 }, { "authors": [ "Giorgio Patrini", "Alessandro Rozza", "Aditya Krishna Menon", "Richard Nock", "Lizhen Qu" ], "title": "Making deep neural networks robust to label noise: A loss correction approach", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Adarsh Prasad", "Arun Sai Suggala", "Sivaraman Balakrishnan", "Pradeep Ravikumar" ], "title": "Robust estimation via robust gradient estimation", "venue": "arXiv preprint arXiv:1802.06485,", "year": 2018 }, { "authors": [ "Scott Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training deep neural networks on noisy labels with bootstrapping", "venue": "arXiv preprint arXiv:1412.6596,", "year": 2014 }, { "authors": [ "Mark Schmidt", "Nicolas L Roux", "Francis R Bach" ], "title": "Convergence rates of inexact proximal-gradient methods for convex optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Vatsal Shah", "Xiaoxia Wu", "Sujay Sanghavi" ], "title": "Choosing the sample with lowest loss makes sgd robust", "venue": "arXiv preprint arXiv:2001.03316,", "year": 2020 }, { "authors": [ "Yanyao Shen", "Sujay Sanghavi" ], "title": "Learning with bad training data via iterative trimmed loss minimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "John W Tukey" ], "title": "Mathematics and the picturing of data", "venue": "In Proceedings of the International Congress of Mathematicians, Vancouver,", "year": 1975 }, { "authors": [ "Kun Yi", "Jianxin Wu" ], "title": "Probabilistic end-to-end noise correction for learning with noisy labels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xingrui Yu", "Bo Han", "Jiangchao Yao", "Gang Niu", "Ivor W Tsang", "Masashi Sugiyama" ], "title": "How 
does disagreement help generalization against label corruption", "venue": null, "year": 1901 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Songzhu Zheng", "Pengxiang Wu", "Aman Goswami", "Mayank Goswami", "Dimitris Metaxas", "Chao Chen" ], "title": "Error-bounded correction of noisy labels", "venue": "In International Conference on Machine Learning,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Corrupted supervision is a common issue in real-world learning tasks, where the learning targets are not accurate due to various factors in the data collection process. In deep learning models, such corruptions are especially severe, whose degree-of-freedom makes them easily memorize corrected examples and susceptible to overfitting (Zhang et al., 2016).\nThere are extensive efforts to achieve robustness against corrupted supervisions. A natural approach to deal with corrupted supervision in deep neural networks (DNNs) is to reduce the model exposure to corrupted data points during training. By detecting and filtering (or re-weighting) the possible corrupted samples, the learning is expected to deliver a model that is similar to the one trained on clean data (without corruption) (Kumar et al., 2010; Han et al., 2018; Zheng et al., 2020). There are different criteria designed to identify the corrupted data points in training. For example, Kumar et al. (2010); Han et al. (2018); Jiang et al. (2018) leveraged the loss function values of data points; Zheng et al. (2020) tapped prediction uncertainty for filtering data; Malach & Shalev-Shwartz (2017) used the disagreement between two deep networks; Reed et al. (2014) utilized the prediction consistency of neighboring iterations. The success of these methods highly depends on the effectiveness of the detection criteria in correctly identifying the corrupted data points. Since the corrupted labels remain unknown throughout the learning, such “unsupervised” detection approaches may not be effective, either lack theoretical guarantees of robustness (Han et al., 2018; Reed et al., 2014; Malach & Shalev-Shwartz, 2017; Li et al., 2017) or provide guarantees under assumptions of the availability of prior knowledge about the type of corruption (Zheng et al., 2020; Shah et al., 2020; Patrini et al., 2017; Yi & Wu, 2019). Besides, another limitation of many existing approaches is that, they are exclusively designed for classification problems (e.g., Malach & Shalev-Shwartz (2017); Reed et al. (2014); Menon et al. (2019); Zheng et al. (2020)) and are not straightforward to extend to solve regression problems.\nTo tackle these challenges, this paper presents a unified optimization framework with robustness guarantees without any assumptions on how supervisions are corrupted, and is applicable to both classification and regression problems. Instead of developing an accurate criterion for detection corrupted samples, we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients. Specifically, if our estimated average gradient is close to the gradient from the clean data during the learning iterations,\nthen the final model will be close to the model trained on clean data. As such, a corrupted data point can still be used during the training when it does not considerably alter the averaged gradient. This observation has remarkably impact on our algorithm design: instead of explicitly quantifying (and identifying) individual corrupted data points, which is a hard problem in itself, we are now dealing with an easier task, i.e., eliminating training data points that significantly distort the mean gradient estimation. 
One immediate consequence of this design is that, even when a corrupted data point failed to be excluded by the proposed algorithm, the data point is likely to have very limited impact on the overall loss, as compared with state-of-the-art filtering data points based on loss values. We perform experiments on both regression and classification with corrupted supervision on multiple benchmark datasets. The results show that the proposed method outperforms state-of-the-art." }, { "heading": "2 BACKGROUND", "text": "Learning from corrupted data (Huber, 1992) has attracted considerable attention in the machine learning community (Natarajan et al., 2013). Many recent studies have investigated robustness of classification tasks with noisy labels. For example, Kumar et al. (2010) proposed a self-paced learning (SPL) approach, which assigns higher weights to examples with smaller loss. A similar idea was used in curriculum learning (Bengio et al., 2009), in which the model learns easy samples first before learning harder ones. Alternative methods inspired by SPL include learning the data weights (Jiang et al., 2018) and collaborative learning (Han et al., 2018; Yu et al., 2019). Label correction (Patrini et al., 2017; Li et al., 2017; Yi & Wu, 2019) is another approach, which revises original labels in data with a goal to recover clean labels from corrupt ones. However, since we do not have access to which data points are corrupted, it is hard to get provable guarantees for label correction without strong assumptions regarding the corruption type.\nAccurate estimation of gradients is a key step for successful optimization. The relationship between gradient estimation and its final convergence has been widely studied in the optimization community. Since computing an approximated (and potentially biased) gradient is often more efficient than computing the exact gradient, many studies used approximated gradients to optimize their models and showed that they suffer from the biased estimation problem if there is no assumptions on the gradient estimation (d’Aspremont, 2008; Schmidt et al., 2011; Bernstein et al., 2018; Hu et al., 2020; Ajalloeian & Stich, 2020).\nA closely related topic is robust estimation of the mean. Given corrupted data, robust mean estimation aims at generating an estimated mean µ̂ such that the difference between the estimated mean on corrupted data and the mean of clean data ‖µ̂− µ‖2 is minimized. It was showed that median or trimmed-mean are the optimal statistics for mean estimation in one-dimensional data (Huber, 1992). However, robustness in high dimension is quite challenging since applying the coordinate-wise optimal robust estimator would lead to an error factor O( √ d) that scales with the data dimension. Although some classical work, such as Tukey median (Tukey, 1975), successfully designed algorithms to get rid of the O( √ d) error, the algorithms themselves are not polynomial-time algorithm. More recently, Diakonikolas et al. (2016); Lai et al. (2016) successfully designed polynomial-time algorithms with dimension-free error bounds. The results have been widely applied to improve algorithmic efficiency in various scenarios (Dong et al., 2019; Cheng et al., 2020).\nRobust optimization aims to optimize the model given corrupted data. Many previous studies improve the robustness of the optimization in different problem settings. 
However, most of them either study linear regression and its variantes(Bhatia et al., 2015; 2017; Shen & Sanghavi, 2019) or study the convex optimization (Prasad et al., 2018). Thus, those results cannot be directly generalized to deep neural networks. Diakonikolas et al. (2019) is a very generalized non-convex optimization method with the agnostic corruption guarantee. However, the space complexity of the algorithm is high, thus cannot be applied to deep neural networks given current hardware limitations." }, { "heading": "3 METHODOLOGY", "text": "Before introducing our algorithm, we first discuss the corrupted supervision. To characterize agnostic corruptions, we make use of an adversary that tries to corrupt the supervision of a clean data. There is no limitation on how the adversary corrupts the supervision, which can either be randomly permuting the target, or in a way that maximizes the negative impact (i.e., lower performance).\nFirstly, the adversary can choose up to fraction of the clean target Dy ∈ Rn×q and change the selected row of Dy to arbitrary valid numbers, generating D y ∈ Rn×q . Then, the adversary returns the corrupted dataset Dx, D y to our learning algorithmA. In this process, the only constraint on the adversary is the fraction, and the adversary has full knowledge of the data, and even the learning algorithm A. A natural question to ask is: Given a data set with -fraction corrupted supervision Dx ∈ Rn×p, D y , and a learning objective φ : Rp × Rq × Rd → R parameterized by θ, can we output parameters θ ∈ Rd such that ‖∇θφ(θ;Dx,Dy)‖ is minimized. When = 0, we have D y = Dy and learning is done on the clean data. The stochastic gradient descent could converge to a stationary point, where ‖∇θφ(θ;Dx,Dy)‖ = 0. However, when the supervision is corrupted as above, this is not the case any more, due to the error in θ impacted by the corrupted data. We thus want an efficient algorithm to find a model θ that minimizes ‖∇θφ(θ;Dx,Dy)‖. A robust model θ should have a small value of ‖∇θφ(θ;Dx,Dy)‖, and we hypothesize that a smaller ‖∇θφ(θ;Dx,Dy)‖ has better generalization." }, { "heading": "3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT", "text": "A direct consequence of corrupted supervision is biased gradient estimation. In this section, we will first analyze how such biased gradient estimation affects the robustness of learning. The classical analysis of stochastic gradient descent (SGD) requires access to the stochastic gradient oracle, which is an unbiased estimation of the true gradient. However, corrupted supervision leads to corrupted gradients, and it is thus difficult to get unbiased gradient estimation without assumptions of how the gradients are corrupted. We start the analysis by the following informal theorem (without elaborated discussions of assumptions) of how biased gradient affects the final convergence of SGD. Its formal version is provided in Theorem 4, Appendix.\nTheorem 1 (Convergence of Biased SGD (Informal)) Under mild assumptions, denote ζ to be the maximum `2 norm of the difference between clean minibatch gradient and corrupted minibatch gradient ‖g− g̃‖ ≤ ζ, then by using biased gradient estimation, SGD converges to the ζ-approximated stationary points: E‖∇φ(θt)‖2 = O(ζ2). Remark 1 In the corrupted supervision setting, let the gradient estimated by corrupted data D be ĝ, the gradient estimated by clean data D be g. 
Assume ‖g̃ − g‖ ≤ ζ, it follows that when using corrupted dataset in SGD, it converges to the ζ-approximated stationary point of the objective defined by the clean data. Note the difference between above theorem and typical convergence theorem is that we are using a biased gradient estimation.\nAccording to Theorem 1 and the remark, a robust estimation of the gradient g is the key to ensure a robust model (converge to the clean solution). We also assume the loss function has the form of L(y, ŷ), where many commonly used loss functions fall in this category." }, { "heading": "3.2 ROBUST GRADIENT ESTIMATION FOR GENERAL DATA CORRUPTION", "text": "We first introduce Algo. 2 for general corruption (i.e. corruption on both features and/or supervisions). The algorithm excludes the data points with large gradient norms, and uses the empirical mean of the remaining to update gradients. In Thm. 2 we give its robustness property.\nAlgorithm 1: Robust Mean Estimation for Corrupted Gradient input: gradient matrix G ∈ m× d, corruption rate return estimated mean µ̂ ∈ Rd ; 1. For each row zi in G, calculate the l2 norm ‖zi‖ 2. Choose the -fraction rows with large ‖zi‖ 3. Remove those selected rows, and return the empirical mean of the rest points as µ̂.\nAssumption 1 (Individual L-smooth loss) For every individual loss function φi, there exists constant L > 0, such that for a clean sample i, we have |φi(x) − φi(y)| ≤ L|x − y| for any x,y.\nTheorem 2 (Robust Gradient Estimation For Data Corruption) Let G̃ ∈ Rm×d be a corrupted gradient matrix, and G ∈ Rm×d be the clean gradient matrix. Let µ be the empirical mean function,\nAlgorithm 2: (PRL(G)) Provable Robust Learning for General Corrupted Data input: Label corrupted dataset Dx,D y , learning rate γt; return model parameter θ; for t = 1 to maxiter do\nRandomly sample a minibatch M from Dx,D y Calculate the individual gradient G̃ for M Apply Algorithm 1 on G̃ to get robust gradient estimation µ̂ Update model θt+1 = θt − γtµ̂\nend we have that the output of Algo. 1 µ̂ of G̃ satisfies ‖µ(G) − µ̂‖ = O( √ d). Moreover, if Asm. 1 holds, we further have ‖µ(G)− µ̂‖ = O( L).\nCombining with the aforementioned convergence analysis of biased SGD, we get the following:\nCorollary 1 (Robust Optimization For Corrupted Data) Given assumptions used in Thm. 1, and Asm. 1, applying Algo. 1 to any -fraction corrupted data, we get mint∈[T ] E‖∇φ(xt)‖ = O( L) with large enough T . If Asm. 1 does not hold, then we get mint∈[T ] E‖∇φ(xt)‖ = O( √ d) with large enough T .\nThe robustness guarantee states that even training on generally corrupted data (corrupted supervision is a special case), Algo. 2 guarantee that the gradient norm on remaining data cannot be too large. Since Thm. 2 gives a dimension-free error bound when Asm. 1 holds, Corollary 1 also gives the dimension-free robustness guarantee with Asm. 1. We defer the detailed discussion ofO( L) to later sections. Although the error bound O( L) sounds good, we note that it still has several drawbacks: First, the dimension-free error bound means the error does not grow with increasing dimensions, and is critical when working with neural networks, due to the extremely large gradient dimension (i.e., #parameters of neural network). Thm. 2 gives the dimension-free error bound only when Asm. 1 holds, which is quite strong. In addition, even when Asm. 1 holds, L can be large, leading to a large gradient estimation error. 
Existing work (Diakonikolas et al., 2019) already achieves a dimension-free O(√ε) guarantee under general corruptions, which is a stronger theoretical result than the theorem above. However, in practice, we found that the gradient norms of deep neural networks for individual data points are usually not very large, even at the beginning of training. This can be partially attributed to the network structure. Further discussion of this issue is beyond the scope of this paper, but the theoretical bound above states that, for general models, the robustness should depend on the number of parameters.
Another concern with Alg. 2 is efficiency: it requires computing individual gradients. Although there are advanced approaches for obtaining individual gradients, e.g., Goodfellow (2015), they remain relatively slow compared to standard back-propagation. Moreover, these methods are usually not compatible with popular components such as batch normalization (BN), since the individual gradients are not independent inside BN; using them also forfeits the benefits of parallelization." }, { "heading": "3.3 ROBUST GRADIENT ESTIMATION FOR ONE DIMENSIONAL CORRUPTED SUPERVISION", "text": "In this section, we show that the above robustness bound can be improved if we assume the corruption only affects the supervision. Moreover, by fully exploiting the gradient structure induced by corrupted supervision, our algorithm becomes much more efficient and compatible with batch normalization. We use the one-dimensional supervision setting (binary classification or single-target regression) to illustrate the intuition, and extend it to more general settings in the next section. Consider a high-dimensional supervised learning problem with X ∈ R^{n×p} and y ∈ R^n. The goal is to learn a function f parameterized by θ ∈ R^d minimizing the loss min_θ Σ_{i=1}^n φ_i = min_θ Σ_{i=1}^n L(y_i, f(x_i, θ)). The gradient for a data point i is ∇_θ φ_i = (∂l_i/∂f_i)(∂f_i/∂θ) = α_i g_i.
One key observation is that when only the supervision is corrupted, the corruption contributes only to the term α_i = ∂l_i/∂f_i, which is a scalar in the one-dimensional setting. In other words, given the clean gradient g_i ∈ R^d of the i-th point, corrupted supervision can only perturb the length of the gradient vector, changing the gradient from α_i g_i to δ_i g_i, where δ_i = ∂l̃_i/∂f_i is the loss-layer gradient under the corrupted target. If α_i and δ_i were known, we could easily eliminate the impact of corrupted supervision. But this is not the case, since we only have the possibly corrupted target ŷ_i as opposed to the ground truth y_i.
On the other hand, the fact that corrupted supervision merely rescales the clean gradient can be used to reshape the robust optimization problem. Recall that in every iteration we update the model by θ⁺ = θ − γ µ(G), where µ denotes the empirical mean function and G = [∇_θ φ_1, . . . , ∇_θ φ_m]^T ∈ R^{m×d} is the gradient matrix with mini-batch size m. We then have the following:
Problem 1 (Robust Gradient Estimation for Corrupted Supervision - One Dimensional Case) Given a clean gradient matrix G ∈ R^{m×d} and an ε-corrupted matrix G̃ in which at most an ε-fraction of rows are corrupted from α_i g_i to δ_i g_i, design an algorithm A : R^{m×d} → R^d that minimizes ‖µ(G) − A(G̃)‖.
Note that when ‖δ_i‖ is large, the corrupted gradient has a large effect on the empirical mean, and vice versa. This motivates us to develop an algorithm that filters out data points by the loss-layer gradient norm ‖∂l_i/∂f_i‖.
If the norm of the loss layer gradient of a data point is large (in one-dimensional case, this gradient reduces to a scalar and the norm becomes its absolute value), we exclude the data point when computing the empirical mean of gradients for this iteration. Note that this algorithm is applicable to both regression and classification problems. Especially, when using the mean squared error (MSE) loss for regression, its gradient norm is exactly the loss itself, and the algorithm reduces to self-paced learning Kumar et al. (2010). We summarize the procedure in Alg. 3 and extend it to the more general multi-dimension case in the next section.\nAlgorithm 3: (PRL(L)) Efficient Provable Robust Learning for Corrupted Supervision input: dataset Dx,D y with corrupted supervision, learning rate γt; return model parameter θ; for t = 1 to maxiter do\nRandomly sample a minibatch M from Dx,D y Compute the predicted label Ŷ from M Calculate the gradient norm for the loss layer, (i.e. ‖ŷ − y‖ for mean square error or cross entropy)\nfor each data point in M Remove the top τ -fraction data from M according to ‖ŷ − y‖ Return the empirical mean of the remaining M as the robust mean estimation µ̂ Update model θt+1 = θt − γtµ̂\nend" }, { "heading": "3.4 EXTENSION TO MULTI-DIMENSIONAL CORRUPTED SUPERVISION", "text": "To extend our algorithm and analysis to multi-dimensional case, let q to be the supervision dimension, the gradient for each data point is ∇θφi = ∂li∂fi ∂fi ∂θ , where ∂li ∂fi ∈ Rq is the gradient of loss respect to model outputs, and ∂fi∂θ ∈ R q×d is the gradient of model outputs respect to model parameters. Similarly, when the supervision is corrupted, the corruption comes from the term ∂li∂fi , which is a vector. Let δi = ∂l i ∂fi ∈ Rq , αi = ∂li∂fi ∈ R q , Wi = ∂fi∂θ ∈ R q×d, m be the minibatch size. Denote the clean gradient matrix G ∈ Rm×d, where the ith row of gradient matrix gi = αiWi. Now the multi-dimensional robust gradient estimation problem is defined by:\nProblem 2 (Robust Gradient Estimation for Corrupted Supervision - Multi-Dimensional Case) Given a clean gradient matrix G, an -corrupted matrix G̃ with at most -fraction rows are corrupted from αiWi to δiWi, design an algorithmA : Rm×d → Rd that minimizes ‖µ(G)−A(G̃)‖.\nWe start our analysis by investigating the effects of the filtering-base algorithm, i.e. use the empirical mean gradient of (1 − )-fraction subset to estimate the empirical mean gradient of clean gradient matrix. We have the following for a randomized filtering-based algorithm(proof in Appendix):\nLemma 1 (Gradient Estimation Error for Random Dropping -fraction Data) Let G̃ ∈ Rm×d be a corrupted matrix generated as in Problem 2, and G ∈ Rm×d be the original clean gradient matrix. Suppose arbitrary (1− )-fraction rows are selected from G̃ to form the matrix N ∈ Rn×d. Let µ be the empirical mean function. Assume the clean gradient before loss layer has bounded\noperator norm, i.e., ‖W‖op ≤ C, the maximum clean gradient in loss layer maxi∈G ‖αi‖ = k, the maximum corrupted gradient in loss layer maxi∈N ‖δi‖ = v, then we have:\n‖µ(G)− µ(N)‖ ≤ Ck 3 − 4 2\n1− + Cv 1− .\nWe see that v is the only term that is related to the corrupted supervision. If v is large, then the bound is not safe since the right-hand side can be arbitrarily large (i.e. an adversary can change the label in a way such that v is extremely large). Thus controlling the magnitude of v provides a way to effectively reduce the bound. For example, if we manage to control v ≤ k, then the bound is safe. 
This can be achieved by sorting the gradient norms at the loss layer, and then discarding the largest -fraction data points. We thus have the following result.\nTheorem 3 (Robust Gradient Estimation For Supervision Corruption) Let G̃ be a corrupted matrix generated in Problem 2, q be the label dimension, µ be the empirical mean of clean matrix G. Assume the maximum clean gradient before loss layer has bounded operator norm: ‖W‖op ≤ C, then the output of gradient estimation in Algo 3 µ̂ satisfies ‖µ− µ̂‖ = O( √q) ≈ O( ).\nCompare Thm. 2 and Thm. 3, we see that when the corruption only comes from supervision, the dependence on d is reduced to q, where in most deep learning cases we have d n. Applying Thm 1 directly shows that our algorithm is also robust in multi-label settings." }, { "heading": "3.5 COMPARISON WITH DIAKONIKOLAS ET AL. (2019) AND OTHER METHODS", "text": "SEVER (Diakonikolas et al., 2019) showed promising state-of-the-art theoretical results in general corruptions, which achievesO( √ ) dimension-free guarantee for general corruptions. Compared to Diakonikolas et al. (2019), we have two contributions: a). By assuming the corruption comes from the label (we admit that this is quite strong compared to the general corruption setting), we could get a better error rate. b). Our algorithm can be scaled to deep neural networks while Diakonikolas et al. (2019) cannot. We think this is a contribution considering the DNN based models are currently state-of-the-art methods for noisy label learning problems (at least in empirical performance).\nAlthough Diakonikolas et al. (2019) achieves very nice theoretical results, unfortunately, it cannot be applied to DNN with the current best hardware configuration. Diakonikolas et al. (2019) uses dimension-free robust mean estimation breakthroughs to design the learning algorithm, while we notice that most robust mean estimation relies on filtering out data by computing the score of projection to the maximum singular vector. For example, in Diakonikolas et al. (2019), it requires performing SVD on n×d individual gradient matrix, where n is the sample size and d is the number of parameters. This method works well for small datasets and small models since both n and d is small enough for current memory limitation. However, for deep neural networks, this matrix size is far beyond current GPU memory capability. That could be the potential reason why in Diakonikolas et al. (2019), only ridge regression and SVM results for small data are shown (we are not saying that they should provide DNN results). In our experiment, our n is 60000 and d is in the magnitude of millions (network parameters). It is impractical to store 60000 copies of neural networks in a single GPU card. In contrast, in our algorithm, we do not need to store the full gradient matrix. By only considering the loss-layer gradient norm, we can easily extend our algorithm to DNN, and we showed that this simple strategy works well in both theory and challenging empirical tasks.\nWe notice that there are some linear (Bhatia et al., 2015; 2017) or convex method (Prasad et al., 2018) achieves the better robustness guarantee. However, most of them cannot be directly applied to deep neural networks." }, { "heading": "4 RELATIONSHIP TO SELF-PACED LEARNING (SPL)", "text": "SPL looks very similar to our method at first glance. Instead of keeping data point with small gradient norm, SPL tries to keep data with small loss. 
The gradient norm and loss function can be tied by the famous Polyak-Łojasiewicz (PL) condition. The PL condition assumes there exists some constant s > 0 such that 12‖∇φ(x)‖\n2 ≥ s (φ(x)− φ∗) , ∀x holds. As we can see, when the neural network is highly over-parameterized, the φ∗ can be assumed to be equal across different\nsamples since neural networks can achieve 0 training loss (Zhang et al., 2016). By sorting the error φ(xi) for every data point, SPL actually is sorting the lower bound of the gradient norm if the PL condition holds. However, the ranking of gradient norm and the ranking loss can be very different since there is no guarantee that the gradient norm is monotonically increasing with the loss value. We provide illustration of why SPL is not robust from geometric perspective in the appendix. Here we show even for simple square loss, the monotonic relationship is easy to break. One easy counter-example is φ(x1, x2) = 0.5x21 + 50x 2 2. Take two points (1000, 1) and (495, - 49.5), we will find the monotonic relationship does not hold for these two points. Nocedal et al. (2002) showed that the monotonic relationship holds for square loss (i.e.φ(x) = 12 (x−x\n∗)TQ(x− x∗) ) if the condition number of Q is smaller than 3 + 2 √ 2, which is a quite strong assumption especially when x is in high-dimension. If we consider the more general type of loss function (i.e. neural network), the assumptions on condition number should only be stronger, thus breaking the monotonic relationship. Thus, although SPL sorts the lower bound of the gradient norm under mild assumptions, our algorithm is significantly different from the proposed SPL and its variations.\nNow, we discuss the relationship between SPL and algorithm 3 under supervision corruptions. SPL has the same form as algorithm 3 when we are using mean square error to perform regression tasks since the loss layer gradient norm is equal to loss itself. However, in classification, algorithm 3 is different from the SPL. In order to better understand the algorithm, we further analyze the difference between SPL and our algorithm for cross-entropy loss.\nFor cross entropy, denote the output logit as o, we have H(yi, fi) = −〈yi, log(softmax(oi))〉 = −〈yi, log(fi)〉. The gradient norm of cross entropy w.r.t. oi is: ∂Hi ∂oi\n= yi− softmax(oi) = fi−yi. Thus, the gradient of loss layer is the MSE between yi and fi. Next, we investigate when MSE and Cross Entropy gives non-monotonic relationship. For the sake of simplification, we only study the sufficient condition of the non-monotonic relationship, which is showed in lemma 2.\nLemma 2 Let y ∈ Rq , where yk = 1,yi = 0 for i 6= k, and α, β are two q dimensional vector in probability simplex. Without loss of generality, suppose α has smaller cross entropy loss αk ≥ βk, then the sufficient condition for ‖α − y‖ ≥ ‖β − y‖ is Vari 6=k({αi}) − Vari 6=k({βi}) ≥\nq (q−1)2 ((αk − βk)(2− αk − βk))\nAs αk ≥ βk, the right term is non-negative. In conclusion, when MSE generates a different result from cross-entropy, the variance of the probability of the non-true class of the discarded data point is larger. Suppose we have a ground-truth vector y = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], and we have two predictions α = [0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08] and β = [0.1, 0.3, 0.34, 0.05, 0.05, 0.1, 0.03, 0.03, 0, 0]. The prediction α have a smaller mse loss while prediction β have a smaller cross-entropy loss. 
It is intuitive that β is more likely to be noisy data, since its prediction has two peaks (i.e., 0.3 and 0.34). However, since the cross-entropy loss only considers one dimension, it cannot detect such a situation. Compared to cross-entropy, the gradient criterion (the MSE between y_i and f_i) considers all dimensions, and thus takes the overall prediction distribution into account." }, { "heading": "5 COMBINING WITH CO-TEACHING STYLE TRAINING", "text": "Motivated by co-teaching (Han et al., 2018), one of the current state-of-the-art deep methods for learning under noisy labels, we propose Co-PRL(L), which has the same framework as co-teaching but uses the loss-layer gradient to select the data. The full algorithm is shown in Algorithm 4 in the appendix. The meanings of all hyper-parameters in Algorithm 4 are the same as in the original work of Han et al. (2018). Compared with Algorithm 3, besides sampling data according to the loss-layer gradient norm, Co-PRL(L) has two additional modules: first, we gradually increase the amount of data to be dropped; second, the two networks exchange their selected data to update each other's parameters." }, { "heading": "6 EXPERIMENT", "text": "In this section, we perform experiments on benchmark regression and classification datasets. The code is available in the supplementary materials of the submission. We compare PRL(G) (Algo. 2), PRL(L) (Algo. 3), and Co-PRL(L) (Algo. 4) to the following baselines. Standard: standard training without filtering data (MSE for regression, cross-entropy for classification); Normclip: standard training with norm clipping; Huber: standard training with the Huber loss (regression only); Decouple: decoupling network, which updates two networks based on their disagreement (Malach & Shalev-Shwartz, 2017) (classification only); Bootstrap: uses a weighted combination of predicted and original labels as the training targets, and then performs back-propagation (Reed et al., 2014) (classification only); Min-sgd: choosing the smallest-loss sample in the minibatch to update the model (Shah et al., 2020); SPL: self-paced learning, dropping the data with the largest losses (same as PRL(L) in the regression setting with MSE loss); Ignormclip: clipping individual gradients and then averaging them to update the model (regression only); Co-teaching: collaboratively training a pair of SPL models that exchange their selected data (Han et al., 2018) (classification only). It is hard to design experiments for agnostic corrupted supervision, so we tried our best to include different types of supervision noise. The supervision corruption settings are as follows: linadv: the corrupted supervision is generated by a random, incorrect linear relationship with the features (regression); signflip: the sign of the supervision is flipped (regression); uninoise: random samples from a uniform distribution are used as the corrupted supervision (regression); mixture: a mixture of the above corruption types (regression); pairflip: the coordinates (or classes) are shuffled, e.g., eyes to mouth in CelebA, or cat to dog in CIFAR (regression and classification); symmetric: a wrong class label is assigned at random (classification). For classification, we use classification accuracy as the evaluation metric, while R-squared is used to evaluate the regression experiments. Due to space limits, we only show the average evaluation score on the test data over the last 10 epochs. The full training curves are given in the appendix. All experiments are repeated 5 times for regression and 3 times for classification. The main hyperparameters are given in the appendix."
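To make the corruption settings above concrete, here is a small illustrative sketch (our reconstruction, not the paper's released code; the function and argument names are our own) of how the symmetric and pairflip label corruptions could be injected into a classification training set.

```python
import numpy as np

def corrupt_labels(y, eps, mode="symmetric", num_classes=10, seed=0):
    """Corrupt an eps-fraction of the integer labels y.

    symmetric: replace the label with a uniformly drawn *wrong* class.
    pairflip:  flip each selected label to a fixed neighboring class,
               one common realization of pairflip noise.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    idx = rng.choice(len(y), size=int(eps * len(y)), replace=False)
    if mode == "symmetric":
        # adding a shift in {1, ..., num_classes-1} modulo num_classes
        # yields a uniformly random class different from the original
        shift = rng.integers(1, num_classes, size=len(idx))
        y[idx] = (y[idx] + shift) % num_classes
    elif mode == "pairflip":
        y[idx] = (y[idx] + 1) % num_classes
    else:
        raise ValueError(f"unknown corruption mode: {mode}")
    return y
```

For instance, `corrupt_labels(y_train, eps=0.4, mode="symmetric", num_classes=100)` would correspond to the 40% symmetric-noise setting on CIFAR-100, under the assumption that symmetric noise is implemented as a uniform draw over wrong classes.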
}, { "heading": "6.1 REGRESSION EXPERIMENT", "text": "We use CelebA data to perform regression tasks. CelebA dataset has 162770 training images, 19867 validation images, 19962 test images. The target variable is ten-dimensional coordinates of the left eye, right eye, nose, left mouth, and right mouth. Given the human face image, the goal is to predict 10 face landmark coordinates in the image. We tried adding different types of noise on the landmark coordinates. We preprocess the CelebA data as following: we use three-layer CNN to train 162770 training images to predict clean coordinates (we use 19867 validation images to do the early stopping). Then, we use well-trained network to extract the 512-dimensional feature on testing sets. Thus, our final data to perform experiment has feature sets X ∈ R19962×512, and the target variable Y ∈ R19962×10. We further split the data to the training and testing set, where training sets contain 80% of the data. Then, we manually add linadv, signflip, uninoise, pairflip, mixture types of supervision noise on the target variable on training data. The corruption rate for all types of corruption is varied from 0.1 to 0.4. We use 3-layer fully connect networks in experiments. The results of averaged last 10 epoch r-square are in table 1." }, { "heading": "6.2 CLASSIFICATION EXPERIMENT", "text": "We perform experiments on CIFAR10, and CIFAR100 to illustrate the effectiveness of our algorithm in classification setting. We use the 9-layer Convolutional Neural Network, which is the same as Han et al. (2018). Since most baselines include batch normalization, it is difficult to get individual gradient efficiently, we will drop the ignormclip and PRL baselines. In the appendix, we attached the results if both co-teaching and Co-PRL(L) drops batch normalization module. We will see that coteaching cannot maintain robustness while our method still has robustness. The reason is discussed in the appendix. We consider pairflip and symmetric supervision corruptions in experiments. Also, to compare with the current state of the art method, for symmetric noise, we use corruption rate which beyond 0.5. Although our theoretical analysis assumes the noise rate is small than 0.5, when the noise type is not adversary (i.e. symmetric), we empirically show that our method can also deal with such type of noise. Results on CIFAR10, CIFAR100 are in Table 2. As we can see, no matter using one network (PRL vs SPL) or two networks (Co-PRL(L) vs Co-teaching), our method performs significantly better. Since in real-world problems, it is hard to know that the ground-truth corruption rate, we also perform the sensitivity analysis in classification tasks to show the effect of overestimating and underestimating . The results are in Table 3. More discussion about sensitivity analysis can be found in appendix.\n7 CONCLUSION\nIn this paper, we proposed efficient algorithm to defense against agnostic supervision corruptions. Both theoratical and empirical analysis showed the effectiveness of our algorithm. There are two remaining questions in this paper which deserves study in future. The first one is whether we can further improve O( ) error bound or show that O( ) is tight. The second one is to utilize more properties of neural networks, such as the sparse gradient, to see whether it is possible to get better algorithms." 
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 CO-IGFILTER ALGORITHM", "text": "See algorithm 4.\nAlgorithm 4: Co-PRL(L) input: initialize wf and wg , learning rate η, fixed τ , epoch Tk and Tmax, iterations Nmax return model parameter wf and wg; for T = 1, 2, ..., Tmax do\nfor N = 1, ..., Nmax do random sample a minibatch M from Dx,D y (noisy dataset) get the predicted label Ŷf and Ŷg from M by wf . wg calculate the individual loss lf = L(Y, Ŷf ), lg = L(Y, Ŷg) calculate the gradient norm of loss layer scoref = ‖\n∂lf ∂ŷf ‖, scoreg = ‖ ∂lg ∂ŷg ‖.\nsample R(T )% small-loss-layer-gradient-norm instances by scoref and scoreg to get Nf , Ng update wf = wf − η∇wfL(Nf , wf ), wg = wg − η∇wgL(Ng, wg) (selected dataset) update model xt+1 = xt − γtµ̂\nend Update R(T ) = 1−min { T\nTk τ, τ } end" }, { "heading": "A.2 FURTHER ILLUSTRATION OF THE DIFFERENCE BETWEEN SPL AND PRL(G)", "text": "In this section, we will further illustrate the difference between SPL and PRL(G). In order to have a more intuitive understanding of our algorithm, we could look at the Figure 1a and 1b. Since we are in the agnostic label corruption setting, it is difficult to filtering out the correct corrupted data. We showed two situations when loss filtering failed and gradient filtering failed. As we could see that when loss filtering method failed, the remaining corrupted data could have large impact on the overall loss surface while when gradient filtering method failed, the remaining corrupted data only have limited impact on the overall loss surface, thus gaining robustness." }, { "heading": "A.3 NETWORKS AND HYPERPARAMETERS", "text": "The hyperparameters are in Table 4. For Classification, we use the same hyperparameters in Han et al. (2018). For CelebA, we use 3-layer fully connected network with 256 hidden nodes in hidden layer and leakly-relu as activation function. We also attached our code in supplementary materials." }, { "heading": "A.4 REGRESSION R2 ON TESTING DATA CURVE", "text": "The curve for CelebA data is showed in Figure 2." }, { "heading": "A.5 CLASSIFICATION CURVE", "text": "The classification curve is in Figure 3" }, { "heading": "A.6 SENSITIVITY ANALYSIS", "text": "Since in real-world problems, it is hard to know that the ground-truth corruption rate, we perform the sensitivity analysis in classification tasks to show the effect of . The results are in Table 5. As we could see, the performance is stable if we overestimate the corruption rate, this is because only\nwhen we overestimate the , we could guarantee that the gradient norm of the remaining set is small. However, when we underestimate the corruption rate, in the worst case, there is no guarantee that the gradient norm of the remaining set is small. By using the empirical mean, even one large bad individual gradient would ruin the gradient estimation, and according to the convergence analysis of biased gradient descent, the final solution could be very bad in terms of clean data. That explains why to underestimate the corruption rate gives bad results. Also, from Table 5, we could see that using the ground truth corruption rate will lead to small uncertainty." }, { "heading": "A.7 EMPIRICAL RESULTS ON RUNNING TIME", "text": "As we claimed in paper, the algorithm 2 is not efficient. In here we attached the execution time for one epoch for three different methods: Standard, PRL(G), PRL(L). 
For fair comparison, we replace all batch normalization module to group normalization for this comparison, since it is hard\nto calculate individual gradient when using batch normalization. For PRL(G), we use opacus libarary (https://opacus.ai/) to calculate the individual gradient.\nThe results are showed in Table 6" }, { "heading": "A.8 PROOF OF CONVERGENCE OF BIASED SGD", "text": "We gave the proof of the theorem of how biased gradient affect the final convergence of SGD. We introduce several assumptions and definition first:\nAssumption 2 (L-smoothness) The function φ: Rd → R is differentiable and there exists a constant L > 0 such that for all θ1, θ2 ∈ Rd, we have φ(θ2) ≤ φ(θ1)+〈∇φ(θ1), θ2−θ1〉+ L2 ‖θ2−θ1‖ 2\nDefinition 1 (Biased gradient oracle) A map g : Rd × D → Rd, such that g(θ, ξ) = ∇φ(θ) + b(θ, ξ) + n(θ, ξ) for a bias b : Rd → Rd and zero-mean noise n : Rd × D → Rd, that is Eξn(θ, ξ) = 0.\nCompared to standard stochastic gradient oracle, the above definition introduces the bias term b. In noisy-label settings, the b is generated by the data with corrupted labels.\nAssumption 3 (σ-Bounded noise) There exists constants σ > 0, such that Eξ‖n(θ, ξ)‖2 ≤ σ, ∀θ ∈ Rd\nAssumption 4 (ζ-Bounded bias) There exists constants ζ > 0, such that for any ξ, we have ‖b(θ, ξ)‖2 ≤ ζ2, ∀θ ∈ Rd\nFor simplicity, assume the learning rate is constant γ, then in every iteration, the biased SGD performs update θt+1 ← θt − γtg(θt, ξ). Then the following theorem showed the gradient norm convergence with biased SGD.\nTheorem 4 (Convergence of Biased SGD(formal)) Under assumptions 2, 3, 4, define F =\nφ(θ0)− φ∗and step size γ = min\n{ 1\nL , (\n√ LF\nσT )\n} , denote the desired accuracy as k, then\nT = O ( 1\nk + σ2 k2 ) iterations are sufficient to obtain mint∈[T ] E‖∇φ(θt)‖2 = O(k + ζ2).\nRemark 2 Let k = ζ2, T = O (\n1 ζ2 +\nσ2 ζ4 ) iterations is sufficient to get mint∈[T ] E‖∇φ(θt)‖2 =\nO(ζ2), and performing more iterations does not improve the accuracy in terms of convergence.\nSince this is a standard results, similar results are showed in Bernstein et al. (2018); Devolder et al. (2014); Hu et al. (2020); Ajalloeian & Stich (2020). we provide the proof here. Proof: by L-smooth, we have:\nφ(θ2) ≤ φ(θ1) + 〈∇φ(θ1), θ2 − θ1〉+ L\n2 ‖θ2 − θ1‖2\nby using γ ≤ 1 L , we have Eφ (θ1t+1) ≤ φ (θ1t)− γ 〈∇φ (θ1t) ,Egt〉+ γ2L\n2\n( E ‖gt − Egt‖2 + E ‖Egt‖2 ) = φ (θ1t)− γ 〈∇φ (θ1t) ,∇φ (θ1t) + bt〉+ γ2L\n2\n( E ‖nt‖2 + E ‖∇φ (θ1t) + bt‖ 2 )\n≤ φ (θ1t) + γ\n2\n( −2 〈∇φ (θ1t) ,∇φ (θ1t) + bt〉+ ‖∇φ (θ1t) + bt‖ 2 ) + γ2L\n2 E ‖nt‖2\n= φ (θ1t) + γ\n2\n( −‖∇φ (θ1t)‖ 2 + ‖bt‖2 ) + γ2L\n2 E ‖nt‖2\nSince we have ‖bt‖2 ≤ ζ2, ‖nt‖2 ≤ σ2, by plug in the learning rate constraint, we have\nEφ (θ1t+1) ≤ φ (θ1t)− γ\n2 ‖∇φ (θ1t)‖\n2 + γ\n2 ζ2 +\nγ2L\n2 σ2\nEφ (θ1t+1)− φ (θ1t) ≤ − γ\n2 ‖∇φ (θ1t)‖\n2 + γ\n2 ζ2 +\nγ2L\n2 σ2\nThen, removing the gradient norm to left hand side, and sum it across different iterations, we could get\n1\n2T T−1∑ t=0 E‖φ (θ1t) ‖ ≤ F Tγ + ζ2 2 + γLσ2 2\nTake the minimum respect to t and substitute the learning rate condition will directly get the results." }, { "heading": "A.9 PROOF OF THEOREM 2", "text": "Denote G̃ to be the set of corrupted minibatch, G to be the set of original clean minibatch and we have |G| = |G̃| = m. Let N to be the set of remaining data and according to our algorithm, the remaining data has the size |N| = n = (1 − )m. Define A to be the set of individual clean gradient, which is not discarded by algorithm 1. B to be the set of individual corrupted gradient, which is not discarded. 
According to our definition, we have N = A ∪ B. AD to be the set of individual good gradient, which is discarded, AR to be the set of individual good gradient, which is replaced by corrupted data. We have G = A∪AD∪AR. BD is the set of individual corrupted gradient, which is discarded by our algorithm. Denote the good gradient to be gi = αiWi, and the bad gradient to be g̃i, according to our assumption, we have ‖g̃i‖ ≤ L. Now, we have the l2 norm error:\n‖µ(G)− µ(N)‖ = ‖ 1 m m∑ i∈G gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n m∑ i=1 n m gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n ∑ i∈A n m gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n ∑ i∈A ( n−m m )gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi − 1 n ∑ i∈B g̃i‖\n≤ ‖ 1 n ∑ i∈A ( n−m m )gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi‖+ ‖ 1 n ∑ i∈B g̃i‖\n≤ ‖ ∑ A m− n nm gi + ∑ AD 1 m gi + ∑ AR 1 m gi‖+ ∑ B 1 n ‖g̃i‖\n≤ ∑ A ‖m− n nm gi‖+ ∑ AD ‖ 1 m gi‖+ ∑ AR ‖ 1 m gi‖+ ∑ B 1 n ‖g̃i‖\nBy using the filtering algorithm, we could guarantee that ‖g̃i‖ ≤ L. Let |A| = x, we have |B| = n− x = (1− )m− x, |AR| = m− n = m, |AD| = m− |A| − |AR| = m− x− (m− n) = n− x = (1− )m− x. Thus, we have:\n‖µ(G)− µ(N)‖ ≤ xm− n nm L+ (n− x) 1 m L+ (m− n) 1 m L+ (n− x) 1 n L\n≤ x(m− n nm − 1 m )L+ n 1 m L+ (m− n) 1 m L+ (n− x) 1 n L = 1\nm ( 2 − 1 1− )xL+ L+ L− 1 n xL\n= xL( 2 − 2 n ) + 2L\nTo minimize the upper bound, we need x to be as small as possible since 2 − 2 < 1. According to our problem setting, we have x = n−m ≤ (1− 2 )m, substitute back we have:\n‖µ(G)− µ(N)‖ ≤ (1− 2 )Lm(2 − 2 n ) + 2L\n= 1− 2 1− 2L+ 2L\n= 4L− 1− 2L\nSince < 0.5, we use tylor expansion on 1− , by ignoring the high-order terms, we have\n‖µ(G)− µ(N)‖ = O( L)\nNote, if the Lipschitz continuous assumption does not hold, then L should be dimension dependent." }, { "heading": "A.10 PROOF OF RANDOMIZED FILTERING ALGORITHM", "text": "Lemma 3 (Gradient Estimation Error for Randomized Filtering) Given a corrupted matrix G̃ ∈ Rm×d generated in problem 2. Let G ∈ Rm×d be the original clean gradient matrix. Suppose we are arbitrary select n = (1− )m rows from G̃ to get remaining set N ∈ Rn×d. Let µ to be the empirical mean function, assume the clean gradient before loss layer has bounded operator norm: ‖W‖op ≤ C, the maximum clean gradient in loss layer maxi ‖αi‖ = k, the maximum corrupted gradient in loss layer maxi ‖δi‖ = v, assume < 0.5, then we have:\n‖µ(G)− µ(N)‖ ≤ Ck 3 − 4 2\n1− + Cv 1−" }, { "heading": "A.10.1 PROOF OF LEMMA 3", "text": "Denote G̃ to be the set of corrupted minibatch, G to be the set of original clean minibatch and we have |G| = |G̃| = m. Let N to be the set of remaining data and according to our algorithm, the remaining data has the size |N| = n = (1 − )m. Define A to be the set of individual clean gradient, which is not discarded by algorithm 3. B to be the set of individual corrupted gradient, which is not discarded. According to our definition, we have N = A ∪ B. AD to be the set of individual good gradient, which is discarded, AR to be the set of individual good gradient, which is replaced by corrupted data. We have G = A∪AD∪AR. BD is the set of individual corrupted gradient, which is discarded by our algorithm. 
Denote the good gradient to be gi = αiWi, and the bad gradient to be g̃i = δiWi, according to our assumption, we have ‖Wi‖op ≤ C.\nNow, we have the l2 norm error:\n‖µ(G)− µ(N)‖ = ‖ 1 m m∑ i∈G gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n m∑ i=1 n m gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n ∑ i∈A n m gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi −\n( 1\nn ∑ i∈A gi + 1 n ∑ i∈B g̃i\n) ‖\n= ‖ 1 n ∑ i∈A ( n−m m )gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi − 1 n ∑ i∈B g̃i‖\n≤ ‖ 1 n ∑ i∈A ( n−m m )gi + 1 n ∑ i∈AD n m gi + 1 n ∑ i∈AR n m gi‖+ ‖ 1 n ∑ i∈B g̃i‖ (1)\nLet |A| = x, we have |B| = n−x = (1− )m−x, |AR| = m−n = m, |AD| = m−|A|−|AR| = m− x− (m− n) = n− x = (1− )m− x. Thus, we have:\n‖µ(G)− µ(N)‖ ≤ ‖ ∑ A m− n nm gi + ∑ AD 1 m gi + ∑ AR 1 m gi‖+ ∑ B 1 n ‖g̃i‖\n≤ ∑ A ‖m− n nm gi‖+ ∑ AD ‖ 1 m gi‖+ ∑ AR ‖ 1 m gi‖+ ∑ B 1 n ‖g̃i‖\nFor individual gradient, according to the label corruption gradient definition in problem 2, assuming the ‖W‖op ≤ C, we have ‖gi‖ ≤ ‖αi‖‖Wi‖op ≤ C‖αi‖. Also, denote maxi ‖αi‖ = k, maxi ‖δi‖ = v, we have ‖gi‖ ≤ Ck, ‖g̃i‖ ≤ Cv.\n‖µ(G)− µ(N)‖ ≤ Cxm− n nm k + C(n− x) 1 m k + C(m− n) 1 m k + C(n− x) 1 n v\nNote the above upper bound holds for any x, thus, we would like to get the minimum of the upper bound respect to x. Rearrange the term, we have\n‖µ(G)− µ(N)‖ ≤ Cx(m− n nm − 1 m )k + Cn 1 m k + C(m− n) 1 m k + C(n− x) 1 n v\n= C 1 m ( 2 − 1 1− )xk + Ck + Cv − 1 n Cxv\n= Cx ( k(2 − 1) m(1− ) − v n ) + Ck + Cv\n= Cx ( k(2 − 1)− v m(1− ) ) + Ck + Cv\nSince when < 0.5, k(2 − 1)− v m(1− ) < 0, we knew that x should be as small as possible to continue the bound. According to our algorithm, we knew n −m = m(1 − ) −m = (1 − 2 )m ≤ x ≤ n = (1− )m. Then, substitute x = (1− 2 )m, we have\n‖µ(G)− µ(N)‖ ≤ Ck(1− 2 )2 − 1 1− + Ck + Cv − Cv 1− 2 1−\n= Ck 3 − 4 2\n1− + Cv 1−" }, { "heading": "A.11 PROOF OF THEOREM 3", "text": "According to algorithm3, we could guarantee that v ≤ k, then, we will have:\n‖µ(G)− µ(N)‖ ≤ Ck 3 − 4 2\n1− + Cv 1−\n≤ Ck 4 − 4 2 1− = 4 Ck = O( √q)(C is constant, k is the norm of q-dimensional vector)" }, { "heading": "A.12 COMPARISON BETWEEN SORTING LOSS LAYER GRADIENT NORM AND SORTING THE", "text": "LOSS VALUE\nAssume we have a d class label y ∈ Rd, where yk = 1, yi = 0, i 6= k. We have two prediction p ∈ Rd, q ∈ Rd. Assume we have a d class label y ∈ Rd, where yk = 1, yi = 0, i 6= k. With little abuse of notation, suppose we have two prediction p ∈ Rd, q ∈ Rd. Without loss of generality, we could assume that p1 has smaller cross entropy loss, which indicates pk ≥ qk For MSE, assume we have opposite result\n‖p− y‖2 ≥ ‖q− y‖2 ⇒ ∑ i 6=k p2i + (1− pk)2 ≥ ∑ i6=k q2i + (1− qk)2 (2)\nFor each pi, i 6= k, We have\nV ar(pi) = E(p 2 i )− E(pi)2 =\n1 d− 1 ∑ i 6=k p2i − 1 (d− 1)2 (1− pk)2 (3)\nThen ∑ i 6=k p2i + (1− pk)2 ≥ ∑ i 6=k q2i + (1− qk)2\n⇒V ari 6=k(pi) + d\n(d− 1)2 (1− pk)2 ≥ V ari 6=k(qi) +\nd\n(d− 1)2 (1− qk)2\n⇒V ari 6=k(pi)− V ari 6=k(qi) ≥ d (d− 1)2 ( (1− qk)2 − (1− pk)2 ) ⇒V ari 6=k(pi)− V ari 6=k(qi) ≥ d\n(d− 1)2 ((pk − qk)(2− pk − qk))\n(4)" } ]
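As a companion to the analysis above, here is an illustrative NumPy sketch (ours, with assumed shapes) of the filtering-based robust mean estimator of Alg. 1, on which the PRL(G) guarantee (Thm. 2) rests: drop the ε-fraction of per-sample gradients with the largest ℓ2 norm and average the remainder.

```python
import numpy as np

def robust_gradient_mean(G, eps):
    """Alg. 1 as described above: G is an (m, d) matrix of per-sample
    gradients; remove the eps-fraction of rows with the largest l2 norm
    and return the empirical mean of the rest as the estimate mu_hat."""
    m = G.shape[0]
    norms = np.linalg.norm(G, axis=1)                  # ||z_i|| for each row
    keep = np.argsort(norms)[: m - int(np.ceil(eps * m))]
    return G[keep].mean(axis=0)
```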
2020
null
SP:6dd9907f23d32802fd10d9405d165269fd1492ee
[ "This paper highlights the issues with the scaling method and histogram binning i.e., underestimate calibration error in scaling methods and failing to preserve classification accuracy, and sample-inefficiency in HB. They use the I-Max concept for binning, which maximizes the mutual information between labels and quantized logits. They claim that their approach mitigates potential loss in ranking performance and allows simultaneous improvement of ranking and calibration performance by disentangling the optimization of bin edges and representatives. They also propose a shared class-wise (sCW) strategy that fits a single calibrator on the merged training sets of all K class-wise problems to improve the sample efficiency." ]
Post-hoc multi-class calibration is a common approach for providing high-quality confidence estimates of deep neural network predictions. Recent work has shown that widely used scaling methods underestimate their calibration error, while alternative Histogram Binning (HB) methods often fail to preserve classification accuracy. When classes have small prior probabilities, HB also faces the issue of severe sample-inefficiency after the conversion into K one-vs-rest class-wise calibration problems. The goal of this paper is to resolve the identified issues of HB in order to provide calibrated confidence estimates using only a small holdout calibration dataset for bin optimization while preserving multi-class ranking accuracy. From an information-theoretic perspective, we derive the I-Max concept for binning, which maximizes the mutual information between labels and quantized logits. This concept mitigates potential loss in ranking performance due to lossy quantization, and by disentangling the optimization of bin edges and representatives allows simultaneous improvement of ranking and calibration performance. To improve the sample efficiency and estimates from a small calibration set, we propose a shared class-wise (sCW) calibration strategy, sharing one calibrator among similar classes (e.g., with similar class priors) so that the training sets of their class-wise calibration problems can be merged to train the single calibrator. The combination of sCW and I-Max binning outperforms the state of the art calibration methods on various evaluation metrics across different benchmark datasets and models, using a small calibration set (e.g., 1k samples for ImageNet).
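As context for the abstract above, the following is a generic sketch of shared class-wise (sCW) binning calibration in the one-vs-rest setting. Note that it uses simple equal-mass bins as a placeholder: the paper's actual contribution, I-Max, instead optimizes the bin edges to maximize the mutual information between labels and quantized logits, so this sketch only illustrates the sCW data-pooling idea.

```python
import numpy as np

def fit_shared_hb(logits, labels, n_bins=15):
    """Fit ONE binning calibrator on the merged training sets of all K
    one-vs-rest problems (sCW). Equal-mass edges stand in for I-Max edges;
    each bin's representative is its empirical positive rate."""
    n, K = logits.shape
    scores = logits.reshape(-1)                              # pooled class-wise logits
    pos = (labels[:, None] == np.arange(K)).reshape(-1)      # pooled binary labels
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(scores, edges)                        # bin index in 0..n_bins-1
    reps = np.array([pos[bins == b].mean() if np.any(bins == b) else 0.0
                     for b in range(n_bins)])
    return edges, reps

def calibrate(logits, edges, reps):
    """Map every class-wise logit to its bin representative (a probability)."""
    return reps[np.digitize(logits, edges)]
```

Pooling the K class-wise score sets before fitting is what gives sCW its sample efficiency: a single calibrator sees K times more data than each per-class calibrator would.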
[ { "affiliations": [], "name": "Kanil Patel" }, { "affiliations": [], "name": "William Beluch" }, { "affiliations": [], "name": "Bin Yang" }, { "affiliations": [], "name": "Michael Pfeiffer" }, { "affiliations": [], "name": "Dan Zhang" } ]
[ { "authors": [ "David Arthur", "Sergei Vassilvitskii" ], "title": "k-means++: the advantages of careful seeding", "venue": "In Proceedings of the ACM-SIAM symposium on Discrete algorithms,", "year": 2007 }, { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": null, "year": 2020 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In Proc. of the IEEE/CVF Int. Conf. on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin Dogus Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple method to improve robustness and uncertainty under data shift", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "G. Huang", "Z. Liu", "L. v. d. Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "B. Ji", "H. Jung", "J. Yoon", "K. Kim", "y. 
Shin" ], "title": "Bin-wise temperature scaling (BTS): Improvement in confidence calibration performance through simple scaling techniques", "venue": "IEEE/CVF International Conference on Computer Vision Workshop (ICCVW),", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Meelis Kull", "Telmo Silva Filho", "Peter Flach" ], "title": "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2017 }, { "authors": [ "Meelis Kull", "Miquel Perello Nieto", "Markus Kängsepp", "Telmo Silva Filho", "Hao Song", "Peter Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration", "venue": "In Advances in Neural Information Processing Systems (NeurIPs),", "year": 2019 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems (NeurIPs),", "year": 2019 }, { "authors": [ "Aviral Kumar", "Sunita Sarawagi", "Ujjwal Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Andrey Malinin", "Bruno Mlodozeniec", "Mark Gales" ], "title": "Ensemble distribution distillation", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Dimitrios Milios", "Raffaello Camoriano", "Pietro Michiardi", "Lorenzo Rosasco", "Maurizio Filippone" ], "title": "Dirichlet-based gaussian processes for large-scale calibrated classification", "venue": "Advances in Neural Information Processing Systems (NeurIPs),", "year": 2018 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory F. Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using Bayesian binning", "venue": "In Proc. of Conference on Artificial Intelligence (AAAI),", "year": 2015 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Jeremy Nixon", "Michael W. Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2019 }, { "authors": [ "Kanil Patel", "William Beluch", "Dan Zhang", "Michael Pfeiffer", "Bin Yang" ], "title": "On-manifold adversarial data augmentation improves uncertainty calibration", "venue": null, "year": 1912 }, { "authors": [ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Łukasz Kaiser", "Geoffrey Hinton" ], "title": "Regularizing neural networks by penalizing confident output distributions", "venue": null, "year": 2017 }, { "authors": [ "John Platt" ], "title": "Probabilities for SV machines", "venue": "In Advances in Large Margin Classifiers, pp. 
61–74,", "year": 1999 }, { "authors": [ "Janis Postels", "Francesco Ferroni", "Huseyin Coskun", "Nassir Navab", "Federico Tombari" ], "title": "Samplingfree epistemic uncertainty estimation using approximated variance propagation", "venue": "In Proc. of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Serim Ryou", "Seong-Gyun Jeong", "Pietro Perona" ], "title": "Anchor loss: Modulating loss scale based on prediction difficulty", "venue": "In Proc. of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A. Alemi" ], "title": "Inception-v4, Inception-ResNet and the impact of residual connections on learning", "venue": "In Proc. of the Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPs),", "year": 2019 }, { "authors": [ "Naftali Tishby", "Fernando C. Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "In Proc. of the 37-th Annual Allerton Conference on Communication, Control and Computing,", "year": 1999 }, { "authors": [ "Juozas Vaicenavicius", "David Widmann", "Carl Andersson", "Fredrik Lindsten", "Jacob Roll", "Thomas Schön" ], "title": "Evaluating model calibration in classification", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Jonathan Wenger", "Hedvig Kjellström", "Rudolph Triebel" ], "title": "Non-parametric calibration for classification", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "David Widmann", "Fredrik Lindsten", "Dave Zachariah" ], "title": "Calibration tests in multi-class classification: A unifying framework", "venue": "In Advances in Neural Information Processing Systems (NeurIPs),", "year": 2019 }, { "authors": [ "Saining Xie", "Ross B. Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "Proc. of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proc. 
of the IEEE International Conference on Computer Vision (ICCV)", "year": 2019 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "In International Conference on Machine Learning (ICML),", "year": 2001 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Transforming classifier scores into accurate multiclass probability estimates", "venue": "In SIGKDD Conference on Knowledge Discovery and Data Mining,", "year": 2002 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jize Zhang", "Bhavya Kailkhura", "T. Han" ], "title": "Mix-n-Match: Ensemble and compositional methods for uncertainty calibration in deep learning", "venue": "In International Conference on Machine Learning (ICML), Vienna,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite great ability in learning discriminative features, deep neural network (DNN) classifiers often make over-confident predictions. This can lead to potentially catastrophic consequences in safety critical applications, e.g., medical diagnosis and autonomous driving perception tasks. A multi-class classifier is perfectly calibrated if among the cases receiving the prediction distribution q, the ground truth class distribution is also q. The mismatch between the prediction and ground truth distribution can be measured using the Expected Calibration Error (ECE) (Guo et al., 2017; Kull et al., 2019).\nSince the pioneering work of (Guo et al., 2017), scaling methods have been widely acknowledged as an efficient post-hoc multi-class calibration solution for modern DNNs. The common practice of evaluating their ECE resorts to histogram density estimation (HDE) for modeling the distribution of the predictions. However, Vaicenavicius et al. (2019) proved that with a fixed number of evaluation bins the ECE of scaling methods is underestimated even with an infinite number of samples. Widmann et al. (2019); Kumar et al. (2019); Wenger et al. (2020) also empirically showed this underestimation phenomena. This deems scaling methods as unreliable calibration solutions, as their true ECEs can be larger than evaluated, putting many applications at risk. Additionally, setting HDE also faces the bias/variance trade-off. Increasing its number of evaluation bins reduces the bias, as the evaluation quantization error is smaller, however, the estimation of the ground truth correctness begins to suffer from high variance. Fig. 1-a) shows that the empirical ECE estimates of both the raw network outputs and the temperature scaling method (TS) (Guo et al., 2017) are sensitive to the number of evaluation\n∗firstname.lastname@de.bosch.com Code available at https://github.com/boschresearch/imax-calibration\n1\nbins. It remains unclear how to optimally choose the number of evaluation bins so as to minimize the estimation error. Recent work (Zhang et al., 2020; Widmann et al., 2019) suggested kernel density estimation (KDE) instead of HDE. However, the choice of the kernel and bandwidth also remains unclear, and the smoothness of the ground truth distribution is hard to verify in practice.\nAn alternative technique for post-hoc calibration is Histogram Binning (HB) (Zadrozny & Elkan, 2001; Guo et al., 2017; Kumar et al., 2019). Note, here HB is a calibration method and is different to the HDE used for evaluating ECEs of scaling methods. HB produces discrete predictions, whose probability mass functions can be empirically estimated without using HDE/KDE. Therefore, its ECE estimate is constant and unaffected by the number of evaluation bins in Fig. 1-a) and it can converge to the true value with increasing evaluation samples (Vaicenavicius et al., 2019), see Fig. 1-b).\nThe most common variants of HB are Equal (Eq.) size (uniformly partitioning the probability interval [0, 1]), and Eq. mass (uniformly distributing samples over bins) binning. These simple methods for multi-class calibration are known to degrade accuracy, since quantization through binning may remove a considerable amount of label information contained by the classifier’s outputs.\nIn this work we show that the key for HB to retain the accuracy of trained classifiers is choosing bin edges that minimize the amount of label information loss. Both Eq. size and mass binning are suboptimal. 
We present I-Max, a novel iterative method for optimizing bin edges with proven convergence. As the location of its bin edges inherently ensures sufficient calibration samples per bin, the bin representatives of I-Max can then be effectively optimized for calibration. The two design objectives, calibration and accuracy, are thus nicely disentangled under I-Max. For multi-class calibration, I-Max adopts the one-vs-rest (OvR) strategy to individually calibrate the prediction probability of each class. To cope with a limited number of calibration samples, we propose to share one binning scheme for calibrating the prediction probabilities of similar classes, e.g., classes with similar class priors or belonging to the same class category. In the small-data regime, we can even choose to fit one binning scheme on the merged training sets of all per-class calibrations. Such a shared class-wise (sCW) calibration strategy greatly improves the sample efficiency of I-Max binning.
I-Max is evaluated according to multiple performance metrics, including accuracy, ECE, Brier and NLL, and compared against benchmark calibration methods across multiple datasets and trained classifiers. For ImageNet, I-Max obtains up to a 66.11% reduction in ECE compared to the baseline and up to a 38.14% reduction compared to the state-of-the-art GP-scaling method (Wenger et al., 2020)." }, { "heading": "2 RELATED WORK", "text": "For confidence calibration, Bayesian DNNs and their approximations, e.g., (Blundell et al., 2015; Gal & Ghahramani, 2016), are resource-demanding methods for capturing predictive model uncertainty. However, applications with limited complexity overhead and latency require sampling-free, single-model based calibration methods. Examples include modifying the training loss (Kumar et al., 2018), scalable Gaussian processes (Milios et al., 2018), sampling-free uncertainty estimation (Postels et al., 2019), data augmentation (Patel et al., 2019; Thulasidasan et al., 2019; Yun et al., 2019; Hendrycks et al., 2020) and ensemble distribution distillation (Malinin et al., 2020). In comparison, a simple approach that requires no retraining of the models is post-hoc calibration (Guo et al., 2017).
Prediction probability (logit) scaling and binning are the two main solutions for post-hoc calibration. Scaling methods use parametric or non-parametric models to adjust the raw logits. Guo et al. (2017) investigated linear models, ranging from the single-parameter based TS to more complicated vector/matrix scaling. To avoid overfitting, Kull et al. (2019) suggested to regularize matrix scaling with an L2 loss on the model weights. Recently, Wenger et al. (2020) adopted a latent Gaussian process for multi-class calibration. Ji et al. (2019) extended TS to a bin-wise setting, learning separate temperatures for various confidence subsets. To improve the expressive capacity of TS, an ensemble of temperatures was adopted by Zhang et al. (2020). Owing to the continuous outputs of scaling methods, a critical issue discovered in recent work is that their empirical ECE estimate is not only non-verifiable (Kumar et al., 2019), but also asymptotically smaller than the ground truth (Vaicenavicius et al., 2019). Recent work (Zhang et al., 2020; Widmann et al., 2019) exploited KDEs for an improved ECE evaluation; however, the parameter setting requires further investigation. Nixon et al. 
(2019) and (Ashukha et al., 2020) discussed potential issues of the ECE metric, and the former suggested to 1) use equal mass binning for ECE evaluation, 2) measure both top-1 and class-wise ECE to evaluate multi-class calibrators, and 3) only include predictions with a confidence above some epsilon in the class-wise ECE score.
As an alternative to scaling, HB quantizes the raw confidences with either Eq. size or Eq. mass bins (Zadrozny & Elkan, 2001). It offers asymptotically convergent ECE estimation (Vaicenavicius et al., 2019), but is less sample efficient than scaling methods and also suffers from accuracy loss (Guo et al., 2017). Kumar et al. (2019) proposed to perform scaling before binning for an improved sample efficiency. Isotonic regression (Zadrozny & Elkan, 2002) and Bayesian binning into quantiles (BBQ) (Naeini et al., 2015) are often viewed as binning methods. However, their ECE estimates face the same issue as those of scaling methods: though isotonic regression fits a piecewise linear function, its predictions are continuous as they are interpolated for unseen data; BBQ considers multiple binning schemes with different numbers of bins and combines them using a continuous Bayesian score, resulting in continuous predictions.
In this work, we improve the current HB design by casting bin optimization into an MI maximization problem. Furthermore, our findings can also be used to improve scaling methods." }, { "heading": "3 METHOD", "text": "Here we introduce the I-Max binning scheme, which addresses the issues of HB in terms of preserving label information in multi-class calibration. After the problem setup in Sec. 3.1, Sec. 3.2 presents a sample-efficient technique for one-vs-rest calibration. In Sec. 3.3 we formulate the training objective of binning as MI maximization and derive a simple algorithm for I-Max binning." }, { "heading": "3.1 PROBLEM SETUP", "text": "We address supervised multi-class classification tasks, where each input x ∈ X belongs to one of K classes, and the ground truth labels are one-hot encoded, i.e., y = [y_1, y_2, . . . , y_K] ∈ {0, 1}^K. Let f : X → [0, 1]^K be a DNN trained using the cross-entropy loss. It maps each x onto a probability vector q = [q_1, . . . , q_K] ∈ [0, 1]^K, which is used to rank the K possible classes of the current instance, e.g., argmax_k q_k being the top-1 ranked class. As the trained classifier tends to overfit to the cross-entropy loss rather than the accuracy (i.e., the 0/1 loss), q as the prediction distribution is typically poorly calibrated. A post-hoc calibrator h that revises q can deliver an improved performance. To evaluate the calibration performance of h ◦ f, the class-wise ECE averaged over the K classes is a common metric, measuring the expected deviation of the predicted per-class confidence after calibration, i.e., h_k(q), from the ground truth probability p(y_k = 1|h(q)):
cwECE(h ◦ f) = (1/K) Σ_{k=1}^{K} E_{q=f(x)} [ | p(y_k = 1|h(q)) − h_k(q) | ]. (1)
When h is a binning scheme, h_k(q) is discrete and thus repetitive. We can then empirically set p(y_k = 1|h(q)) as the frequency of label-1 samples among those receiving the same h_k(q). In contrast, scaling methods are continuous: it is unlikely that two samples attain the same h_k(q), thus requiring additional quantization, i.e., applying HDE for modeling the distribution of h_k(q), or alternatively using KDE. Ideally, we should compare the whole distribution h(q) with the ground truth p(y|h(q)). However, neither HDE nor KDE scales well with the number of classes. 
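To make the discrete case concrete, the following minimal numpy sketch (our own illustration, not the authors' released code; the function name is ours) estimates the class-wise ECE of (1) for a calibrator with discrete outputs: samples are grouped by their exact calibrated value, so the ground truth frequency per group is directly estimable and no HDE/KDE is needed. For continuous scaling outputs, such exact grouping is impossible, which is precisely where density estimation and its evaluation pitfalls enter.
```python
import numpy as np

def classwise_ece_discrete(calibrated, labels_onehot):
    """Empirical class-wise ECE of eq. (1) for a DISCRETE-output calibrator.

    calibrated:    (N, K) calibrated probabilities; column k takes only a few
                   distinct values (e.g., the bin representatives of HB).
    labels_onehot: (N, K) one-hot ground truth labels.
    """
    n, k = calibrated.shape
    ece = 0.0
    for c in range(k):
        q_c, y_c = calibrated[:, c], labels_onehot[:, c]
        for v in np.unique(q_c):          # each distinct calibrated value
            mask = q_c == v
            # empirical p(y_c = 1 | h_c(q) = v): label-1 frequency in the group
            freq = y_c[mask].mean()
            ece += mask.mean() * abs(freq - v) / k
    return ece
```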
Given these difficulties, the multi-class ECE evaluation often boils down to the one-dimensional class-wise ECE as in (1), or the top-1 ECE, i.e., E[ | p(y_{k*} = 1|h(q)) − max_k h_k(q) | ] with k* = argmax_k h_k(q)." }, { "heading": "3.2 ONE-VS-REST (OVR) STRATEGY FOR MULTI-CLASS CALIBRATION", "text": "HB was initially developed for two-class calibration. When dealing with multi-class calibration, it separately calibrates the prediction probability q_k of each class in a one-vs-rest (OvR) fashion: for any class-k, HB takes y_k as the binary label for a two-class calibration task in which class-1 means y_k = 1 and class-0 collects all other K − 1 classes. It then revises the prediction probability q_k of y_k = 1 by mapping its logit λ_k := log q_k − log(1 − q_k) onto a given number of bins, and reproducing it with the calibrated prediction probability. Here, we choose to bin the logit λ_k instead of q_k, as the former is unbounded, i.e., λ_k ∈ R, which eases the bin edge optimization process. Nevertheless, as q_k and λ_k have a monotonic bijective relation, binning q_k and binning λ_k are equivalent. We note that after the K class-wise calibrations we avoid the extra normalization step of (Guo et al., 2017): after OvR marginalizes the multi-class predictive distribution, each class is treated independently (see Sec. A1).
The calibration performance of HB depends on the setting of its bin edges and representatives. From a calibration set C = {(y, x)}, we can construct K training sets, i.e., S_k = {(y_k, λ_k)} ∀k, under the one-vs-rest strategy, and then optimize the class-wise (CW) HB over each training set. As the two common solutions in the literature, Eq. size and Eq. mass binning focus on bin representative optimization. Their bin edge locations, on the other hand, are either fixed (independent of the calibration set) or only ensure a balanced training sample distribution over the bins. After binning the logits in the calibration set S_k = {(y_k, λ_k)}, the bin representatives are set as the empirical frequencies of samples with y_k = 1 in each bin. To improve the sample efficiency of the bin representative optimization, Kumar et al. (2019) proposed to perform scaling-based calibration before HB. Namely, after properly scaling the logits {λ_k}, the bin representative per bin is set as the averaged sigmoid response of the scaled logits in S_k belonging to that bin.
However, pre-scaling does not resolve the sample inefficiency issue arising from a small class prior p_k. The two-class ratio in S_k is p_k : 1 − p_k. When p_k is small, we will need a large calibration set C = {(y, x)} to collect enough class-1 samples in S_k for setting the bin representatives. To address this, we propose to merge {S_k} across similar classes and then use the merged set S for HB training, yielding one binning scheme shareable across multiple per-class calibration tasks, i.e., shared class-wise (sCW) binning instead of CW binning trained on the respective S_k. In Sec. 4, we respectively experiment with using a single binning scheme for all classes in the balanced multi-class setting, and with sharing one binning scheme among classes with similar class priors in the imbalanced multi-class setting. Note that both S_k and S serve as empirical approximations to the inaccessible ground truth distribution p(y_k, λ_k) for bin optimization. The former suffers from high variance, arising from insufficient samples (Fig. A1-a), while the latter is biased due to containing samples drawn from the other classes (Fig. A1-b). 
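As a concrete illustration of this set construction (our own sketch with hypothetical names; the eps clipping is our addition for numerical safety), the per-class OvR sets S_k and the merged sCW set S can be built from a calibration set of predicted probabilities as follows:
```python
import numpy as np

def ovr_logit_sets(probs, labels, eps=1e-12):
    """One-vs-rest training sets S_k = {(y_k, lambda_k)} and their sCW merge S.

    probs:  (N, K) predicted probabilities q of the classifier.
    labels: (N,) integer ground-truth classes.
    """
    n, k = probs.shape
    # lambda_k = log q_k - log(1 - q_k), the unbounded OvR logit
    lam = np.log(probs + eps) - np.log(1.0 - probs + eps)
    y = np.eye(k, dtype=int)[labels]                       # one-hot two-class labels
    per_class = [(y[:, c], lam[:, c]) for c in range(k)]   # S_k, one per class
    merged = (y.reshape(-1), lam.reshape(-1))              # S: merge S_k over all classes
    return per_class, merged
```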
As the calibration set size is usually small, the variance is expected to outweigh the bias in the overall approximation error (see an empirical analysis in Sec. A2)." }, { "heading": "3.3 BIN OPTIMIZATION VIA MUTUAL INFORMATION (MI) MAXIMIZATION", "text": "Binning can be viewed as a quantizer Q that maps the real-valued logit λ ∈ R to the bin interval m ∈ {1, . . . , M} if λ ∈ I_m = [g_{m−1}, g_m), where M is the total number of bin intervals, and the bin edges g_m are sorted (g_{m−1} < g_m, with g_0 = −∞ and g_M = ∞). Any logit binned to I_m is reproduced by the same bin representative r_m. In the context of calibration, the bin representative r_m assigned to the logit λ_k is used as the calibrated prediction probability of class-k. As multiple classes can be assigned the same bin representative, we then encounter ties when making top-k predictions based on calibrated probabilities. Therefore, binning as lossy quantization generally does not preserve the raw logit-based ranking performance, and is subject to potential accuracy loss.
Unfortunately, increasing M to reduce the quantization error is not a good solution here. For a given calibration set, the number of samples per bin generally reduces as M increases, and a reliable frequency estimation for setting the bin representatives {r_m} demands sufficient samples per bin. Considering that the top-k accuracy reflects how well the ground truth label can be recovered from the logits, we propose bin optimization via maximizing the MI between the quantized logits Q(λ) and the label y:
{g*_m} = argmax_{Q: {g_m}} I(y; m = Q(λ)) (a)= argmax_{Q: {g_m}} H(m) − H(m|y), (2)
where the index m is viewed as a discrete random variable with P(m|y) = ∫_{g_{m−1}}^{g_m} p(λ|y) dλ and P(m) = ∫_{g_{m−1}}^{g_m} p(λ) dλ, and the equality (a) is based on the relation of MI to the entropy H(m) and the conditional entropy H(m|y) of m. Such a formulation offers a quantizer Q* that is optimal at preserving the label information for a given budget on the number of bins. Unlike in the design of distortion-based quantizers, the reproducer values of the raw logits, i.e., the bin representatives {r_m}, are not part of the optimization space, as it is sufficient to know the mapped bin index m of each logit. Once the bin edges {g*_m} are obtained, the bin representative r_m achieving zero calibration error shall equal P(y = 1|m), which can be empirically estimated from the samples within the bin interval I_m.
It is interesting to analyze the objective function after the equality (a) in (2). The first term H(m) is maximized if P(m) is uniform, which is attained by Eq. mass binning. A uniform sample distribution over the bins is a sample-efficient strategy to optimize the bin representatives for the sake of calibration. However, it does not consider any label information, and thus can suffer from severe accuracy loss. Through MI maximization, we can view I-Max as revising Eq. mass binning by incorporating the label information into the optimization objective, i.e., adding the second term H(m|y). As a result, I-Max not only enjoys a well-balanced sample distribution for calibration, but also maximally preserves label information for accuracy.
In the example of Fig. 2, the bin edges of I-Max binning are densely located in an area where the uncertainty of y given the logit is high. This uncertainty results from small gaps between the top class predictions. With small bin widths, such nearby prediction logits are more likely to land in different bins, and thus remain distinguishable after binning. On the other hand, Eq. 
mass binning has a single bin stretching across this high-uncertainty area, due to an imbalanced ratio between the p(λ|y = 1) and p(λ|y = 0) samples. Eq. size binning follows a pattern closer to I-Max binning. However, its very narrow bin widths around zero may introduce large empirical frequency estimation errors when setting the bin representatives.
To solve problem (2), we formulate an equivalent problem.
Theorem 1. The MI maximization problem given in (2) is equivalent to
max_{Q: {g_m}} I(y; m = Q(λ)) ≡ min_{{g_m, φ_m}} L({g_m, φ_m}), (3)
where the loss L({g_m, φ_m}) is defined as
L({g_m, φ_m}) := Σ_{m=0}^{M−1} ∫_{g_m}^{g_{m+1}} p(λ) Σ_{y′∈{0,1}} P(y = y′|λ) log [ P(y = y′) / σ[(2y′ − 1)φ_m] ] dλ, (4)
and {φ_m}, a set of real-valued auxiliary variables, are introduced here to ease the optimization.
Figure 3: MI evaluation: The KDEs of p(λ|y) for y ∈ {0, 1} shown in Fig. 2 are used as the ground truth distribution to synthesize a dataset Skde and evaluate the MI of Eq. mass, Eq. size, and I-Max binning trained over Skde. (a) Convergence behavior: the developed iterative solution for I-Max bin optimization over Skde successfully increases the MI over the iterations, approaching the theoretical upper bound I(y;λ). For comparison, I-Max is initialized with both Eq. size and Eq. mass bin edges, both of which are suboptimal at label information preservation. (b) Label-information vs. compression rate: we compare the three binning schemes with 2 to 16 quantization levels against the IB limit (Tishby et al., 1999) on the label-information I(y;Q(λ)) vs. the compression rate I(λ;Q(λ)). The information-rate pairs achieved by I-Max binning are very close to the limit. The information loss of Eq. mass binning is considerably larger, whereas Eq. size binning gets stuck in the low-rate regime, failing to reach the upper bound even with more bins.
Proof. See Sec. A3 for the proof.
Next, we compute the derivatives of the loss L with respect to {g_m, φ_m}. When the conditional distribution P(y|λ) takes the sigmoid model, i.e., P(y|λ) ≈ σ[(2y − 1)λ], the stationary points of L, zeroing the gradients over {g_m, φ_m}, have a closed-form expression
g_m = log { log[(1 + e^{φ_m})/(1 + e^{φ_{m−1}})] / log[(1 + e^{−φ_{m−1}})/(1 + e^{−φ_m})] }, φ_m = log { ∫_{g_m}^{g_{m+1}} σ(λ)p(λ)dλ / ∫_{g_m}^{g_{m+1}} σ(−λ)p(λ)dλ } ≈ log { Σ_{λ_n∈S_m} σ(λ_n) / Σ_{λ_n∈S_m} σ(−λ_n) }, (5)
where the approximation for φ_m arises from using the logits in the training set S as an empirical approximation to p(λ), with S_m := S ∩ [g_m, g_{m+1}). So, we can solve the problem by iteratively and alternately updating {g_m} and {φ_m} based on (5) (see Algo. 1 in the appendix for pseudocode). The convergence and initialization of this iterative method, as well as the sigmoid-model assumption, are discussed along with the proof of Theorem 1 in Sec. A3.
As the iterative method operates under an approximation of the inaccessible ground truth distribution p(y, λ), we synthesize an example, see Fig. 3, to assess its effectiveness. As quantization can only reduce the MI, we evaluate I(y;λ), serving as the upper bound in Fig. 3-a) for I(y;Q(λ)). Among the three realizations of Q, I-Max achieves higher MI than Eq. size and Eq. mass, and more importantly, it approaches the upper bound over the iterations. 
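For concreteness, here is a compact numpy sketch of the alternating updates in (5). It is our own illustration rather than the released implementation: a simple quantile initialization of {φ_m} stands in for the JSD-based k-means++ initialization of Sec. A3.4, and the empty-bin guard and clipping are our additions.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def imax_binning(logits, num_bins=15, iters=200):
    """Alternating closed-form updates of eq. (5) on the merged logit set S.

    Returns the interior bin edges g_1 < ... < g_{M-1} and auxiliaries phi_m.
    """
    lam = np.asarray(logits, dtype=np.float64)
    # quantile init of phi (the paper uses a JSD-based k-means++ instead)
    phi = np.quantile(lam, np.linspace(0.05, 0.95, num_bins))
    for _ in range(iters):
        # g_m update: log(1 + e^x) computed stably as logaddexp(0, x)
        num = np.logaddexp(0.0, phi[1:]) - np.logaddexp(0.0, phi[:-1])
        den = np.logaddexp(0.0, -phi[:-1]) - np.logaddexp(0.0, -phi[1:])
        edges = np.log(num / den)                  # g_1 .. g_{M-1}
        # phi_m update: log-ratio of sigmoid masses inside each bin
        idx = np.digitize(lam, edges)              # bin index of every logit
        for m in range(num_bins):
            in_bin = lam[idx == m]
            if in_bin.size:                        # keep old phi_m if bin empty
                phi[m] = np.log(sigmoid(in_bin).sum() / sigmoid(-in_bin).sum())
    return edges, phi
```
The calibrated probability of a test logit is then the representative of its bin, e.g., the empirical label frequency of the calibration samples falling into that bin (or, as in Sec. 4.2, an averaged scaled prediction).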
Next, we assess the performance within the framework of the information bottleneck (IB) (Tishby et al., 1999), see Fig. 3-b). In the context of our problem, IB tackles min_Q (1/β) I(λ;Q(λ)) − I(y;Q(λ)), with the weight factor β > 0 balancing between 1) maximizing the information rate I(y;Q(λ)) and 2) minimizing the compression rate I(λ;Q(λ)). By varying β, IB gives the maximal achievable information rate for a given compression rate. Fig. 3-b) shows that I-Max approaches the theoretical limits and provides an information-theoretic perspective on the sub-optimal performance of the alternative binning schemes. Sec. A3.2 contains a more detailed discussion of the connection between IB and our problem formulation." }, { "heading": "4 EXPERIMENTS", "text": "Datasets and Models. We evaluate post-hoc calibration methods on four benchmark datasets, i.e., ImageNet (Deng et al., 2009), CIFAR-10/100 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011), and across various modern DNN architectures. More details are reported in Sec. A8.1.
Training and Evaluation Details. We perform class-balanced random splits of the test data, unless stated otherwise: the calibration and evaluation set sizes are both 25k for ImageNet, and 5k for CIFAR-10/100. In contrast to ImageNet and CIFAR-10/100, the test set of SVHN is class imbalanced; we evenly split it into a calibration and an evaluation set of size 13k each. All reported numbers are the means across 5 random splits; stds can be found in the appendix. Note that some calibration methods only use a subset of the available calibration samples for training, showing their sample efficiency. Further calibrator training details are provided in Sec. A8.1.
We empirically evaluate MI, accuracy (top-1 and top-5 ACCs), ECE (class-wise and top-1), Brier and NLL; the latter are shown in the appendix. Analogous to (Nixon et al., 2019), we use thresholding when evaluating the class-wise ECE (CWECEthr). Without thresholding, the empirical class-wise ECE score may be misleading. When a class-k has a small class prior (e.g., 0.01 or 0.001), the empirical class-wise ECE score will be dominated by prediction samples where class-k is not the ground truth. For these cases, a properly trained classifier will often not rank class-k among the top classes and thus yields only small calibration errors. While it is good to have many cases with small calibration errors, they should not wash out the calibration errors of the remaining cases (prone to poor calibration) through performance averaging. These include (1) class-k is the ground truth class but is not correctly ranked, and (2) the classifier mis-classifies some class-j as class-k. Thresholding remedies the washing-out by focusing on the crucial cases (i.e., only averaging across cases where the prediction for class-k is above a threshold). In all experiments, our primary choice is to set the threshold according to the class prior, for the reason that class-k is unlikely to be the ground truth if its a-posteriori probability falls below its prior after observing the sample.
While the empirical ECE estimation of binning schemes is simple, we resort to HDE with 100 equal-size evaluation bins (Wenger et al., 2020) for scaling methods. Sec. A6 also reports the results attained by HDE with additional binning schemes and by KDE. For the HDE-based ones, we notice that with 100 evaluation bins, the ECE estimate is insensitive to the choice of binning scheme." }, { "heading": "4.1 EQ. SIZE, EQ. MASS VS. I-MAX BINNING", "text": "In Tab. 
1, we compare three binning schemes: Eq. size, Eq. mass and I-Max binning. The accuracy performances of the binning schemes are proportional to their MI: Eq. mass binning is highly suboptimal at label information preservation, and thus shows a severe accuracy drop. The accuracy of Eq. size binning is closer to that of I-Max binning, but still lower, in particular at Acctop5. Also note that I-Max approaches the MI theoretical limit of I(y;λ) = 0.0068. The advantages of I-Max become even more prominent when comparing the NLLs of the binning schemes. For all ECE evaluation metrics, I-Max binning improves on the baseline calibration performance and outperforms Eq. size binning. Eq. mass binning is out of this comparison's scope, as its poor accuracy renders the method impractical. Overall, I-Max successfully mitigates the negative impact of quantization on the ACCs while still providing an improved and verifiable ECE performance. Additionally, one-for-all sCW I-Max achieves an even better calibration with only 1k calibration samples, instead of the standard CW binning with 25k calibration samples, highlighting the effectiveness of the sCW strategy.
Furthermore, it is interesting to note that the CWECE of the Baseline classifier is very small, i.e., 0.000442, so it may appear as if the Baseline classifier were well calibrated. However, its top1ECE is much larger, i.e., 0.0357. Such inconsistent observations disappear after thresholding the class-wise ECE with the class prior. This example confirms the necessity of thresholding the class-wise ECE.
In Sec. A5 we perform additional ablations on the number of bins and calibration samples. Accordingly, a post-hoc analysis investigates how the quantization error of the binning schemes changes the ranking order. The observations are consistent with the intuition behind the problem formulation (see Sec. 3.3) and the empirical results from Tab. 1: MI maximization is a proper criterion for multi-class calibration and maximally mitigates the potential accuracy loss." }, { "heading": "4.2 SCALING VS. I-MAX BINNING", "text": "In Tab. 2, we compare I-Max binning to benchmark scaling methods. Namely, matrix scaling with L2 regularization (Kull et al., 2019) has a large model capacity compared to other parametric scaling methods, while TS (Guo et al., 2017) only uses a single parameter and MnM (Zhang et al., 2020) uses three temperatures as an ensemble of TS (ETS). As a non-parametric method, GP (Wenger et al., 2020) yields state-of-the-art calibration performance. Eight additional scaling methods can be found in Sec. A10. Benefiting from its model capacity, matrix scaling achieves the best accuracy. I-Max binning achieves the best calibration on CIFAR-100; on ImageNet, it has the best CWECE, and is similar to GP on top1ECE. For a broader scope of comparison, we refer to Sec. A9.
To showcase the complementary nature of scaling and binning, we investigate combining binning with GP (a top-performing non-parametric scaling method, though with the drawback of high complexity) and TS (a commonly used scaling method). Here, we propose to bin the raw logits and to use the GP/TS-scaled logits of the samples per bin for setting the bin representatives, replacing the empirical frequency estimates. As GP is then only needed during the calibration learning phase, its complexity is no longer an issue. 
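A minimal sketch of this combination under our reading of the procedure (names are ours): the bin assignment uses the I-Max edges learned on the raw logits, while each bin representative is the averaged scaled prediction (e.g., from TS or GP) of the calibration samples falling into that bin, replacing the empirical label frequency.
```python
import numpy as np

def bin_reps_from_scaling(raw_ovr_logits, scaled_probs, edges):
    """Bin representatives for 'I-Max w. TS/GP'.

    raw_ovr_logits: (N,) one-vs-rest logits; the binning acts on these.
    scaled_probs:   (N,) per-sample class probability after TS/GP scaling.
    edges:          interior I-Max bin edges learned on the raw logits.
    """
    idx = np.digitize(raw_ovr_logits, edges)
    reps = np.empty(len(edges) + 1)
    for m in range(len(reps)):
        sel = idx == m
        # averaged scaled prediction replaces the empirical label frequency
        reps[m] = scaled_probs[sel].mean() if sel.any() else 0.5
    return reps

def calibrate(test_ovr_logits, edges, reps):
    return reps[np.digitize(test_ovr_logits, edges)]
```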
Being mutually beneficial, GP helps improve the ACCs and ECEs of binning, i.e., a marginal ACC drop of 0.16% (0.01%) on Acctop1 for ImageNet (CIFAR100) and 0.24% on Acctop5 for ImageNet, and a large ECE reduction of 38.27% (49.78%) in CWECEcls−prior and 66.11% (76.07%) in top1ECE of the baseline for ImageNet (CIFAR100)." }, { "heading": "4.3 SHARED CLASS WISE HELPS SCALING METHODS", "text": "Though free of quantization loss, some scaling methods, i.e., Beta (Kull et al., 2017), Isotonic regression (Zadrozny & Elkan, 2002), and Platt scaling (Platt, 1999), suffer from even more severe accuracy degradation than I-Max binning. As they also use the one-vs-rest strategy for multi-class calibration, we find that the proposed shared CW (sCW) strategy is beneficial for reducing their accuracy loss and improving their ECE performance, with only 1k calibration samples, see Tab. 3." }, { "heading": "4.4 IMBALANCED MULTI-CLASS SETTING", "text": "Lastly, we turn our experiments to an imbalanced multi-class setting. The adopted SVHN dataset has non-uniform class priors, ranging from 6% (e.g., digit 8) to 19% (e.g., digit 0). We reproduce Tab. 2 for SVHN, yielding Tab. 4. In order to better control the bias caused by the calibration set merging in the imbalanced multi-class setting, the former one-for-all sCW strategy of the balanced multi-class setting changes to sharing I-Max among classes with similar class priors. Despite the class imbalance, I-Max and its variants perform best compared to the other calibrators, similar to Tab. 2. This shows that both I-Max and the sCW strategy can generalize to the imbalanced multi-class setting.
In Tab. 4, we additionally evaluate the class-wise ECE at multiple thresholds. We ablate various threshold settings, namely, 1) 0 (no thresholding); 2) the class prior; 3) 1/K (any class with prediction probability below 1/K will not be the top-1); and 4) a relatively large number, 0.5 (the case when the confidence in class-k outweighs that in NOT class-k). We observe that I-Max and its variants are consistently top performing across the different thresholds." }, { "heading": "5 CONCLUSION", "text": "We proposed I-Max binning for multi-class calibration, which maximally preserves the label information under quantization, reducing potential accuracy losses. Using the shared class-wise (sCW) strategy, we also addressed the sample-inefficiency issue of binning and scaling methods that rely on one-vs-rest (OvR) for multi-class calibration. Our experiments showed that I-Max yields consistent class-wise and top-1 calibration improvements over multiple datasets and model architectures, outperforming HB and state-of-the-art scaling methods. Combining I-Max with scaling methods offers further calibration performance gains, and more importantly, ECE estimates that can converge to the ground truth in the large-sample limit.
Future work will investigate extensions of I-Max that jointly calibrate multiple classes, and thereby directly model class correlations. Interestingly, even on datasets such as ImageNet, which contain several closely related classes, there is no clear evidence that methods that do model class correlations, e.g., Mtrx. Scal., capture uncertainties better. In fact, I-Max empirically outperforms such methods, although all classes are calibrated independently under the OvR assumption. Non-OvR based methods may fail for various reasons, such as under-parameterized models (e.g., TS), limited data (e.g., Mtrx. Scal.) or complexity constraints (e.g., GP). 
Joint class calibration therefore strongly relies on new sample-efficient evaluation measures that estimate how accurately class correlations are modeled, and which can be included as additional optimization criteria." }, { "heading": "A1 NO EXTRA NORMALIZATION AFTER K CLASS-WISE CALIBRATIONS", "text": "There is a group of calibration schemes that rely on the one-vs-rest conversion to turn multi-class calibration into K class-wise calibrations, e.g., histogram binning (HB), Platt scaling and Isotonic regression. After per-class calibration, the calibrated prediction probabilities of all classes no longer fulfill the constraint, i.e., Σ_{k=1}^{K} q_k ≠ 1. An extra normalization step was taken in Guo et al. (2017) to regain the normalization constraint. Here, we note that this extra normalization is unnecessary and partially undoes the per-class calibration effect. For HB, normalization will make its outputs continuous like those of any other scaling method, thereby suffering from the same issue at ECE evaluation.
The one-vs-rest strategy essentially marginalizes the multi-class predictive distribution over each class. After such marginalization, each class and its prediction probability shall be treated independently, no longer being constrained by the multi-class normalization constraint. This is analogous to training a CIFAR or ImageNet classifier with a sigmoid rather than a softmax cross-entropy loss, e.g., Ryou et al. (2019). At training and test time, each class prediction probability is individually taken from the respective sigmoid response without normalization. The class with the largest response is then top-ranked, and normalization itself has no influence on the ranking performance.
A2 S VS. Sk AS EMPIRICAL APPROXIMATIONS TO p(λk, yk) FOR BIN OPTIMIZATION
In Sec. 3.2 of the main paper, we discussed the sample inefficiency issue arising when there are classes with small class priors. Fig. A1-a) shows an example for ImageNet with 1k classes. The class prior of class-394 is about 0.001. Among the 10k calibration samples, we can only collect 10 samples whose ground truth is class-394. Estimating the bin representatives from these 10 samples is highly unreliable, resulting in poor calibration performance.
To tackle this, we proposed to merge the training sets {Sk} across a selected set of classes (e.g., with similar class priors, belonging to the same class category, or all classes) and use the merged S to train a single binning scheme for calibrating these classes, i.e., shared class-wise (sCW) instead of CW binning. Fig. A1-b) shows that after merging over the 1k ImageNet classes, the set S has sufficient numbers from both the positive y = 1 and negative y = 0 class under the one-vs-rest conversion. Tab. 1 showed the benefits of sCW over CW binning. Tab. 3 showed that our sCW proposal is also beneficial to scaling methods which use one-vs-rest for multi-class calibration.
As pointed out in Sec. 3.2, both S and Sk are empirical approximations to the inaccessible ground truth p(λk, yk) for bin optimization. In Fig. A2, we empirically analyze their approximation errors. From the CIFAR10 test set, we take 5k samples to approximate the per-class logit distribution p(λk|yk = 1) by means of histogram density estimation, and then use it as the baseline for comparison, i.e., BSk in Fig. A2. (Here, we focus on p(λk|yk = 1) as its empirical estimation suffers from small class priors, being much more challenging than p(λk|yk = 0), as illustrated in Fig. 
A1.)
Figure A1: Histogram of ImageNet (InceptionResNetv2) logits for (a) CW and (b) sCW training; (a) training set Sk=394 for CW binning (p(λ|y = 1): 10 samples; p(λ|y = 0): 9.99k samples; p(y=1)=0.001), (b) training set S for shared-CW binning (p(λ|y = 1): 10k samples; p(λ|y = 0): 9990k samples). By means of the set merging strategy to handle the two-class imbalance 1 : 999, S has K=1000 times more class-1 samples than Sk with the same 10k calibration samples from C.
Figure A2: Empirical approximation error of S vs. Sk (JSD(BSk‖S) vs. JSD(BSk‖Sk) per class, evaluated at 100 up to 5000 samples), where the Jensen-Shannon divergence (JSD) is used to measure the difference between the empirical distributions underlying the training sets for class-wise bin optimization. Overall, the merged set S is a more sample-efficient choice than Sk.
The rest of the 5k samples in the CIFAR10 test set are reserved for constructing Sk and S. For each class, we respectively evaluate the square root of the Jensen-Shannon divergence (JSD) from the baseline BSk to the empirical distribution of S or Sk attained at different numbers of samples.
In general, Fig. A2 confirms that variance (due to not enough samples) outweighs bias (due to training set merging). Nevertheless, sCW does not always have smaller JSDs than CW; for instance, for class 7 with more than 2k samples, the blue bar \"sCW\" is larger than the orange bar \"CW\". So, for class-7, the bias of merging logits starts outweighing the variance when the number of samples exceeds 2k. Unfortunately, we do not have more samples to further evaluate the JSDs, i.e., to make the variance sufficiently small to reveal the bias impact. Another reason that we do not observe large JSDs of sCW for CIFAR10 is that the logit distributions of the 10 classes are similar. Therefore, the bias of sCW is small, making CIFAR10 a good use case for sCW. From CIFAR10 to CIFAR100 and ImageNet, there are more classes with even smaller class priors. Therefore, we expect the sample inefficiency issue of Sk to become more critical, and it will be beneficial to exploit sCW for bin optimization as well as for other methods based on the one-vs-rest conversion for multi-class calibration.
Note, for the JSD evaluation, the histogram estimator sets the bin number as the maximum of the 'sturges' and 'fd' estimators, both of which adapt their bin setting to the number of samples." }, { "heading": "A3 PROOF OF THEOREM 1 AND ALGORITHM DETAILS OF I-MAX", "text": "In this section, we prove Theorem 1 in Sec. 3.3, discuss the connection to the information bottleneck (IB) (Tishby et al., 1999), analyze the convergence behavior of the iterative method derived in Sec. 3.3, and modify the k-means++ algorithm (Arthur & Vassilvitskii, 2007) for initialization. To assist the implementation of the iterative method, we further provide the pseudocode and perform a complexity/memory cost analysis."
}, { "heading": "A3.1 PROOF OF THEOREM 1", "text": "Theorem 1. The mutual information (MI) maximization problem given as follows:\n{g∗m} = arg max Q: {gm} I(y;m = Q(λ)) (A1)\nis equivalent to\nmax Q: {gm} I(y;m = Q(λ)) ≡ min {gm,φm} L({gm, φm}) (A2)\nwhere the loss L({gm, φm}) is defined as\nL({gm, φm}) ∆ = M−1∑ m=0 ∫ gm+1 gm p(λ) ∑ y′∈{0,1} P (y = y′|λ) log P (y = y ′) Pσ(y = y′;φm) dλ (A3)\nwith Pσ(y;φm) ∆ = σ [(2y − 1)φm] . (A4)" }, { "heading": "As a set of real-valued auxiliary variables, {φm} are introduced here to ease the optimization.", "text": "Proof . Before staring our proof, we note that the upper-case P indicates probability mass functions of discrete random variables, e.g., the label y ∈ {0, 1} and the bin interval index m ∈ {1, . . . ,M}; whereas the lower-case p is reserved for probability density functions of continuous random variables, e.g., the raw logit λ ∈ R. The key to prove the equivalence is to show the inequality\nI(y;m = Q(λ)) ≥ −L({gm, φm}), (A5)\nand the equality is attainable by minimizing L over {φm}. By the definition of MI, we firstly expand I(y;m = Q(λ)) as\nI(y;m = Q(λ)) = M−1∑ i=0 ∫ gm+1 gm p(λ) ∑ y′∈{0,1} P (y = y′|λ) log P (y = y ′|m) P (y = y′) dλ, (A6)\nwhere the conditional distribution P (y|m) is given as\nP (y|m) = P (y|λ ∈ [gm, gm+1)) = P (y)\n∫ gm+1 gm\np(λ|y)dλ∫ gm+1 gm p(λ)dλ =\n∫ gm+1 gm\np(y|λ)P (y)dλ∫ gm+1 gm p(λ)dλ . (A7)\nFrom the above expression, we note that MI maximization effectively only accounts to the bin edges {gm}. The bin representatives can be arbitrary as long as they can indicate the condition λ ∈ [gm, gm+1). So, the bin interval index m is sufficient to serve the role in conditioning the probability mass function of y, i.e., P (y|m). After optimizing the bin edges, we have the freedom to set the bin representatives for the sake of post-hoc calibration.\nA3\nNext, based on the MI expression, we compute its sum with L\nI(y;Q(λ)) + L({gm, φm}) = M−1∑ i=0 ∫ gm+1 gm p(λ) ∑ y′∈{0,1} P (y = y′|λ)dλ log P (y = y ′|m) Pσ(y = y′;φm)\n(a) = M−1∑ i=0 P (m) ∑ y′∈{0,1} P (y = y′|m) log P (y = y ′|m) Pσ(y = y′;φm) (b) =\nM−1∑ i=0 P (m)KLD [P (y = y′|m)‖Pσ(y = y′;φm)]\n(c) ≥ 0. (A8)\nThe equality (a) is based on∫ gm+1 gm\np(λ)P (y = y′|λ)dλ = P (y = y′, λ ∈ [gm, gm+1)) = P (λ ∈ [gm, gm+1))︸ ︷︷ ︸ =P (m) P (y = y′|m).\n(A9)\nFrom the equality (a) to (b), it is simply because of identifying the term in [·] of the equality (a) as the Kullback-Leibler divergence (KLD) between two probability mass functions of y. As the probability mass function P (m) and the KLD both are non-negative, we reach to the inequality at (c), where the equality holds if Pσ(y;φm) = P (y|m). By further noting that L is convex over {φm} and Pσ(y;φm) = P (y|m) nulls out its gradient over {φm}, we then reach to\nI(y;Q(λ)) + min {φm}\nL({gm, φm}) = 0. (A10)\nThe obtained equality then concludes our proof\nmax {gm} I(y;Q(λ)) = max {gm} [ − min {φm} L({gm, φm}) ] = − min {gm,φm} L({gm, φm})\n≡ min {gm,φm} L({gm, φm}). (A11)\nLastly, we note that L({gm, φm}) can reduce to a NLL loss (as P (y) in the log probability ratio is omittable), which is a common loss for calibrators. However, only through this equivalence proof and the MI maximization formulation, can we clearly identify the great importance of bin edges in preserving label information. So even though {gm, φm} are jointly optimized in the equivalent problem, only {gm} play the determinant role in maximizing the MI." 
}, { "heading": "A3.2 CONNECTION TO INFORMATION BOTTLENECK (IB)", "text": "IB (Tishby et al., 1999) is a generic information-theoretic framework for stochastic quantization design. Viewing binning as quantization, IB aims to find a balance between two conflicting goals: 1) maximizing the information rate, i.e., the mutual information between the label and the quantized logits I(y;Q(λ)); and 2) minimizing the compression rate, i.e., mutual information between the logits and the quantized logits I(λ;Q(λ)). It unifies them by minimizing\nmin p(m|λ)\n1 β I(λ;m = Q(λ))− I(y;m = Q(λ)), (A12)\nwhere m is the bin index assigned to λ and β is the weighting factor (with larger value focusing more on the information rate and smaller value on the compression rate). The compression rate is the bottleneck for maximizing the information rate. Note that IB optimizes the distribution p(m|λ), which describes the probability of λ being assigned to the bin with the indexm. Since it is not a deterministic assignment, IB offers a stochastic rather than deterministic quantizer. Our information maximization formulation is a special case of IB, i.e., β being infinitely large, as we care predominantly about how\nA4\nwell the label can be predicted from a compressed representation (quantized logits), in other words, making the compression rate as small as possible is not a request from the problem. For us, the only bottleneck is the number of bins usable for quantization. Furthermore, with β →∞, stochastic quantization degenerating to a deterministic one. If using stochastic binning for calibration, it outputs a weighted sum of all bin representatives, thereby being continuous and not ECE verifiable. Given that, we do not use it for calibration.\nAs the IB defines the best trade-off between the information rate and compression rate, we use it as the upper limit for assessing the optimality of I-Max in Fig. 3-b). By varying β, IB depicts the maximal achievable information rate for the given compression rate. For binning schemes (Eq. size, Eq. mass and I-Max), we vary the number of bins, and evaluate their achieved information and compression rates. As we can clearly observe from Fig. 3-b), I-Max can approach the upper limit defined by IB. Note that, the compression rate, though being measured in bits, is different to the number of bins used for the quantizer. As quantization is lossy, the compression rate defines the common information between the logits and quantized logits. The number of bins used for quantization imposes an upper limit on the information that can be preserved after quantization." }, { "heading": "A3.3 CONVERGENCE OF THE ITERATIVE METHOD", "text": "For convenience, we recall the update equations for {gm, φm} in Sec. 3.3 of the main paper here\ngm = log log [ 1+eφm 1+e φm−1 ] log [ 1+e −φm−1\n1+e−φm\n] \nφm = log { ∫ gm+1\ngm σ(λ)p(λ)dλ∫ gm+1 gm σ(−λ)p(λ)dλ\n} ≈ log { ∑ λn∈Sm σ(λn)∑ λn∈Sm σ(−λn) } ∀m. (A13) In the following, we show that the updates on {gm} and {φm} according to (A13) continuously decrease the loss L, i.e.,\nL({glm, φlm}) ≥ L({gl+1m , φlm}) ≥ L({gl+1m , φl+1m }). (A14)\nThe second inequality is based on the explained property of L. Namely, it is convex over {φm} and the minimum for any given {gm} is attained by Pσ(y;φm) = P (y|m). As φm is the log-probability ratio of Pσ(y;φm), we shall have\nφl+1m ← log P (y = 1|m) P (y = 0|m)\n(A15)\nwhere P (y = 1|m) in this case is induced by {gl+1m } and P (y|λ) = σ[(2y − 1)λ]. 
Plugging {g^{l+1}_m} and P(y|λ) = σ[(2y − 1)λ] into (A7), the resulting P(y = y′|m) at iteration l + 1 yields the update equation of φ_m as given in (A13).
To prove the first inequality, we start by showing that {g^{l+1}_m} is a local minimum of L({g_m, φ^l_m}). The update equation on {g_m} is the outcome of solving the stationary point equation of L({g_m, φ^l_m}) over {g_m} under the condition p(λ = g_m) > 0 for any m:
∂L({g_m, φ^l_m})/∂g_m = p(λ = g_m) Σ_{y′∈{0,1}} P(y = y′|λ = g_m) log [ P_σ(y = y′; φ^l_m) / P_σ(y = y′; φ^l_{m−1}) ] != 0 ∀m. (A16)
Being a stationary point is a necessary condition for a local extremum when the function's first-order derivative exists at that point, i.e., the first-derivative test. To further show that the local extremum is actually a local minimum, we resort to the second-derivative test, i.e., checking whether the Hessian matrix of L({g_m, φ^l_m}) is positive definite at the stationary point {g^{l+1}_m}. Due to φ_m > φ_{m−1}, which follows from the monotonically increasing sigmoid function in its update equation, we have
∂²L({g_m, φ^l_m})/∂g_m ∂g_{m′} |_{g_m = g^{l+1}_m ∀m} = 0 for m′ ≠ m, and ∂²L({g_m, φ^l_m})/∂g_m² |_{g_m = g^{l+1}_m ∀m} > 0, (A17)
implying that all eigenvalues of the Hessian matrix are positive (equivalently, that the Hessian is positive definite). Therefore, {g^{l+1}_m} as the stationary point of L({g_m, φ^l_m}) is a local minimum.
It is important to note that, from the stationary point equation (A16), {g^{l+1}_m} as a local minimum is unique among {g_m} with p(λ = g_m) > 0 for any m. In other words, the first inequality holds under the condition p(λ = g^l_m) > 0 for any m. Binning is a lossy data processing step. In order to maximally preserve the label information, it is natural to exploit all bins in the optimization, not wasting any single bin in an area without mass, i.e., p(λ = g_m) = 0. Having said that, it is reasonable to constrain {g_m} with p(λ = g_m) > 0 ∀m over the iterations, thereby concluding that the iterative method will converge to a local minimum based on the two inequalities (A14).
A3.4 INITIALIZATION OF THE ITERATIVE METHOD
We propose to initialize the iterative method by modifying the k-means++ algorithm (Arthur & Vassilvitskii, 2007), which was developed to initialize the cluster centers for k-means clustering algorithms. It is based on the following identification:
L({g_m, φ_m}) + I(y;λ) = Σ_{m=0}^{M−1} ∫_{g_m}^{g_{m+1}} p(λ) KLD[P(y = y′|λ) ‖ P_σ(y = y′; φ_m)] dλ (A18)
≥ ∫_{−∞}^{∞} p(λ) min_m KLD[P(y = y′|λ) ‖ P_σ(y = y′; φ_m)] dλ ≈ (1/|S|) Σ_{λ_n∈S} min_m KLD[P(y = y′|λ_n) ‖ P_σ(y = y′; φ_m)]. (A19)
As I(y;λ) is a constant with respect to {g_m, φ_m}, minimizing L is equivalent to minimizing the term on the RHS of (A18). The last approximation is reached by turning the binning problem into a clustering problem, i.e., grouping the logit samples in the training set S according to the KLD measure, where {φ_m} are effectively the centers of the clusters. The k-means++ algorithm (Arthur & Vassilvitskii, 2007) initializes the cluster centers based on the Euclidean distance. In our case, we alternatively use the JSD as the distance measure to initialize {φ_m}; compared with the KLD, the JSD is symmetric and bounded." }, { "heading": "A3.5 A REMARK ON THE ITERATIVE METHOD DERIVATION", "text": "The closed-form update on {g_m} in (A13) is based on the sigmoid-model approximation, which has been validated through our empirical experiments. It is expected to work with properly trained classifiers that do not overly overfit to the cross-entropy loss, e.g., by using data augmentation and other regularization techniques at training. 
Nevertheless, even in corner cases where classifiers are poorly trained, the iterative method can still be operated without the sigmoid-model approximation. Namely, as shown in Fig. 2 of the main paper, we can resort to KDE for an empirical estimation of the ground truth distribution p(λ|y). Using the KDEs, we can compute the gradient of L over {g_m} and perform an iterative gradient-based update on {g_m}, replacing the closed-form update. Essentially, the sigmoid-model approximation is only necessary to find the stationary points of the gradient equations, speeding up the convergence of the method. If attempting to keep the closed-form update on {g_m}, an alternative solution could be to use the KDEs for adjusting the sigmoid model, e.g., p(y|λ) ≈ σ[(2y − 1)(aλ + ab)], where a and b are chosen to match the KDE-based approximation to p(y|λ). After setting a and b, they are used as a scaling and bias term in the original closed-form update equations:
g_m = (1/a) log { log[(1 + e^{φ_m})/(1 + e^{φ_{m−1}})] / log[(1 + e^{−φ_{m−1}})/(1 + e^{−φ_m})] } − b, φ_m = log { ∫_{g_m}^{g_{m+1}} σ(aλ + ab)p(λ)dλ / ∫_{g_m}^{g_{m+1}} σ(−aλ − ab)p(λ)dλ } ≈ log { Σ_{λ_n∈S_m} σ(aλ_n + ab) / Σ_{λ_n∈S_m} σ(−aλ_n − ab) } ∀m. (A20)" }, { "heading": "A3.6 COMPLEXITY AND MEMORY ANALYSIS", "text": "To ease the reproducibility of I-Max, we provide the pseudocode in Algorithm 1. Based on it, we further analyze the complexity and memory cost of I-Max at training and test time.
Algorithm 1: I-Max Binning Calibration
Input: number of bins M, logits {λ_n}_{n=1}^N and binary labels {y_n}_{n=1}^N
Result: bin edges {g_m}_{m=0}^M (g_0 = −∞ and g_M = ∞) and bin representatives {φ_m}_{m=0}^{M−1}
Initialization: {φ_m} ← Kmeans++({λ_n}_{n=1}^N, M) (see A3.4)
for iteration = 1, 2, . . . , 200 do
  for m = 1, 2, . . . , M − 1 do
    g_m ← log { log[(1 + e^{φ_m})/(1 + e^{φ_{m−1}})] / log[(1 + e^{−φ_{m−1}})/(1 + e^{−φ_m})] }
  end
  for m = 0, 1, . . . , M − 1 do
    S_m := {λ_n} ∩ [g_m, g_{m+1})
    φ_m ← log { Σ_{λ_n∈S_m} σ(λ_n) / Σ_{λ_n∈S_m} σ(−λ_n) }
  end
end
We simplify this complexity analysis as our algorithm runs completely offline and is purely numpy-based. Although the underlying (numpy) operations performed at each step of the algorithm differ, we treat multiplication, division, logarithm and exponential functions as each having the same unit cost, and we ignore the costs of the logic operations and add/subtract operators. The initialization has a complexity of O(NM) for the one-dimensional logits. We exploit the sklearn implementation of the k-means++ initialization originally used for k-means clustering, but replace the MSE with the JSD as the distance measure. Following Algorithm 1, we arrive at an overall complexity of O(N·M + I·(10M + 2M)), where I is the number of iterations. Our Python code runs Algorithm 1 within seconds for classifiers as large as ImageNet-scale models and is implemented purely in numpy. The largest storage and memory consumption is for keeping the N logits used during the I-Max learning phase.
At test time, there are negligible memory and storage constraints, as only 2M − 1 floats need to be saved for the M bin representatives {φ_m}_{m=0}^{M−1} and the M − 1 bin edges {g_m}_{m=1}^{M−1}. The complexity at test time amounts to mere logic operations for computing the bin assignment of each logit, which can be done using numpy's efficient digitize function. I-Max thus offers a real-time post-hoc calibrator which adds almost zero complexity and memory cost relative to the computations of the original classifier.
We will release our code soon." }, { "heading": "A4 POST-HOC ANALYSIS ON THE EXPERIMENT RESULTS IN SEC. 4.1", "text": "In Tab. 1 of Sec. 4.1, we compared three different binning schemes by measuring their ACCs and ECEs. 
" }, { "heading": "A4 POST-HOC ANALYSIS ON THE EXPERIMENT RESULTS IN SEC. 4.1", "text": "In Tab. 1 of Sec. 4.1, we compared three different binning schemes by measuring their ACCs and ECEs. The observation on their accuracy performance is aligned with the mutual information maximization viewpoint introduced in Sec. 3.3 and Fig. 2. Here, we re-present Fig. 2 and provide an alternative explanation to strengthen our understanding of how the location of bin edges affects the accuracy, e.g., why Eq. Size binning performed acceptably at the top-1 ACC, but failed at the top-5 ACC. Specifically, Fig. A3 shows the histograms of raw logits that are grouped based on their ranks instead of their labels as in Fig. 2. As expected, the logits with low ranks (i.e., those below the top-5 in Fig. A3) are small and thus occupy the left hand side of the plot, whereas the top-1 logits are mostly located on the right hand side. Besides sorting logits according to their ranks, we additionally estimate the density of the logits associated with the ground truth (GT) classes, i.e., GT in Fig. A3. With a properly trained classifier, the histogram of top-1 logits shall largely overlap with the density curve GT, i.e., the top-1 prediction being correct in most cases.
From the bin edge locations of Eq. Mass binning, it attempts to attain small quantization errors for logits of low ranks rather than the top-5. This will certainly degrade the accuracy performance after binning. On the contrary, Eq. Size binning aims at a small quantization error for the top-1 logits, but ignores the top-5 ones. As a result, we observed its poor top-5 ACCs. I-Max binning nicely distributes its bin edges in the area where the GT logits are likely to be located, and the bin width becomes smaller in the area where the top-5 logits are close by (i.e., the overlap region between the red and blue histograms). Note that any logit larger than zero must be top-1 ranked, as there can exist at most one class with prediction probability larger than 0.5. Given that, the bins located above zero no longer serve to maintain the ranking order, but rather to reduce the precision loss of the top-1 prediction probability after binning.
[Figure A3: histogram of logit densities with the bin edges of I-Max binning, Equal size binning and Equal mass binning overlaid; legend: GT, Top 1, Top 5 (except Top 1), Rest; x-axis: Logit (λ), y-axis: Density.]
Figure A3: Histogram of CIFAR100 (WRN) logits in S constructed from 1k calibration samples, using the same setting as Fig. 2 in the main paper. Instead of categorizing the logits according to their two-class label yk ∈ {0, 1} as in Fig. 2, here we sort them according to their ranks given by the CIFAR100 WRN classifier. As a baseline, we also plot the KDE of logits associated to the ground truth classes, i.e., GT.
Table A1: Comparison of sCW binning methods in the case of ImageNet - InceptionResNetV2. As sCW binning creates ties at top predictions, the ACCs initially reported in Tab. 1 of Sec. 4.1 use the class index as the secondary sorting criterion. Here, we add Acc*top1 and Acc*top5, which are attained by using the raw logits as the secondary sorting criterion. As the CW ECEs are not affected by this change, here we only report the new top1ECE*.
Binn. | Acctop1 ↑ | Acc*top1 ↑ | Acctop5 ↑ | Acc*top5 ↑ | top1ECE ↓ | top1ECE* ↓ | NLL ↓
Baseline | 80.33 | - | 95.10 | - | 0.0357 | - | 0.8406
Eq. Mass | 5.02 | 80.33 | 26.75 | 95.10 | 0.0353 | 0.7884 | 3.5272
Eq. Size | 80.14 | 80.21 | 88.99 | 95.10 | 0.0279 | 0.0277 | 1.2671
I-Max | 80.20 | 80.33 | 94.86 | 95.10 | 0.0200 | 0.0202 | 0.7860
The second part of our post-hoc analysis is on the sCW binning strategy.
When using the same binning scheme for all per-class calibrations, the chance of creating ties in top-k predictions is much higher than with CW binning, e.g., more than one class is top-1 ranked according to the calibrated prediction probabilities. Our reported ACCs in the main paper are attained by simply returning the first found class, i.e., using the class index as the secondary sorting criterion. This is certainly a suboptimal solution. Here, we investigate how the ties affect the ACCs of sCW binning. To this end, we use the raw logits (before binning) as the secondary sorting criterion. The resulting ACC*top1 and ACC*top5 are shown in Tab. A1. Interestingly, such a simple change reduces the accuracy loss of Eq. Mass and I-Max binning to zero, indicating that they can preserve the top-5 ranking order of the raw logits, though not in a strictly monotonic sense, i.e., some > are replaced by =. As opposed to I-Max binning, Eq. Mass binning has a poor calibration performance, i.e., very high NLL and ECE. This is because it trivially ranks many classes as top-1, but each of them receives the same very small confidence score. Given that, even though the accuracy loss is no longer an issue, it is still not a good solution for multi-class calibration. For Eq. Size binning, resolving ties only helps restore the baseline top-5 but not the top-1 ACC. Its poor bin representative setting, due to unreliable empirical frequency estimation over too narrow bins, can result in a permutation among the top-5 predictions.
Concluding from the above, our post-hoc analysis confirms that I-Max binning outperforms the other two binning schemes at mitigating the accuracy loss and at multi-class calibration. In particular, there exists a simple solution to close the accuracy gap to the baseline while still retaining the desirable calibration gains.
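Concretely, the raw-logit tie-breaking used for Acc* amounts to a per-sample lexicographic sort. A minimal sketch, with our own naming, assuming calibrated probabilities and raw logits of shape [num_samples, num_classes]:

```python
import numpy as np

def topk_with_logit_tiebreak(q_binned, raw_logits, k=5):
    # Rank classes primarily by the binned probability; break ties between
    # classes mapped to the same bin using the raw (pre-binning) logit.
    topk = []
    for q, lam in zip(q_binned, raw_logits):
        order = sorted(range(len(q)), key=lambda c: (-q[c], -lam[c]))
        topk.append(order[:k])
    return np.asarray(topk)
```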
" }, { "heading": "A5 ABLATION ON THE NUMBER OF BINS AND CALIBRATION SET SIZE", "text": "In Tab. 1 of Sec. 4.1, sCW I-Max binning is the top performing scheme on the ACC, ECE and NLL measures. In this part, we further investigate how the number of bins and the calibration set size influence its performance. Tab. A2 shows that, in order to benefit from more bins, we shall accordingly increase the number of calibration samples. More bins help reduce the quantization loss, but increase the empirical frequency estimation error when setting the bin representatives. Given that, we observe reduced ACCs and increased ECEs when using 50 bins with only 1k calibration samples. By increasing the calibration set size to 5k, we start seeing the benefits of having more bins to reduce the quantization error for better ACCs. Next, we further exploit a scaling method, i.e., GP (Wenger et al., 2020), for improving the sample efficiency of binning when setting the bin representatives. As a result, the combination is particularly beneficial to improve the ACCs and the top-1 ECE. Overall, more bins are beneficial to ACCs, while ECEs favor fewer bins.
Table A2: Ablation on the number of bins and calibration samples for sCW I-Max binning, where the basic setting is identical to Tab. 1 in Sec. 4.1 of the main paper. (Left block: 1k calibration samples; right block: 5k calibration samples.)
Binn. | Bins | Acctop1 ↑ | Acctop5 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | Acctop1 ↑ | Acctop5 ↑ | CWECE (1/K) ↓ | top1ECE ↓
Baseline | - | 80.33 | 95.10 | 0.0486 | 0.0357 | 80.33 | 95.10 | 0.0486 | 0.0357
GP | - | 80.33 | 95.11 | 0.0485 | 0.0186 | 80.33 | 95.11 | 0.0445 | 0.0177
I-Max | 10 | 80.09 | 94.59 | 0.0316 | 0.0156 | 80.14 | 94.59 | 0.0330 | 0.0107
I-Max | 15 | 80.20 | 94.86 | 0.0302 | 0.0200 | 80.21 | 94.90 | 0.0257 | 0.0107
I-Max | 20 | 80.10 | 94.94 | 0.0266 | 0.0234 | 80.25 | 94.98 | 0.0220 | 0.0133
I-Max | 30 | 80.15 | 94.99 | 0.0343 | 0.0266 | 80.25 | 95.02 | 0.0310 | 0.0150
I-Max | 40 | 80.11 | 95.05 | 0.0365 | 0.0289 | 80.24 | 95.08 | 0.0374 | 0.0171
I-Max | 50 | 80.21 | 94.95 | 0.0411 | 0.0320 | 80.23 | 95.06 | 0.0378 | 0.0219
I-Max w. GP | 10 | 80.09 | 94.59 | 0.0396 | 0.0122 | 80.14 | 94.59 | 0.0330 | 0.0072
I-Max w. GP | 15 | 80.20 | 94.87 | 0.0300 | 0.0121 | 80.21 | 94.88 | 0.0256 | 0.0080
I-Max w. GP | 20 | 80.23 | 94.95 | 0.0370 | 0.0133 | 80.25 | 95.00 | 0.0270 | 0.0091
I-Max w. GP | 30 | 80.26 | 95.04 | 0.0383 | 0.0141 | 80.27 | 95.02 | 0.0389 | 0.0097
I-Max w. GP | 40 | 80.27 | 95.11 | 0.0424 | 0.0145 | 80.26 | 95.08 | 0.0402 | 0.0108
I-Max w. GP | 50 | 80.30 | 95.08 | 0.0427 | 0.0153 | 80.28 | 95.08 | 0.0405 | 0.0114
" }, { "heading": "A6 EMPIRICAL ECE ESTIMATION OF SCALING METHODS UNDER MULTIPLE EVALUATION SCHEMES", "text": "As mentioned in the main paper, scaling methods suffer from not being able to provide verifiable ECEs, see Fig. 1. Here, we discuss alternatives to estimate their ECEs. The current literature can be split into two types of ECE evaluation: histogram density estimation (HDE) and kernel density estimation (KDE)." }, { "heading": "A6.1 HDE-BASED ECE EVALUATION", "text": "HDE bins the prediction probabilities (logits) for density modeling. The binning scheme has different variants, where changing the bin edges can give varying measures of the ECE. Two bin edge schemes have been discussed in the literature (Eq. size and Eq. mass), and a new scheme was introduced (I-Max). Alternatively, we also evaluate a binning scheme which is based on KMeans clustering to determine the bin edges." }, { "heading": "A6.2 KDE-BASED ECE EVALUATION", "text": "Recent work (Zhang et al., 2020) presented an alternative ECE evaluation scheme which exploits KDEs to estimate the distribution of the prediction probabilities {qk} from the test set samples. Using the code provided by Zhang et al. (2020), we observe that the KDE with the setting in their paper can have a sub-optimal fit in the probability space. This can be observed from Fig. A4a and Fig. A4c: the fit is good for ImageNet/InceptionResNetV2, though when the distribution is significantly skewed to the right (as in the case of CIFAR100/WRN) the mismatch becomes large. We expect that the case of CIFAR100/WRN is much more common in modern DNNs, due to their high capacity and proneness to overfitting.
Equivalently, we can learn the distribution in its log space via the bijective transformation λ = log q − log(1 − q) and q = σ(λ). As we can observe from Fig. A4b and Fig. A4d, the KDE fit for both models is consistently good.
[Figure A4: Distribution of the top-1 predictions and their log-space counterparts, i.e., λ = log q − log(1 − q). Panels: (a) CIFAR100/WRN Top-1 (Prob. Space); (b) CIFAR100/WRN Top-1 (Log Space); (c) ImageNet/InceptionResNetV2 Top-1 (Prob. Space); (d) ImageNet/InceptionResNetV2 Top-1 (Log Space). Each panel overlays the KDE density pkde with the histogram-based density pHB.]
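To illustrate the log-space evaluation, here is a small sketch using scipy's Gaussian KDE as a stand-in for the estimator of Zhang et al. (2020); the Bayes-rule decomposition into a "correct" and an overall density, as well as all names, are our simplifying assumptions rather than their exact implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_ece_log_space(confidences, correct, num_grid=512):
    # Fit the densities on lambda = log q - log(1 - q) instead of on q directly.
    q = np.clip(confidences, 1e-6, 1 - 1e-6)
    lam = np.log(q) - np.log1p(-q)
    p_all = gaussian_kde(lam)
    p_hit = gaussian_kde(lam[correct.astype(bool)])
    grid = np.linspace(lam.min(), lam.max(), num_grid)
    dens = p_all(grid)
    # P(correct | lambda) via Bayes rule on the two KDEs.
    acc = correct.mean()
    p_correct = np.clip(acc * p_hit(grid) / np.maximum(dens, 1e-12), 0.0, 1.0)
    conf = 1.0 / (1.0 + np.exp(-grid))   # map the grid back to probabilities
    w = dens / dens.sum()
    return float(np.sum(w * np.abs(conf - p_correct)))
```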
Zhang et al. (2020) empirically validated their KDE in a toy example, where the ground truth ECE can be analytically computed. By analogy, we reproduce the experiment and further compare it with the log-space KDE evaluation. Using the same settings as in Zhang et al. (2020), we assess the ECE evaluation error of the KDE, i.e., |ECE_gt − ECE_kde|, in both the log and the probability space, achieving prob. 0.0020 vs. log 0.0017 for the toy example setting β0 = 0.5, β1 = −1.5. For an even less calibrated setting, β0 = 0.2, β1 = −1.9, we obtain prob. 0.0029 vs. log 0.0020. So the log-space KDE-based ECE evaluation (kdeECE_log) has a lower estimation error than its probability-space counterpart.
" }, { "heading": "A6.3 ALTERNATIVE ECE EVALUATION SCHEMES", "text": "Concluding from the above, Tab. A3 shows the ECE estimates attained by HDEs (from four different bin setting schemes) and KDE (from Zhang et al. (2020), but in the log space). As we can see, the obtained results are evaluation-scheme dependent. On the contrary, I-Max binning with and without GP is not affected, and, more importantly, its ECEs are better than those of the scaling methods regardless of the evaluation scheme.
Table A3: ECEs of scaling methods under various evaluation schemes for ImageNet InceptionResNetV2. Overall, we consider five evaluation schemes, namely (1) dECE: equal size binning; (2) mECE: equal mass binning; (3) kECE: MSE-based KMeans clustering; (4) iECE: I-Max binning; (5) kdeECE: KDE. The HDE-based schemes, i.e., (1)-(4), use 10^2 bins. Note that the ECEs of I-Max binning (as a calibrator rather than an evaluation scheme) are agnostic to the evaluation scheme. Furthermore, BBQ suffers from severe accuracy degradation.
Calibrator | ACCtop1 | CWdECE (1/K) ↓ | CWmECE (1/K) ↓ | CWkECE (1/K) ↓ | CWiECE (1/K) ↓ | CWkdeECE (1/K) ↓ | Mean ↓
Baseline | 80.33 | 0.0486 ± 0.0003 | 0.0459 ± 0.0004 | 0.0484 ± 0.0004 | 0.0521 ± 0.0004 | 0.0749 ± 0.0014 | 0.0540
25k Calibration Samples
BBQ | 53.89 | 0.0287 ± 0.0009 | 0.0376 ± 0.0014 | 0.0372 ± 0.0014 | 0.0316 ± 0.0008 | 0.0412 ± 0.0010 | 0.0353
Beta | 80.47 | 0.0706 ± 0.0003 | 0.0723 ± 0.0005 | 0.0742 ± 0.0005 | 0.0755 ± 0.0004 | 0.0828 ± 0.0003 | 0.0751
Isotonic Reg. | 80.08 | 0.0644 ± 0.0015 | 0.0646 ± 0.0015 | 0.0652 ± 0.0016 | 0.0655 ± 0.0015 | 0.0704 ± 0.0014 | 0.0660
Platt | 80.48 | 0.0597 ± 0.0007 | 0.0593 ± 0.0008 | 0.0613 ± 0.0008 | 0.0634 ± 0.0008 | 0.1372 ± 0.0028 | 0.0762
Vec Scal. w. L2 reg. | 80.53 | 0.0494 ± 0.0002 | 0.0472 ± 0.0004 | 0.0498 ± 0.0003 | 0.0531 ± 0.0003 | 0.0805 ± 0.0010 | 0.0560
Mtx Scal. w. L2 reg. | 80.78 | 0.0508 ± 0.0003 | 0.0488 ± 0.0004 | 0.0512 ± 0.0005 | 0.0544 ± 0.0004 | 0.0898 ± 0.0011 | 0.0590
1k Calibration Samples
TS | 80.33 | 0.0559 ± 0.0015 | 0.0548 ± 0.0018 | 0.0573 ± 0.0017 | 0.0598 ± 0.0015 | 0.1003 ± 0.0053 | 0.0656
GP | 80.33 | 0.0485 ± 0.0037 | 0.0450 ± 0.0040 | 0.0475 ± 0.0039 | 0.0520 ± 0.0038 | 0.0580 ± 0.0052 | 0.0502
I-Max | 80.20 | 0.0302 ± 0.0041 (agnostic to the evaluation scheme)
I-Max w. GP | 80.20 | 0.0300 ± 0.0041 (agnostic to the evaluation scheme)
Calibrator | ACCtop1 | top1dECE ↓ | top1mECE ↓ | top1kECE ↓ | top1iECE ↓ | top1kdeECE ↓ | Mean ↓
Baseline | 80.33 | 0.0357 ± 0.0010 | 0.0345 ± 0.0010 | 0.0348 ± 0.0012 | 0.0352 ± 0.0016 | 0.0480 ± 0.0016 | 0.0376
25k Calibration Samples
BBQ | 53.89 | 0.2689 ± 0.0033 | 0.2690 ± 0.0034 | 0.2690 ± 0.0034 | 0.2689 ± 0.0032 | 0.2756 ± 0.0145 | 0.2703
Beta | 80.47 | 0.0346 ± 0.0022 | 0.0360 ± 0.0017 | 0.0360 ± 0.0022 | 0.0357 ± 0.0019 | 0.0292 ± 0.0023 | 0.0343
Isotonic Reg. | 80.08 | 0.0468 ± 0.0020 | 0.0434 ± 0.0019 | 0.0436 ± 0.0020 | 0.0468 ± 0.0015 | 0.0437 ± 0.0057 | 0.0449
Platt | 80.48 | 0.0775 ± 0.0015 | 0.0772 ± 0.0015 | 0.0771 ± 0.0016 | 0.0773 ± 0.0014 | 0.0772 ± 0.0018 | 0.0773
Vec Scal. w. L2 reg. | 80.53 | 0.0300 ± 0.0010 | 0.0298 ± 0.0012 | 0.0300 ± 0.0016 | 0.0303 ± 0.0011 | 0.0365 ± 0.0023 | 0.0313
Mtx Scal. w. L2 reg. | 80.78 | 0.0282 ± 0.0014 | 0.0287 ± 0.0011 | 0.0286 ± 0.0014 | 0.0289 ± 0.0014 | 0.0324 ± 0.0019 | 0.0293
1k Calibration Samples
TS | 80.33 | 0.0439 ± 0.0022 | 0.0452 ± 0.0022 | 0.0454 ± 0.0020 | 0.0443 ± 0.0020 | 0.0679 ± 0.0024 | 0.0493
GP | 80.33 | 0.0186 ± 0.0034 | 0.0182 ± 0.0019 | 0.0186 ± 0.0026 | 0.0190 ± 0.0022 | 0.0164 ± 0.0029 | 0.0182
I-Max | 80.20 | 0.0200 ± 0.0033 (agnostic to the evaluation scheme)
I-Max w. GP | 80.20 | 0.0121 ± 0.0048 (agnostic to the evaluation scheme)
" }, { "heading": "A7 POST-HOC VS. DURING-TRAINING CALIBRATION", "text": "To calibrate a DNN-based classifier, there exist two groups of methods. One is to improve the calibration during training, whereas the other is post-hoc calibration. In this paper, we focus on post-hoc calibration because it is simple and does not require re-training of deployed models. In the following, we briefly discuss the advantages and disadvantages of post-hoc and during-training calibration.
In general, post-hoc and during-training calibration can be viewed as two orthogonal ways to improve calibration, as they can be easily combined. As an example, we compare/combine post-hoc calibration methods against/with during-training regularization, which directly modifies the training objective to encourage less confident predictions through an entropy regularization term (Entr. Reg.) (Pereyra et al., 2017). Additionally, we adopt Mixup (Zhang et al., 2018), a data augmentation shown to improve calibration (Thulasidasan et al., 2019). We re-train the CIFAR100 WRN classifier using Entr. Reg. and Mixup, respectively. It can be seen in Tab. A4 that, compared to the Baseline model (without the during-training calibration of Entr. Reg. or Mixup), Entr. Reg. improves the top-1 ECE from 0.06880 to 0.04806. Further applying post-hoc calibration, I-Max and I-Max w. GP reduce the 0.04806 to 0.02202 and 0.01562, respectively. This indicates that their combination is beneficial. In this particular case, we also observed that, without Entr. Reg., directly post-hoc calibrating the Baseline model appears to be more effective, e.g., a top-1 ECE of 0.01428 and a class-wise ECE of 0.04574. Switching to Mixup, the best top-1 ECE of 0.01364 is attained by combining Mixup with post-hoc I-Max w. GP, while I-Max alone without during-training calibration is still the best at the class-wise ECE.
Table A4: ECEs of post-hoc and during-training calibration. A WRN CIFAR100 classifier is trained in three modes: 1) no during-training calibration; 2) using entropy regularization (Pereyra et al., 2017); and 3) using Mixup data augmentation (Zhang et al., 2018; Thulasidasan et al., 2019). Taking each of the trained models as one baseline, we further perform post-hoc calibration. Note that the best numbers per training mode are marked in bold and the underlined scores are the best across the three models. (Columns are grouped per training mode: No Train Calibration, Entr. Reg., Mixup; each group reports CWECEcls-prior ↓ and top1ECE ↓.)
Post-Hoc Cal. | No Train Calibration: CWECEcls-prior ↓ | top1ECE ↓ | Entr. Reg.: CWECEcls-prior ↓ | top1ECE ↓ | Mixup: CWECEcls-prior ↓ | top1ECE ↓
Baseline | 0.10434 | 0.06880 | 0.08860 | 0.04806 | 0.10518 | 0.04972
Mtx Scal. w. L2 | 0.10308 | 0.06560 | 0.08980 | 0.04650 | 0.10374 | 0.04852
ETS-MnM | 0.09488 | 0.04820 | 0.09050 | 0.03900 | 0.09740 | 0.03676
TS | 0.09436 | 0.05914 | 0.09376 | 0.05438 | 0.10282 | 0.04536
GP | 0.10836 | 0.03360 | 0.10520 | 0.03728 | 0.10600 | 0.03514
I-Max | 0.04574 | 0.01834 | 0.04712 | 0.02202 | 0.05534 | 0.02060
I-Max w. TS | 0.05706 | 0.04342 | 0.04766 | 0.04478 | 0.06156 | 0.03264
I-Max w. GP | 0.06130 | 0.01428 | 0.05114 | 0.01562 | 0.05992 | 0.01364
While post-hoc calibrators are simple and effective at calibration, during-training techniques may deliver more than improved calibration, e.g., better generalization performance and robustness against adversarial examples. Therefore, instead of choosing either a post-hoc or a during-training technique, we recommend their combination. While during-training techniques improve the generalization and robustness of the Baseline classifier, post-hoc calibration can further boost its calibration at a low computational cost.
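For reference, a minimal sketch of the Mixup augmentation adopted above (Zhang et al., 2018), in its usual per-batch form; the hyperparameter value and names are generic placeholders rather than our exact training configuration.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=np.random.default_rng()):
    # Convex combination of a batch with a random permutation of itself;
    # the mixing weight is drawn from a Beta(alpha, alpha) distribution.
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```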
" }, { "heading": "A8 TRAINING DETAILS", "text": "" }, { "heading": "A8.1 PRE-TRAINED CLASSIFICATION NETWORKS", "text": "We evaluate post-hoc calibration methods on four benchmark datasets, i.e., ImageNet (Deng et al., 2009), CIFAR-100 (Krizhevsky, 2009), CIFAR-10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011), and across three modern DNNs for each dataset, i.e., InceptionResNetV2 (Szegedy et al., 2017), DenseNet161 (Huang et al., 2017) and ResNet152 (He et al., 2016) for ImageNet, and Wide ResNet (WRN) (Zagoruyko & Komodakis, 2016) for the two CIFAR datasets and SVHN. Additionally, we train DenseNet-BC (L = 190, k = 40) (Huang et al., 2017) and ResNeXt8x64 (Xie et al., 2017) for the two CIFAR datasets.
The ImageNet and CIFAR models are publicly available pre-trained networks, and details are reported at the respective websites, i.e., ImageNet classifiers: https://github.com/Cadene/pretrained-models.pytorch and CIFAR classifiers: https://github.com/bearpaw/pytorch-classification." }, { "heading": "A8.2 TRAINING SCALING METHODS", "text": "The hyper-parameters were decided based on the respective original publications of the scaling methods, with some exceptions. We found that the following parameters were the best for all scaling methods. All scaling methods use the Adam optimizer, with batch size 256 for CIFAR and 4096 for ImageNet. The learning rate was set to 10^{-3} for temperature scaling (Guo et al., 2017) and Platt scaling (Platt, 1999), 10^{-4} for vector scaling (Guo et al., 2017) and 10^{-5} for matrix scaling (Guo et al., 2017). Matrix scaling was further regularized as suggested by Kull et al. (2019), with an L2 loss on the bias vector and the off-diagonal elements of the weighting matrix. The hyper-parameters of BBQ (Naeini et al., 2015), isotonic regression (Zadrozny & Elkan, 2002) and Beta calibration (Kull et al., 2017) were taken directly from Wenger et al. (2020)." }, { "heading": "A8.3 TRAINING I-MAX BINNING", "text": "The I-Max bin optimization started from the k-means++ initialization, which uses the JSD instead of the Euclidean metric as the distance measure, see Sec. A3.4. Then, we iteratively and alternately updated {gm} and {φm} according to (5) for 200 iterations. With the attained bin edges {gm}, we set the bin representatives {rm} based on the empirical frequency of class 1. If a scaling method is combined with binning, an alternative setting for {rm} is to take the averaged prediction probabilities based on the scaled logits of the samples in each bin, e.g., in Tab. 2 in Sec. 4.2. Note that, for CW binning in Tab. 1, the number of samples from the minority class is too small, i.e., 25k/1k = 25. We only have about 25/15 ≈ 2 samples per bin, which is too few for empirical frequency estimates. Alternatively, we set {rm} based on the raw prediction probabilities. For ImageNet and CIFAR-10/100, which have test sets with uniform class priors, the sCW setting shares one binning scheme among all classes. Alternatively, for the imbalanced multi-class SVHN setting, we share binning among classes with similar class priors, and thus use the following class (i.e., digit) groupings: {0−1}, {2−4}, {5−9}.
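The two ways of setting the bin representatives {rm} described above can be written compactly. A sketch under our own naming conventions, where scaled_probs is only supplied in the variant that combines binning with a scaling method:

```python
import numpy as np

def set_bin_representatives(logits, labels, inner_edges, scaled_probs=None):
    # Option 1: empirical frequency of class 1 within each bin.
    # Option 2 (binning combined with scaling): average the scaled prediction
    # probabilities of the samples that fall into each bin.
    bin_idx = np.digitize(logits, inner_edges)
    reps = np.zeros(len(inner_edges) + 1)
    for m in range(len(reps)):
        mask = bin_idx == m
        if mask.any():
            reps[m] = (scaled_probs[mask].mean() if scaled_probs is not None
                       else labels[mask].mean())
    return reps
```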
" }, { "heading": "A9 EXTEND TAB. 1 FOR MORE DATASETS AND MODELS.", "text": "Tab. 1 in Sec. 4.1 of the main paper is replicated across datasets and models, where the basic setting remains the same. Specifically, three different ImageNet models can be found in Tab. A5, Tab. A6 and Tab. A7. Three models for CIFAR100 can be found in Tab. A8, Tab. A9 and Tab. A10. Similarly, CIFAR10 models can be found in Tab. A11, Tab. A12 and Tab. A13. The accuracy degradation of Eq. Mass decreases as the dataset has fewer classes, e.g., CIFAR10. This is a result of a higher class prior, where the one-vs-rest conversion becomes less critical for CIFAR10 than for ImageNet. Nevertheless, its accuracy losses are still much larger than those of the other binning schemes, i.e., Eq. Size and I-Max binning. Therefore, its calibration performance is not considered for comparison. Overall, the observations of Tab. A5-A13 are similar to Tab. 1, showing the stable performance gains of I-Max binning across datasets and models.
Table A5: Tab. 1 Extension: ImageNet - InceptionResNetV2
Binn. | sCW(?) | size | Acctop1 ↑ | Acctop5 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 80.33 ± 0.15 | 95.10 ± 0.15 | 0.0486 ± 0.0003 | 0.0357 ± 0.0009 | 0.8406 ± 0.0095
Eq. Mass | ✗ | 25k | 7.78 ± 0.15 | 27.92 ± 0.71 | 0.0016 ± 0.0001 | 0.0606 ± 0.0013 | 3.5960 ± 0.0137
Eq. Mass | ✓ | 1k | 5.02 ± 0.13 | 26.75 ± 0.37 | 0.0022 ± 0.0001 | 0.0353 ± 0.0012 | 3.5272 ± 0.0142
Eq. Size | ✗ | 25k | 78.52 ± 0.15 | 89.06 ± 0.13 | 0.1344 ± 0.0005 | 0.0547 ± 0.0017 | 1.5159 ± 0.0136
Eq. Size | ✓ | 1k | 80.14 ± 0.23 | 88.99 ± 0.12 | 0.1525 ± 0.0023 | 0.0279 ± 0.0043 | 1.2671 ± 0.0130
I-Max | ✗ | 25k | 80.27 ± 0.17 | 95.01 ± 0.19 | 0.0342 ± 0.0006 | 0.0329 ± 0.0010 | 0.8499 ± 0.0105
I-Max | ✓ | 1k | 80.20 ± 0.18 | 94.86 ± 0.17 | 0.0302 ± 0.0041 | 0.0200 ± 0.0033 | 0.7860 ± 0.0208
Table A6: Tab. 1 Extension: ImageNet - DenseNet
Binn. | sCW(?) | size | Acctop1 ↑ | Acctop5 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 77.21 ± 0.12 | 93.51 ± 0.14 | 0.0502 ± 0.0006 | 0.0571 ± 0.0014 | 0.9418 ± 0.0120
Eq. Mass | ✗ | 25k | 18.48 ± 0.19 | 45.12 ± 0.26 | 0.0017 ± 0.0000 | 0.1657 ± 0.0020 | 2.9437 ± 0.0162
Eq. Mass | ✓ | 1k | 17.21 ± 0.47 | 45.69 ± 1.22 | 0.0054 ± 0.0004 | 0.1572 ± 0.0047 | 2.9683 ± 0.0561
Eq. Size | ✗ | 25k | 74.34 ± 0.28 | 88.27 ± 0.11 | 0.1272 ± 0.0011 | 0.0660 ± 0.0018 | 1.6699 ± 0.0165
Eq. Size | ✓ | 1k | 77.06 ± 0.28 | 88.22 ± 0.10 | 0.1519 ± 0.0016 | 0.0230 ± 0.0050 | 1.3948 ± 0.0105
I-Max | ✗ | 25k | 77.07 ± 0.13 | 93.40 ± 0.17 | 0.0334 ± 0.0004 | 0.0577 ± 0.0008 | 0.9492 ± 0.0130
I-Max | ✓ | 1k | 77.13 ± 0.14 | 93.34 ± 0.17 | 0.0263 ± 0.0119 | 0.0201 ± 0.0088 | 0.9229 ± 0.0103
Table A7: Tab. 1 Extension: ImageNet - ResNet152
Binn. | sCW(?) | size | Acctop1 ↑ | Acctop5 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 78.33 ± 0.17 | 94.00 ± 0.14 | 0.0500 ± 0.0004 | 0.0512 ± 0.0018 | 0.8760 ± 0.0133
Eq. Mass | ✗ | 25k | 17.45 ± 0.10 | 44.87 ± 0.37 | 0.0017 ± 0.0000 | 0.1555 ± 0.0010 | 2.9526 ± 0.0168
Eq. Mass | ✓ | 1k | 16.25 ± 0.54 | 45.53 ± 0.81 | 0.0064 ± 0.0004 | 0.1476 ± 0.0054 | 2.9471 ± 0.0556
Eq. Size | ✗ | 25k | 75.50 ± 0.28 | 88.85 ± 0.19 | 0.1223 ± 0.0008 | 0.0604 ± 0.0017 | 1.6012 ± 0.0252
Eq. Size | ✓ | 1k | 78.24 ± 0.16 | 88.81 ± 0.19 | 0.1480 ± 0.0015 | 0.0286 ± 0.0053 | 1.3308 ± 0.0178
I-Max | ✗ | 25k | 78.24 ± 0.16 | 93.91 ± 0.17 | 0.0334 ± 0.0005 | 0.0521 ± 0.0015 | 0.8842 ± 0.0135
I-Max | ✓ | 1k | 78.19 ± 0.21 | 93.82 ± 0.17 | 0.0295 ± 0.0030 | 0.0196 ± 0.0049 | 0.8638 ± 0.0135
Table A8: Tab. 1 Extension: CIFAR100 - WRN
Binn. | sCW(?) | size | Acctop1 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 81.35 ± 0.13 | 0.1113 ± 0.0010 | 0.0748 ± 0.0018 | 0.7816 ± 0.0076
Eq. Mass | ✗ | 5k | 60.78 ± 0.62 | 0.0129 ± 0.0010 | 0.4538 ± 0.0074 | 1.1084 ± 0.0117
Eq. Mass | ✓ | 1k | 62.04 ± 0.53 | 0.0252 ± 0.0032 | 0.4744 ± 0.0049 | 1.1789 ± 0.0308
Eq. Size | ✗ | 5k | 80.39 ± 0.36 | 0.1143 ± 0.0013 | 0.0783 ± 0.0032 | 1.0772 ± 0.0184
Eq. Size | ✓ | 1k | 81.12 ± 0.15 | 0.1229 ± 0.0030 | 0.0273 ± 0.0055 | 1.0165 ± 0.0105
I-Max | ✗ | 5k | 81.22 ± 0.12 | 0.0692 ± 0.0020 | 0.0751 ± 0.0024 | 0.7878 ± 0.0090
I-Max | ✓ | 1k | 81.30 ± 0.22 | 0.0518 ± 0.0036 | 0.0231 ± 0.0067 | 0.7593 ± 0.0085
Table A9: Tab. 1 Extension: CIFAR100 - ResNeXt8x64
Binn. | sCW(?) | size | Acctop1 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 81.93 ± 0.08 | 0.0979 ± 0.0015 | 0.0590 ± 0.0028 | 0.7271 ± 0.0026
Eq. Mass | ✗ | 5k | 63.02 ± 0.54 | 0.0131 ± 0.0012 | 0.4764 ± 0.0057 | 1.0535 ± 0.0191
Eq. Mass | ✓ | 1k | 64.48 ± 0.64 | 0.0265 ± 0.0011 | 0.4980 ± 0.0070 | 1.1232 ± 0.0277
Eq. Size | ✗ | 5k | 80.81 ± 0.26 | 0.1070 ± 0.0008 | 0.0700 ± 0.0030 | 1.0178 ± 0.0066
Eq. Size | ✓ | 1k | 81.99 ± 0.21 | 0.1195 ± 0.0013 | 0.0230 ± 0.0033 | 0.9556 ± 0.0071
I-Max | ✗ | 5k | 81.99 ± 0.08 | 0.0601 ± 0.0027 | 0.0627 ± 0.0034 | 0.7318 ± 0.0026
I-Max | ✓ | 1k | 81.96 ± 0.14 | 0.0549 ± 0.0081 | 0.0205 ± 0.0074 | 0.7127 ± 0.0040
Table A10: Tab. 1 Extension: CIFAR100 - DenseNet
Binn. | sCW(?) | size | Acctop1 ↑ | CWECE (1/K) ↓ | top1ECE ↓ | NLL
Baseline | ✗ | - | 82.36 ± 0.26 | 0.1223 ± 0.0008 | 0.0762 ± 0.0015 | 0.7542 ± 0.0143
Eq. Mass | ✗ | 5k | 57.23 ± 0.50 | 0.0117 ± 0.0011 | 0.4173 ± 0.0051 | 1.1819 ± 0.0228
Eq. Mass | ✓ | 1k | 58.11 ± 0.21 | 0.0233 ± 0.0005 | 0.4339 ± 0.0024 | 1.2049 ± 0.0405
Eq. Size | ✗ | 5k | 81.35 ± 0.23 | 0.1108 ± 0.0017 | 0.0763 ± 0.0029 | 1.0207 ± 0.0183
Eq. Size | ✓ | 1k | 82.22 ± 0.30 | 0.1192 ± 0.0024 | 0.0219 ± 0.0021 | 0.9482 ± 0.0137
I-Max | ✗ | 5k | 82.35 ± 0.26 | 0.0740 ± 0.0007 | 0.0772 ± 0.0010 | 0.7618 ± 0.0145
I-Max | ✓ | 1k | 82.32 ± 0.22 | 0.0546 ± 0.0122 | 0.0189 ± 0.0071 | 0.7022 ± 0.0124
" }, { "heading": "A10 EXTEND TAB. 2 FOR MORE SCALING METHODS, DATASETS AND MODELS", "text": "Tab. 2 in Sec. 4.2 of the main paper is replicated across datasets and models, and includes more scaling methods for comparison. The three binning methods all use the shared CW strategy; therefore 1k calibration samples are sufficient. The basic setting remains the same as Tab. 2. Three different ImageNet models can be found in Tab. A14, Tab. A15 and Tab. A16. Three models for CIFAR100 can be found in Tab. A17, Tab. A18 and Tab. A19. Similarly, CIFAR10 models can be found in Tab. A20, Tab. A21 and Tab. A22.
Being analogous to Tab. 2, we observe that in most cases matrix scaling performs the best at accuracy, but fails to provide satisfactory calibration performance as measured by ECEs, Brier scores and NLLs. Among the scaling methods, GP (Wenger et al., 2020) is the top performing one. Among the binning schemes, our proposal of I-Max binning outperforms Eq. Mass and Eq. Size at accuracies, ECEs, NLLs and Brier scores. The combination of I-Max binning with GP excels at the ECE performance. Note that, among all methods, Eq. Mass binning suffers from severe accuracy degradation after multi-class calibration. The reason behind this was discussed in Sec. 3.3 of the main paper. Given the poor accuracy, it is not in the scope of the calibration performance comparison.
We also observe that GP performs better at NLL/Brier than the I-Max variants. GP is trained by directly optimizing the NLL as its loss. As a non-parametric Bayesian method, GP has a larger model expressive capacity than binning. While achieving better NLL/Brier, it costs significantly more computational complexity and memory. In contrast, I-Max only relies on logic comparisons at test time. Among the binning schemes, I-Max w. GP achieves the best NLL/Brier across the datasets and models. It is noted that I-Max w. GP remains a binning scheme, so the combination does not change the model capacity of I-Max. GP is only exploited during training to improve the optimization of I-Max's bin representatives. Besides the low complexity benefit, I-Max w. GP as a binning scheme does not suffer from the ECE underestimation issue of scaling methods such as GP.
We further note that, as a cross-entropy measure between two distributions, the NLL would be an ideal metric for calibration evaluation. However, empirical NLL and Brier favor highly accurate and highly confident classifiers, as each sample having only one hard label essentially implies maximum confidence on a single class. For this reason, during training, the empirical NLL loss will keep pushing the prediction probability towards one even after reaching 100% training set accuracy. As a result, the trained classifier shows poor calibration performance at test time (Guo et al., 2017). In contrast to NLL/Brier, empirical ECEs use hard labels differently. The ground truth correctness associated to the prediction confidence p is estimated by averaging over the hard labels of the samples receiving the prediction probability p, or close to p. Due to averaging, the empirical ground truth correctness is usually not a hard label. Lastly, we use a small example to show the difference between NLL/Brier and ECE: for N predictions, all assigned a confidence of 1.0 and containing M mistakes, the calibrated confidence is M/N < 1. Unlike ECE, the NLL/Brier loss is non-zero only for the M wrong predictions, despite all N predictions being miscalibrated. This example shows that NLL/Brier penalize miscalibration far less than ECE." } ]
2021
MUTUAL INFORMATION MAXIMIZATION-BASED BINNING
SP:8be0ea7136590dd63b9a82556995ef1e7b1d644c
[ "The paper investigates the double descent phenomenon. It proposes the augmentation of the dataset via concatenating the covariate x and interpolating the label y, which increases the data size from n to n^2. The paper shows that the phenomenon of double descent can be mitigated via augmenting the input. The idea of investigating double descent from manipulating samples is novel and interesting." ]
The double descent curve is one of the most intriguing properties of deep neural networks. It contrasts the classical bias-variance curve with the behavior of modern neural networks, occurring where the number of samples nears the number of parameters. In this work, we explore the connection between the double descent phenomena and the number of samples in the deep neural network setting. In particular, we propose a construction which augments the existing dataset by artificially increasing the number of samples. This construction empirically mitigates the double descent curve in this setting. We reproduce existing work on deep double descent, and observe a smooth descent into the overparameterized region for our construction. This occurs both with respect to the model size, and with respect to the number epochs.
[]
[ { "authors": [ "Madhu S Advani", "Andrew M Saxe" ], "title": "High-dimensional dynamics of generalization error in neural networks", "venue": "arXiv preprint arXiv:1710.03667,", "year": 2017 }, { "authors": [ "Jimmy Ba", "Murat Erdogdu", "Taiji Suzuki", "Denny Wu", "Tianzong Zhang" ], "title": "Generalization of twolayer neural networks: An asymptotic viewpoint", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Peter L Bartlett", "Philip M Long", "Gabor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "arXiv preprint arXiv:1906.11300,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine learning practice and the bias-variance trade-off", "venue": "arXiv preprint arXiv:1812.11118,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Siyuan Ma", "Soumik Mandal" ], "title": "To understand deep learning we need to understand kernel learning, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Ji Xu" ], "title": "Two models of double descent for weak features, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Ji Xu" ], "title": "Two models of double descent for weak features", "venue": "arXiv preprint arXiv:1903.07571,", "year": 2019 }, { "authors": [ "Koby Bibas", "Yaniv Fogel", "Meir Feder" ], "title": "A new look at an old problem: A universal learning approach to linear regression", "venue": null, "year": 1905 }, { "authors": [ "Emmanuel Caron", "Stephane Chretien" ], "title": "A finite sample analysis of the double descent phenomenon for ridge function estimation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Lin Chen", "Yifei Min", "Mikhail Belkin", "Amin Karbasi" ], "title": "Multiple descent: Design your own generalization curve, 2020", "venue": null, "year": 2020 }, { "authors": [ "Stéphane d’Ascoli", "Levent Sagun", "Giulio Biroli" ], "title": "Triple descent and the two kinds of overfitting", "venue": "Where why do they appear?,", "year": 2020 }, { "authors": [ "Mario Geiger", "Stefano Spigler", "Stéphane d’ Ascoli", "Levent Sagun", "Marco Baity-Jesi", "Giulio Biroli", "Matthieu Wyart" ], "title": "Jamming transition as a paradigm to understand the loss landscape of deep neural networks", "venue": "Physical Review E,", "year": 2019 }, { "authors": [ "Mario Geiger", "Arthur Jacot", "Stefano Spigler", "Franck Gabriel", "Levent Sagun", "Stéphane d’ Ascoli", "Giulio Biroli", "Clément Hongler", "Matthieu Wyart" ], "title": "Scaling description of generalization with number of parameters in deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2020 }, { "authors": [ "Behrooz Ghorbani", "Song Mei", "Theodor Misiakiewicz", "Andrea Montanari" ], "title": "Linearized two-layers neural networks in high dimension, 2019", "venue": null, "year": 2019 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J Tibshirani" ], "title": "Surprises in high dimensional ridgeless least squares interpolation", "venue": null, "year": 1903 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q. 
Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Ultra-deep neural networks without residuals", "venue": "arXiv preprint arXiv:1605.07648,", "year": 2016 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin", "Xiyu Zhai" ], "title": "On the multiple descent of minimum-norm interpolants and restricted lower isometry of kernels, 2019", "venue": null, "year": 2019 }, { "authors": [ "Song Mei", "Andrea Montanari" ], "title": "The generalization error of random features regression: Precise asymptotics and double descent curve", "venue": null, "year": 1908 }, { "authors": [ "Partha P. Mitra" ], "title": "Understanding overfitting peaks in generalization error: Analytical risk curves for l2 and l1 penalized interpolation", "venue": null, "year": 1906 }, { "authors": [ "Vidya Muthukumar", "Kailas Vodrahalli", "Anant Sahai" ], "title": "Harmless interpolation of noisy data in regression", "venue": "arXiv preprint arXiv:1903.09139,", "year": 2019 }, { "authors": [ "Preetum Nakkiran" ], "title": "More data can hurt for linear regression: Sample-wise double descent, 2019", "venue": null, "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": null, "year": 1912 }, { "authors": [ "Preetum Nakkiran", "Prayaag Venkat", "Sham Kakade", "Tengyu Ma" ], "title": "Optimal regularization can mitigate double descent, 2020", "venue": null, "year": 2020 }, { "authors": [ "Brady Neal", "Sarthak Mittal", "Aristide Baratin", "Vinayak Tantia", "Matthew Scicluna", "Simon LacosteJulien", "Ioannis Mitliagkas" ], "title": "A modern take on the bias-variance tradeoff in neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Ruslan Salakhutdinov" ], "title": "Deep learning tutorial at the simons institute, berkeley", "venue": null, "year": 2017 }, { "authors": [ "Elie Bienenstock Stuart Geman", "Ren Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Zitong Yang", "Yaodong Yu", "Chong You", "Jacob Steinhardt", "Yi Ma" ], "title": "Rethinking bias-variance trade-off for generalization of neural networks", "venue": "arXiv preprint arXiv:2002.11328,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization, 2016", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Underparameterization and overparameterization are at the heart of understanding modern neural networks. The traditional notion of underparameterization and overparameterization led to the classic U-shaped generalization error curve (Trevor Hastie & Friedman, 2001; Stuart Geman & Doursat, 1992), where generalization would worsen when the model had either too few (underparameterized) or too many parameters (overparameterized). Correspondingly, it was expected that an underparameterized model would underfit and fail to identify more complex and informative patterns, and an overparameterized model would overfit and identify non-informative patterns. This view no longer holds for modern neural networks.\nIt is widely accepted that neural networks are vastly overparameterized, yet generalize well. There is strong evidence that increasing the number of parameters leads to better generalization (Zagoruyko & Komodakis, 2016; Huang et al., 2017; Larsson et al., 2016), and models are often trained to achieve zero training loss (Salakhutdinov, 2017), while still improving in generalization error, whereas the traditional view would suggest overfitting.\nTo bridge the gap, Belkin et al. (2018a) proposed the double descent curve, where the underparameterized region follows the U-shaped curve, and the overparameterized region smoothly decreases in generalization error, as the number of parameters increases further. This results in a peak in generalization error, where a fewer number of samples would counter-intuitively decrease the error. There has been extensive experimental evidence of the double descent curve in deep learning (Nakkiran et al., 2019; Yang et al., 2020), as well as in models such as random forests, and one layer neural networks (Belkin et al., 2018a; Ba et al., 2020).\nOne recurring theme in the definition of overparameterization and underparameterization lies in the number of neural network parameters relative to the number of samples (Belkin et al., 2018a; Nakkiran et al., 2019; Ba et al., 2020; Bibas et al., 2019; Muthukumar et al., 2019; Hastie et al., 2019). On a high level, a greater number of parameters than samples is generally considered overparameterization, and fewer is considered underparameterization.\nHowever, this leads to the question “What is a sample?” In this paper, we revisit the fundamental underpinnings of overparameterization and underparameterization, and stress test when it means to be overparameterized or underparameterized, through extensive experiments of a cleverly constructed input. We artificially augment existing datasets by simply stacking every combination of inputs, and show the mitigation of the double descent curve in the deep neural network setting. We\nhumbly hypothesize that in deep neural networks we can, perhaps, artificially increase the number of samples without increasing the information contained in the dataset, and by implicitly changing the classification pipeline mitigate the double descent curve. 
In particular, the narrative of our paper obeys the following:\n• We propose a simple construction to artificially augment existing datasets of sizeO(n) by stacking inputs to produce a dataset of size O(n2).\n• We demonstrate that the construction has no impact on the double descent curve in the linear regression case.\n• We show experimentally that those results on double descent curve do not extend to the case of neural networks.\nConcretely, we reproduce results from recent landmark papers, and present the difference in behavior with respect to the double descent curve." }, { "heading": "2 RELATED WORKS", "text": "The double descent curve was proposed recently in (Belkin et al., 2018a), where the authors define overparameterization and underparameterization as the proportion of parameters to samples. The authors explain the phenomenon through the model capacity class. With more parameters in the overparameterized region, there is larger “capacity” (i.e., the model class contains more candidates), and thus may contain better, simpler models by Occam’s Razor rule. The interpolation region is suggested to exist when the model capacity is capable of fitting the data nearly perfectly by overfitting on non-informative features, resulting in higher test error. Experiments included a one layer neural network, random forests, and others.\nThe double descent curve is also observed in deep neural networks (Nakkiran et al., 2019), with the additional observation of epoch-wise double descent. Experimentation is amplified by label noise. With the observation of unimodel variance (Neal et al., 2018), Yang et al. (2020) also decomposes the risk into bias and variance, and posits that the double descent curve arises due to the bell-shaped variance curve rising faster than the bias decreases.\nThere is substantial theoretical work on double descent, particularly in the least squares regression setting. Advani & Saxe (2017) analyses this linear setting and proves the existence of the interpolation region, where the number of parameters equals the number of samples in the asymptotic limit where samples and parameters tend to infinity. Hastie et al. (2019) follows a similar line of work, and proves that regularization reduces the peak in the interpolation region. Belkin et al. (2019b) requires only finite samples, where the features and target be jointly Gaussian. Other papers with similar setup include (Bartlett et al., 2019; Muthukumar et al., 2019; Bibas et al., 2019; Mitra, 2019; Mei & Montanari, 2019).\nBa et al. (2020) analyses the least squares regression setting for two layer linear neural networks in the asymptotic setting, where the double descent curve is present when only the second layer is optimized. There is also work in proving that optimally tuned `2-norm regularization mitigates the double descent curve for certain linear regression models with isotropic data distribution (Nakkiran, 2019). This setting has also been studied with respect to the variance in the parameter space (Bartlett et al., 2019). Multiple descent has also been studied, and in particular there is work to show in the linear regression setting that multiple descent curves can be directly designed by the user (Chen et al., 2020). 
Additionally, there is supporting evidence of double descent in the sample-wise perspective (Nakkiran et al., 2020).
There is other work in this area, including studying the double descent curve for least squares in random feature models (Belkin et al., 2019a; d'Ascoli et al., 2020; Ghorbani et al., 2019), leveraging the Neural Tangent Kernel to argue that for a certain number of parameters the output of the neural network diverges (Geiger et al., 2020), characterizing double descent in non-linear settings (Caron & Chretien, 2020), kernel learning (Belkin et al., 2018b; Liang et al., 2019), and connecting to other fields (Geiger et al., 2019). Lastly, we note here that, in the deep neural network setting, models can be trained to zero training loss even with random labels (Zhang et al., 2016)." }, { "heading": "3 THE CONCATENATED INPUTS CONSTRUCTION", "text": "We introduce the concatenated inputs construction, on which our main hypothesis is based. The concatenated inputs construction refers to the general idea of concatenating pairs of inputs and element-wise adding and averaging pairs of outputs to produce new inputs and targets. This way, the size of a dataset can be artificially, but non-trivially, increased.
This construction can be applied both to the regression setting and the classification setting. In the setting of linear regression, for given input pairs $(x_1, y_1), (x_2, y_2)$, an augmented dataset can be constructed as
$$\left\{\left([x_1, x_1], \tfrac{y_1+y_1}{2}\right), \left([x_1, x_2], \tfrac{y_1+y_2}{2}\right), \left([x_2, x_1], \tfrac{y_2+y_1}{2}\right), \left([x_2, x_2], \tfrac{y_2+y_2}{2}\right)\right\},$$
where $[\alpha, \beta]$ represents the concatenation of the inputs $\alpha$ and $\beta$. In the setting of classification, the process is identical, where the targets are produced by element-wise addition and then averaged to sum to 1. The averaging is not strictly necessary even in the deep neural network classification case, where the binary cross entropy loss can be used instead of cross entropy. For test data, we concatenate the same input with itself, and the target is the original target. This way, a dataset of size O(n) is artificially augmented to size O(n^2).
Concretely, our reasons for the concatenated inputs construction are as follows: i) there is limited injection of information or semantic meaning; ii) the number of samples is significantly increased. For the purposes of understanding underparameterization, overparameterization and the double descent curve, such a construction tries to isolate the definition of a sample. We revisit and assess these implications in the context of extensive experiments in the following sections.
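A minimal sketch of the construction for a classification dataset follows; the names are ours, the concatenation axis is an assumption, and in practice the O(n^2) product is best sampled on the fly rather than materialized:

```python
import numpy as np

def concat_pair(x1, y1, x2, y2):
    # Concatenate the inputs; element-wise add and average the targets.
    return np.concatenate([x1, x2], axis=-1), 0.5 * (y1 + y2)

def sample_concat_batch(xs, ys_onehot, batch_size, rng=np.random.default_rng()):
    # Draw random (i, j) pairs from the O(n^2) augmented dataset.
    i = rng.integers(len(xs), size=batch_size)
    j = rng.integers(len(xs), size=batch_size)
    return concat_pair(xs[i], ys_onehot[i], xs[j], ys_onehot[j])

def make_test_inputs(xs, ys_onehot):
    # At test time, each input is concatenated with itself; the target is unchanged.
    return np.concatenate([xs, xs], axis=-1), ys_onehot
```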
Namely, we wish to motivate that the concatenated inputs construction is not expected to add any information and is therefore not expected to impact the double descent curve.\nWe reproduce the linear regression setting from Nakkiran (2019), given in Figure 1. For the concatenated inputs construction, we first draw the number of samples before concatenation and construction of the augmented dataset. We observe that, by construction, the concatenated inputs construction does not affect the double descent curve, and the peak occurs in the exact same location. We also make the remark here that it is not surprising that this is the case, and it is not complicated to understand why from a theoretical perspective." }, { "heading": "4.2 ONE HIDDEN LAYER FEEDFORWARD NEURAL NETWORK", "text": "Following linear regression, we move to neural networks. We train a feedforward neural network with one hidden layer and ReLU activations on a subset of the MNIST dataset, reproducing the experimental setup from Belkin et al. (2018a). We vary the number of parameters in the neural network by changing the size of the hidden layer. We use the Cross Entropy loss instead of the original MSE loss due to the prevalence of those losses for image classification tasks. This is shown in Figure 2.\nWe observe the double descent in the loss versus number of parameters, but were unable to produce the double descent in the error. In the rightmost plot, the double descent curve is completely removed in the concatenated inputs construction relative to the other two settings. Namely, a smooth decrease in loss is observed, where there is a clear double descent in the other cases. Furthermore, we provide the extra setting of concatenating each input only by itself, and the double descent curve is present almost exactly in this scenario. This provides further evidence that the disappearance of the double descent is not due to the extra parameters which originate from the larger sized inputs. In this setting, it appears that the behavior of underparameterization and overparameterization can be altered by simply artificially increasing the number of samples through concatenating images.\nIn addition, the model trained on MNIST and one-hot vectors can be concatenated with itself, with all other parameters being zero, to produce a model with two times the number of hidden units which can be applied to the concatenated inputs construction. We consider this setting in the context of a possible explanation of the interpolation region, where the number of parameters nears that of samples. Concretely, it is possible for a neural network with double the hidden units in the concatenated inputs construction to recover the double descent curve by learning two smaller, disconnected networks, where each of the smaller networks are the ones learned in the double descent peak of the standard, one-hot case. However, in practice while the network can do so, it does not appear to, which leads to the smooth descent in the rightmost plot in Figure 2." }, { "heading": "4.3 DEEP NEURAL NETWORKS", "text": "In this section, we investigate deep double descent in several settings, including model size double descent, and epoch-wise double descent. Finally, we decompose the loss into bias and variance to further investigate the effect of the concatenated inputs construction." }, { "heading": "4.3.1 MODEL SIZE DOUBLE DESCENT", "text": "ResNet18 - CIFAR10. 
" }, { "heading": "4.3 DEEP NEURAL NETWORKS", "text": "In this section, we investigate deep double descent in several settings, including model size double descent and epoch-wise double descent. Finally, we decompose the loss into bias and variance to further investigate the effect of the concatenated inputs construction." }, { "heading": "4.3.1 MODEL SIZE DOUBLE DESCENT", "text": "ResNet18 - CIFAR10. We train a ResNet18 architecture on the CIFAR10 dataset in Figure 3, reproducing the experimental setup from Nakkiran et al. (2019). We vary the number of parameters in the neural network by changing the width k. In this experimental setup, we use deep neural networks and reproduce the double descent with respect to the model size. Immediately, we see that the double descent curve is relatively mitigated in the case of the concatenated inputs construction in the region from k = 5 to 20. Here, the curve is much smoother, although not entirely smooth. Notably, the concatenated inputs construction retains the test error across k, except in the interpolation region, where it achieves significantly lower test error (> 5%). We also note that the conclusion is identical when the error is plotted against the number of parameters, since the concatenated inputs construction typically adds a very small number of parameters (< 5%).
We also present the plot for the loss in Figure 3 (top row). Here, we observe a clear double descent in the one-hot case, where the model follows the classical U-shaped curve prior to the interpolation region, and then steadily decreases in loss after. The concatenated inputs construction exhibits more interesting behavior, where the U curve is significantly muted and mitigated, and there is no clear peak of the curve.
ResNet18 - CIFAR100. We train a ResNet18 architecture on the CIFAR100 dataset in Figure 4, reproducing the experimental setup from Nakkiran et al. (2019) with an almost identical setup to ResNet18 - CIFAR10. The results are similar and slightly clearer here: the double descent curve in error entirely disappears with the concatenated inputs construction, and the test error is decreased in the interpolation region. Again, the test error difference is significant (> 5%)." }, { "heading": "4.3.2 EPOCH-WISE DOUBLE DESCENT", "text": "ResNet18 - CIFAR10. We use the same setting as in the previous section, except we train models for an additional 600 epochs, for a total of 1000 epochs. In Figure 5, we plot the test error against epochs. In the one-hot setting, as expected, we observe a U shape for medium sized models and a double descent for larger models (Nakkiran et al., 2019). For the concatenated inputs construction, a flat U shape and double descent is indeed observed, although they are significantly mitigated. The mitigation allows a 5% improvement at the end of training for medium sized models and a 10% improvement in the middle of training for large sized models.
ResNet18 - CIFAR100. Similarly to above, we use the same setting as in the previous section, except we train models for a total of 1000 epochs. Results are given in Figure 6. The one-hot results mirror ResNet18 - CIFAR10, as expected (Nakkiran et al., 2019). For the concatenated inputs construction, we observe complete mitigation of the double descent curve for medium and large sized models. For large models, this is more than a 10% gap." }, { "heading": "4.3.3 BIAS VARIANCE DECOMPOSITION", "text": "In this section, we follow Yang et al. (2020) and decompose the loss into bias and variance. Namely, let CE denote the cross entropy loss, $T$ a random variable representing the training set, $\pi$ the true one-hot label, $\bar{\pi}$ the average log-probability after normalization, and $\hat{\pi}$ the output of the neural network. Then,
$$\mathbb{E}_T[\mathrm{CE}(\pi, \hat{\pi})] = D_{\mathrm{KL}}(\pi \,\|\, \bar{\pi}) + \mathbb{E}_T[D_{\mathrm{KL}}(\bar{\pi} \,\|\, \hat{\pi})],$$
where the first component is the bias and the second component is the variance. On a high level, the variance can then be estimated by training separate same-capacity models on different splits of the dataset, and then measuring the difference in outputs on the same test set. The bias is then computed by subtracting the empirical variance from the empirical risk. For finer details, see Yang et al. (2020).
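In code, this estimator amounts to averaging log-probabilities over splits, renormalizing, and measuring the two KL terms. A sketch under our assumptions, where log_probs holds normalized per-split log-probabilities of shape [num_splits, num_test, num_classes]:

```python
import numpy as np
from scipy.special import logsumexp

def bias_variance_ce(log_probs, labels):
    # pi_bar: normalized exponential of the mean log-probability across splits.
    mean_log = log_probs.mean(axis=0)
    log_pi_bar = mean_log - logsumexp(mean_log, axis=-1, keepdims=True)
    # Bias: KL(pi || pi_bar) with one-hot pi reduces to -log pi_bar at the label.
    bias = -log_pi_bar[np.arange(len(labels)), labels].mean()
    # Variance: E_T[ KL(pi_bar || pi_hat) ], averaged over splits and test points.
    pi_bar = np.exp(log_pi_bar)
    variance = (pi_bar[None] * (log_pi_bar[None] - log_probs)).sum(axis=-1).mean()
    return bias, variance
```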
For training, we follow Yang et al. (2020) and train a ResNet-34 (He et al., 2016) on the CIFAR10 dataset with stochastic gradient descent (learning rate = 0.1, momentum = 0.9). The learning rate is decayed by a factor of 0.1 every 200 epochs, and a weight decay of 5e-4 is used. The width k of the network is varied suitably between 1 and 64. We also make 5 splits of 10,000 training samples for the calculation of bias and variance.
We present results in Figure 7. The concatenated inputs construction significantly delays and smoothens the increase in variance relative to the standard case, where the unimodal variance is significantly sharper. This impacts the shape of the test error, where in this setting we see a shifted bump in test error for the concatenated inputs construction. One possible explanation is that, in the case of deep neural networks, the concatenated inputs construction is a form of implicit regularization for small models, which controls overfitting and leads to a smoother variance curve." }, { "heading": "5 DISCUSSION", "text": "We revisit the topic of overparameterization, underparameterization and the double descent curve. The understanding of overparameterization, underparameterization, and the double descent curve is strongly tied to the number of samples. It appears that, given a fixed number of unique samples, it is possible to manipulate overparameterization and underparameterization by artificially boosting the number of samples via the concatenated inputs construction. This is based on the observation of removing or shifting the double descent curve through the concatenated inputs construction.
A similar topic on double descent is the possible explanation that the model is being forced to fit the training data as perfectly as possible, and at some model capacity it is possible to fit the training data perfectly by overfitting on non-existent, or weakly present, features. This results in overfitting and the double descent curve. Interestingly, the concatenated inputs construction generally removes the double descent curve, even though it is possible to build models for the concatenated inputs construction from models for the standard setting. This suggests a possible route to improve our understanding of the relationship to model capacity.
Furthermore, we consider the topic of underfitting. It is well known that overparameterized neural networks don't exhibit strong overfitting in practice, even though they can memorize the dataset (Zhang et al., 2016). The experimental results in this work regarding underparameterization, overparameterization and the double descent curve support that the behavior of the neural network can change with respect to the number of samples, even if the majority of samples add limited information, via the concatenated inputs construction. In this view, the concatenated inputs construction creates a possibly huge dataset, for example 50,000^2 = 2,500,000,000 samples for the originally 50,000-sample CIFAR10 dataset, which is far larger than the parameter count of any neural network used for CIFAR10. Yet, there is no noticeable underfitting.
Namely, the concatenated inputs construction very quickly achieves performance comparable to the standard one-hot vector deep learning setting. This suggests we may need to rethink the relationship between underfitting and the number of parameters, samples, and model capacity." }, { "heading": "6 CONCLUSION", "text": "In this paper, we examine the double descent phenomenon through the concatenated inputs construction. The concatenated inputs construction is designed to add limited information whilst artificially augmenting the size of the dataset. As constructed, in the linear regression setting there is no impact on the location of the peak. Yet, in the neural network setting, the concatenated inputs construction generally removes the double descent curve. Finally, we explore this phenomenon through the bias-variance decomposition, and draw connections with samples, model capacity, and underfitting in the discussion." } ]
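For concreteness, the following sketch shows one plausible pairing-based reading of the concatenated inputs construction, suggested by the 50,000² sample count above: every ordered pair of inputs is concatenated, as are their one-hot targets. The construction itself is defined earlier in the paper, outside this excerpt, so treat this as an illustrative assumption rather than the authors' exact definition.

```python
import numpy as np

def concatenated_inputs_dataset(x, y_onehot):
    """Hypothetical pairing construction: N unique samples -> N**2 training points.

    x:        [N, D] flattened inputs.
    y_onehot: [N, C] one-hot targets.
    Returns inputs of shape [N**2, 2*D] and targets of shape [N**2, 2*C].
    """
    n = x.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    xx = np.concatenate([x[i.ravel()], x[j.ravel()]], axis=1)
    yy = np.concatenate([y_onehot[i.ravel()], y_onehot[j.ravel()]], axis=1)
    return xx, yy

# For N = 50,000 the full product is far too large to materialize, so in
# practice pairs would be sampled on the fly during training.
```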
2,020
null
SP:496ef52f5094a12fe59e9966848b69b54c7763fd
[ "The paper proposes combining model-based RL with high-level skill learning and composition through hierarchical RL, into a single reinforcement learning framework. More specifically, the proposed approach leverages planning and composing skills in the low-dimensional, high-level representation, and learn low-level skills conditioned on the high-level skills. Only the low-level policies are executed in the environment to generate experiences. A mutual information objective is used to learn low-level policies conditioned on high-level skills, and this was shown to improve sample efficiency as the low-level policies do not learn to ignore the high-level skills they are conditioned on. " ]
To quickly solve new tasks in complex environments, intelligent agents need to build up reusable knowledge. For example, a learned world model captures knowledge about the environment that applies to new tasks. Similarly, skills capture general behaviors that can apply to new tasks. In this paper, we investigate how these two approaches can be integrated into a single reinforcement learning agent. Specifically, we leverage the idea of partial amortization for fast adaptation at test time. For this, actions are produced by a policy that is learned over time while the skills it conditions on are chosen using online planning. We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks and demonstrate improved sample efficiency in single tasks as well as in transfer from one task to another, as compared to competitive baselines. Videos are available at: https://sites.google.com/view/latent-skill-planning/
[ { "affiliations": [], "name": "Kevin Xie" }, { "affiliations": [], "name": "Homanga Bharadhwaj" }, { "affiliations": [], "name": "Danijar Hafner" }, { "affiliations": [], "name": "Animesh Garg" }, { "affiliations": [], "name": "Florian Shkurti" } ]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Brandon Amos", "Denis Yarats" ], "title": "The differentiable cross-entropy method", "venue": "arXiv preprint arXiv:1909.12830,", "year": 2019 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Andrew G Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete event dynamic systems,", "year": 2003 }, { "authors": [ "Homanga Bharadhwaj", "Kevin Xie", "Florian Shkurti" ], "title": "Model-predictive planning via cross-entropy and gradient-based optimization", "venue": "In Learning for Dynamics and Control,", "year": 2020 }, { "authors": [ "Arunkumar Byravan", "Jost Tobias Springenberg", "Abbas Abdolmaleki", "Roland Hafner", "Michael Neunert", "Thomas Lampe", "Noah Siegel", "Nicolas Heess", "Martin Riedmiller" ], "title": "Imagined value gradients: Model-based policy optimization with tranferable latent dynamics models", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical bayes", "venue": "arXiv preprint arXiv:1801.08930,", "year": 2018 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "Jean Harb", "Pierre-Luc Bacon", "Martin Klissarov", "Doina Precup" ], 
"title": "When waiting is not an option: Learning options with a deliberation cost", "venue": "arXiv preprint arXiv:1709.04571,", "year": 2017 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vijay R Konda", "John N Tsitsiklis" ], "title": "Actor-critic algorithms. In Advances in neural information processing", "venue": null, "year": 2000 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Masashi Okada", "Tadahiro Taniguchi" ], "title": "Variational inference mpc for bayesian model-based reinforcement learning", "venue": "arXiv preprint arXiv:1907.04202,", "year": 2019 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Reuven Y Rubinstein" ], "title": "Optimization of computer simulation models with rare events", "venue": "European Journal of Operational Research,", "year": 1997 }, { "authors": [ "Ramanan Sekar", "Oleh Rybkin", "Kostas Daniilidis", "Pieter Abbeel", "Danijar Hafner", "Deepak Pathak" ], "title": "Planning to explore via self-supervised world models", "venue": "arXiv preprint arXiv:2005.05960,", "year": 2020 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv preprint arXiv:1907.01657,", "year": 2019 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Emanuel Todorov" ], "title": "General duality between optimal control and estimation", "venue": "IEEE Conference on Decision and Control,", "year": 2008 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "Tingwu Wang", "Jimmy Ba" ], "title": "Exploring model-based planning with policy networks", "venue": "arXiv preprint arXiv:1906.08649,", "year": 2019 }, { "authors": [ "Tingwu Wang", "Xuchan Bao", "Ignasi Clavera", "Jerrick Hoang", "Yeming Wen", "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ 
"Grady Williams", "Paul Drews", "Brian Goldfain", "James M Rehg", "Evangelos A Theodorou" ], "title": "Aggressive driving with model predictive path integral control", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Zhongwen Xu", "Hado P van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": null, "year": 1910 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nHumans can effortlessly compose skills, where skills are a sequence of temporally correlated actions, and quickly adapt skills learned from one task to another. In order to build re-usable knowledge about the environment, Model-based Reinforcement Learning (MBRL) (Wang et al., 2019) provides an intuitive framework which holds the promise of training agents that generalize to different situations, and are sample efficient with respect to number of environment interactions required for training. For temporally composing behaviors, hierarchical reinforcement learning (HRL) (Barto & Mahadevan, 2003) seeks to learn behaviors at different levels of abstraction explicitly.\nA simple approach for learning the environment dynamics is to learn a world model either directly in the observation space (Chua et al., 2018; Sharma et al., 2019; Wang & Ba, 2019) or in a latent space (Hafner et al., 2019; 2018). World models summarize an agent’s experience in the form of learned transition dynamics, and reward models, which are used to learn either parametric policies by amortizing over the entire training experience (Hafner et al., 2019; Janner et al., 2019), or perform online planning as done in Planet (Hafner et al., 2018), and PETS (Chua et al., 2018). Amortization here refers to learning a parameterized policy, whose parameters are updated using samples during the training phase, and which can then be directly queried at each state to output an action, during evaluation.\nFully online planning methods such as PETS (Chua et al., 2018) only learn the dynamics (and reward) model and rely on an online\n∗Kevin and Homanga contributed equally to this work.\nsearch procedure such as Cross-Entropy Method (CEM; Rubinstein, 1997) on the learned models to determine which action to execute next. Since rollouts from the learned dynamics and reward models are not executed in the actual environment during training, these learned models are sometimes also referred to as imagination models (Hafner et al., 2018; 2019). Fully amortized methods such as Dreamer (Hafner et al., 2019), train a reactive policy with many rollouts from the imagination model. They then execute the resulting policy in the environment.\nThe benefit of the amortized method is that it becomes better with experience. Amortized policies are also faster. An action is computed in one forward pass of the reactive policy as opposed to the potentially expensive search procedure used in CEM. Additionally, the performance of the amortized method is more consistent as CEM relies on drawing good samples from a random action distribution. On the other hand, the shortcoming of the amortized policy is generalization. When attempting novel tasks unseen during training, CEM will plan action sequences for the new task, as per the new reward function while a fully amortized method would be stuck with a behaviour optimized for the training tasks. Since it is intractable to perform fully online random shooting based planning in high-dimensional action spaces (Bharadhwaj et al., 2020; Amos & Yarats, 2019), it motivates the question: can we combine online search with amortized policy learning in a meaningful way to learn useful and transferable skills for MBRL?\nTo this end, we propose a partially amortized planning algorithm that temporally composes high-level skills through the Cross-Entropy Method (CEM) (Rubinstein, 1997), and uses these skills to condition a low-level policy that is amortized over the agent’s experience. 
Our world model consists of a learned latent dynamics model and a learned latent reward model. We have a mutual information (MI) based intrinsic reward objective, in addition to the predicted task rewards, that is used to train the low-level policy, while the high-level skills are planned through CEM using the learned task rewards. We term our approach Learning Skills for Planning (LSP).\nThe key idea of LSP is that the high-level skills are able to abstract out essential information necessary for solving a task, while being agnostic to irrelevant aspects of the environment, such that given a new task in a similar environment, the agent will be able to meaningfully compose the learned skills with very little fine-tuning. In addition, since the skill space is low dimensional, we can leverage the benefits of online planning in skill space through CEM, without encountering the intractability of using CEM for planning directly in the higher-dimensional action space, especially for longer time horizons (Figure 1).\nIn summary, our main contributions are developing a partially amortized planning approach for MBRL, demonstrating that high-level skills can be temporally composed using this scheme to condition low-level policies, and experimentally demonstrating the benefit of LSP on challenging locomotion tasks that require composing different behaviors to solve the task, as well as its benefit in terms of transfer from one quadruped locomotion task to another with very little adaptation in the target task." }, { "heading": "2 BACKGROUND", "text": "We discuss learning latent dynamics for MBRL and mutual information skill discovery, which serve as the basic theoretical tools for our approach." }, { "heading": "2.1 LEARNING LATENT DYNAMICS AND BEHAVIORS IN IMAGINATION", "text": "Latent dynamics models are special cases of world models used in MBRL that project observations into a latent representation amenable for planning (Hafner et al., 2019; 2018). This framework is general as it can model both partially observed environments, where sensory inputs can be pixel observations, and fully observable environments, where sensory inputs can be proprioceptive state features. The latent dynamics models we consider in this work consist of four key components: a representation module p_θ(s_t | s_{t−1}, a_{t−1}, o_t) and an observation module q_θ(o_t | s_t) that encode observations and actions to continuous vector-valued latent states s_t, a latent forward dynamics module q_θ(s_t | s_{t−1}, a_{t−1}) that predicts future latent states given only the past states and actions, and a task reward module q_θ(r_t | s_t) that predicts the reward from the environment given the current latent state. To learn this model, the agent interacts with the environment and maximizes the following expectation under the dataset of environment interactions D = {(o_t, a_t, r_t)}:\n\n$$\mathcal{J} \doteq \mathbb{E}_{\mathcal{D}}\Big(\sum_t \big(\mathcal{J}_O^t + \mathcal{J}_R^t + \mathcal{J}_D^t\big)\Big) + \text{const}, \qquad \mathcal{J}_O^t \doteq \ln q(o_t \mid s_t), \qquad \mathcal{J}_R^t \doteq \ln q(r_t \mid s_t), \qquad \mathcal{J}_D^t \doteq -\beta\, \mathrm{KL}\big(p(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\big\|\, q(s_t \mid s_{t-1}, a_{t-1})\big). \tag{1}$$\n\nFor optimizing behavior under this latent dynamics model, the agent rolls out trajectories in imagination and estimates the value V(·) of the imagined trajectories {s_τ, a_τ, r_τ}_{τ=t}^{t+H} through TD(λ) estimates, as described by Sutton & Barto (2018); Hafner et al. (2019). The agent can either learn a fully amortized policy q_φ(a | s) as done in Dreamer, by backpropagating through the learned value network v_ψ(·), or plan online through CEM, for example as in Planet."
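To make Eq. (1) concrete, here is a one-step PyTorch sketch of the objective with diagonal Gaussian heads. The Gaussian distribution choice and the function signature are our simplifications for illustration; the actual implementation follows Dreamer's sequence-level training.

```python
import torch
import torch.distributions as D

def world_model_loss(post, prior, obs_dist, rew_dist, obs, rew, beta=1.0):
    """Negative of the Eq. (1) objective for a single time step (up to constants).

    post:     D.Normal for p(s_t | s_{t-1}, a_{t-1}, o_t)   (representation model)
    prior:    D.Normal for q(s_t | s_{t-1}, a_{t-1})        (latent dynamics model)
    obs_dist: D.Normal for q(o_t | s_t), built from a posterior sample
    rew_dist: D.Normal for q(r_t | s_t)
    """
    j_o = obs_dist.log_prob(obs).sum(-1)                  # reconstruction term J_O
    j_r = rew_dist.log_prob(rew)                          # reward term J_R
    j_d = -beta * D.kl_divergence(post, prior).sum(-1)    # KL regularizer J_D
    return -(j_o + j_r + j_d).mean()                      # minimize the negative objective
```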
}, { "heading": "2.2 MUTUAL INFORMATION SKILL DISCOVERY", "text": "Some methods for skill discovery have adopted a probabilistic approach that uses the mutual information between skills and future states as an objective (Sharma et al., 2019). In this approach, skills are represented through a latent variable z upon which a low level policy π(a|s, z) is conditioned. Given the current state s0, skills are sampled from some selection distribution p(z|s0). The skill conditioned policy is executed under the environment dynamics pd(st+1|st, a) resulting in a series of future states abbreviated s′ := {s}. Mutual information is defined as:\nMI(z, {s}|s0) = H(z|s0)−H(z|{s}, s0) = H({s}|s0)−H({s}|s0, z)\nIt quantifies the reduction in uncertainty about the future states given the skill and vice versa. By maximizing the mutual information with respect to the low level policy, the skills are encouraged to produce discernible future states." }, { "heading": "3 PARTIAL AMORTIZATION THROUGH HIERARCHY", "text": "Our aim is to learn behaviors suitable for solving complex control tasks, and amenable to transfer to different tasks, with minimal fine-tuning. To achieve this, we consider the setting of MBRL, where the agent builds up re-usable knowledge of the environment dynamics. For planning, we adopt a partial amortization strategy, such that some aspects of the behavior are re-used over the entire training experience, while other aspects are learned online. We achieve partial amortization by forming high level latent plans and learning a low level policy conditioned on the latent plan. The three different forms of amortization in planning are described visually through probabilistic graphical models in Figure 2 and Figure 3.\nWe first describe the different components of our model, motivate the mutual information based auxiliary objective, and finally discuss the complete algorithm.\nWorld model. Our world model is a latent dynamics model consisting of the components described in section 2.\nLow level policy. The low-level policy qφ(at|st, z) is used to decide which action to execute given the current latent state st and the currently active skill z. Similar to Dreamer (Hafner et al., 2019), we also train a value model vψ(st) to estimate the expected rewards the action model achieves from each state st. We estimate value the same way as in equation 6 of Dreamer, balancing bias and variance. The action model is trained to maximize the estimate of the value, while the value model is trained to fit the estimate of the value that alters as the action model is updated, as done in a typical actor-critic setup (Konda & Tsitsiklis, 2000).\nHigh level skills. In our framework high level skills are continuous random variables that are held for a fixed number K steps. The high-level skills z are sampled from a skill selection distribution p(z1:dH/Ke|ζ) = N (µ,Σ) which is optimized for task performance through CEM. Here, H denotes the planning horizon. For the sake of notational convenience we denote z1:dH/Ke as z. Let (j) denote the jth CEM iteration. We first sample G skills {z(g)}Gg=1 ∼ p(z|ζ(j)), execute G parallel imaginary rollouts of horizon H in the learned model with the skill-conditioned policy qφ(at|st, z(g)). Instead of evaluating rollouts based only on the sum of rewards, we utilize the value network and compute value estimates {Vg}Gg=1. 
We sort {V_g}_{g=1}^{G}, choose the top M values, and use the corresponding skills to update the sampling distribution parameters as ζ^{(j+1)} = (µ^{(j+1)}, Σ^{(j+1)}):\n\n$$\mu^{(j+1)} = \mathrm{Mean}(\{z^{(m)}\}_{m=1}^{M}), \qquad \Sigma^{(j+1)} = \mathrm{Variance}(\{z^{(m)}\}_{m=1}^{M})$$" }, { "heading": "3.1 OVERALL ALGORITHM", "text": "Our overall algorithm consists of the three phases typical in an MBRL pipeline, which are performed iteratively. The complete algorithm is shown in Algorithm 1. The sub-routine for CEM planning that gets called in Algorithm 1 is described in Algorithm 2.\nModel Learning. We sample a batch of tuples from the dataset of environment interactions {(a_t, o_t, r_t)}_{t=k}^{k+L} ∼ D, compute the latent states s_t ∼ p_θ(s_t | s_{t−1}, a_{t−1}, o_t), and use the resulting data to update the models p_θ(s_t | s_{t−1}, a_{t−1}, o_t), q_θ(s_t | s_{t−1}, a_{t−1}), and q_θ(r_t | s_t) through the variational information bottleneck (VIB) (Tishby et al., 2000; Alemi et al., 2016) objective, as in equation 13 of Hafner et al. (2019) and as described in section 2.\nBehavior Learning. Here, we roll out the low-level policy q_φ(a_t | s_t, z) in the world model and use the state transitions and predicted rewards to optimize the parameters of the policy φ, the skill distribution ζ, the value model ψ, and the backward skill predictor χ. The backward skill predictor predicts the skill z given latent rollouts {s}.\nEnvironment Interaction. This step is to collect data in the actual environment for updating the model parameters. Using Model-Predictive Control (MPC), we re-sample high-level skills from the optimized p_ζ(z) every K steps, and execute the low-level policy q_φ(a_t | s_t, z), conditioned on the currently active skill z. Hence, the latent plan has a lower temporal resolution compared to the low-level policy. This helps us perform temporally abstracted exploration easily in the skill space. We store the (observation, action, reward) tuples in the dataset D." }, { "heading": "3.2 MUTUAL INFORMATION SKILL OBJECTIVE", "text": "Merely conditioning the low-level policy q_φ(a_t | s_t, z) on the skill z is not sufficient, as it is prone to ignoring it. Hence, we incorporate maximization of the mutual information (MI) between the latent skills z and the sequence of states {s} as an auxiliary objective. In this paper, we make use of imagination rollouts to estimate the mutual information under the agent’s learned dynamics model. We decompose the mutual information in terms of skill uncertainty reduction: MI(z, {s} | s_0) = H(z | s_0) − H(z | {s}, s_0).\nEstimating MI(z, {s} | s_0). Explicitly writing out the entropy terms, we have\n\n$$\mathrm{MI}(z, \{s\} \mid s_0) = H(z \mid s_0) - H(z \mid \{s\}, s_0) = \int p(z, \{s\}, s_0) \log \frac{p(z \mid \{s\}, s_0)}{p(z \mid s_0)}$$\n\nIn this case we need a tractable approximation to the skill posterior p(z | s_0, s′):\n\n$$\mathrm{MI}(z, \{s\} \mid s_0) = \int p(z, \{s\}, s_0) \Big( \log \frac{q(z \mid \{s\}, s_0)}{p(z \mid s_0)} + \log \frac{p(z \mid \{s\}, s_0)}{q(z \mid \{s\}, s_0)} \Big)$$\n\nHere the latter term is a KL divergence and must hence be non-negative, providing a lower bound for MI:\n\n$$\mathrm{MI}(z, \{s\} \mid s_0) \geq \int p(z, \{s\}, s_0) \log \frac{q(z \mid \{s\}, s_0)}{p(z \mid s_0)} = \int p(z, \{s\}, s_0) \log q(z \mid \{s\}, s_0) - \int p(z \mid s_0)\, p(s_0) \log p(z \mid s_0) = \mathbb{E}_{p(z, \{s\}, s_0)}[\log q(z \mid \{s\}, s_0)] + \mathbb{E}_{s_0}[H[p(z \mid s_0)]]$$\n\nWe parameterize q(z | {s}, s_0) with χ, i.e. q_χ(z | {s}, s_0), and call it the backward skill predictor, as it predicts the skill z given latent rollouts {s}. It is trained through standard supervised learning to maximize the likelihood of imagined rollouts, E_{p(z, {s}, s_0)}[log q(z | {s}, s_0)]. This mutual information objective is a function of the policy only through the first term, and hence we use it as the intrinsic reward for the agent: r_i = log q_χ(z | {s}, s_0). 
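A minimal PyTorch sketch of the backward skill predictor and the intrinsic reward it defines is given below. The 2-layer, 256-unit ELU sizes echo the settings reported in Appendix A.3, but the Gaussian output head, the log-std clamping, and the flattened-rollout input are our own assumptions for illustration.

```python
import torch
import torch.nn as nn

class BackwardSkillPredictor(nn.Module):
    """q_chi(z | {s}, s0): predicts the skill from an imagined latent rollout."""

    def __init__(self, state_dim, rollout_len, skill_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim * (rollout_len + 1), hidden), nn.ELU(),
            nn.Linear(hidden, 2 * skill_dim))

    def forward(self, states):  # states: [B, rollout_len + 1, state_dim], s0 included
        mean, log_std = self.net(states.flatten(1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())

def intrinsic_reward(predictor, states, z):
    """r_i = log q_chi(z | {s}, s0); the same likelihood is maximized to train chi."""
    return predictor(states).log_prob(z).sum(-1)
```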
The second term E_{s_0}[H[p(z | s_0)]] is the entropy of the skill selection distribution. When skills begin to specialize, the CEM distribution will naturally decrease in entropy, and so we add Gaussian noise to the mean of the CEM-based skill distribution, µ ← µ + ε, where ε ∼ N(0, Iσ). By doing this we lower bound the entropy of the skill selection distribution." }, { "heading": "4 EXPERIMENTS", "text": "We perform experimental evaluation over locomotion tasks based on the DeepMind Control Suite framework (Tassa et al., 2018) to understand the following questions:\n\n• Does LSP learn useful skills and compose them appropriately to succeed in individual tasks?\n• Does LSP adapt quickly to a target task with different environment reward functions, after being pre-trained on another task?\n\nTo answer these, we perform experiments on locomotion tasks, using agents with different dynamics - Quadruped, Walker, Cheetah, and Hopper - and environments where either pixel observations or proprioceptive features are available to the agent. Our experiments consist of evaluation in single tasks, in transfer from one task to another, ablation studies, and visualization of the learned skills." }, { "heading": "4.1 SETUP", "text": "Baselines. We consider Dreamer (Hafner et al., 2019), which is a state-of-the-art model-based RL algorithm with fully amortized policy learning, as the primary baseline, based on its open-source tensorflow2 implementation. We consider a Hierarchical RL (HRL) baseline, HIRO (Nachum et al., 2018), that trains a high-level amortized policy (as opposed to high-level planning). For consistency, we use the same intrinsic reward for HIRO as our method. We consider two other baselines, named Random Skills, which has a hierarchical structure exactly as LSP but where the skills are sampled randomly and there is no planning at the level of skills, and RandSkillsInit, which is similar to Random Skills but does not include the intrinsic rewards (this is essentially equivalent to Dreamer with an additional random skill input). These two baselines are to help understand the utility of the learned skills. For the transfer experiments, we consider an additional baseline, a variant of our method that keeps the low-level policy fixed in the transfer environment. All results are over three random seeds.\nEnvironments. We consider challenging locomotion environments from the DeepMind Control Suite (Tassa et al., 2018) for evaluation, which require learning walking, running, and hopping gaits that can be achieved by temporally composing skills. In the Quadruped GetUp Walk task, a quadruped must learn to stand up from a randomly initialized position that is sometimes upside down, and walk on a plane without toppling, while in Quadruped Reach, the quadruped agent must walk in order to reach a particular goal location. In Quadruped Run, the same quadruped agent must run as fast as possible, with higher rewards for faster speed. In the Quadruped Obstacle environments (Fig. 6a), the quadruped agent must reach a goal while circumventing multiple cylindrical obstacles. In Cheetah Run and Walker Run, the cheetah and walker agents must run as fast as possible. In Hopper Hop, a one-legged hopper must hop in the environment without toppling. It is extremely challenging to maintain stability of this agent." }, { "heading": "4.2 SOLVING SINGLE LOCOMOTION TASKS.", "text": "In Figure 4 we evaluate our approach LSP in comparison to the fully amortized baseline Dreamer and the Random Skills baseline on a suite of challenging locomotion tasks. 
Although the environments have a single task objective, in order to achieve high rewards, the agents need to learn different walking gaits (Quadruped Walk, Walker Walk), running gaits (Quadruped Run, Walker Run), and hopping gaits (Hopper Hop), and compose learned skills appropriately for locomotion.\nFrom the results, it is evident that LSP either outperforms Dreamer or is competitive with it on all the environments. This demonstrates the benefit of the hierarchical skill-based policy learning approach of LSP. In addition, we observe that LSP significantly outperforms the Random Skills and RandSkillsInit baselines, indicating that learning skills and planning over them is important to succeed in these locomotion tasks. In order to tease out the benefits of hierarchy and partial amortization separately, we consider another hierarchical RL baseline, HIRO (Nachum et al., 2018), which is a state-of-the-art HRL algorithm that learns a high-level amortized policy. LSP outperforms HIRO in all the tasks, suggesting the utility of temporally composing the learned skills through planning as opposed to amortizing over them with a policy. HIRO has not been shown to work from images in the original paper (Nachum et al., 2018), and so we do not have learning curves for HIRO in the image-based environments, as it cannot scale directly to work with image-based observations." }, { "heading": "4.3 TRANSFER FROM ONE TASK TO ANOTHER.", "text": "In Figure 5 we show results for a quadruped agent that is pre-trained on one task and must transfer to a different task with similar environment dynamics but a different task description.\nQuadruped GetUp Walk → Reach Goal. The quadruped agent is pre-trained on the task of standing up from a randomly initialized position that is sometimes upside down, and walking on a plane without toppling. The transfer task consists of walking to reach a goal, and environment rewards are specified in terms of distance to the goal. The agent is randomly initialized and is sometimes initialized upside down, such that it must learn to get upright and then start walking towards the goal. We see that LSP can adapt much more quickly to the transfer task, achieving a reward of 500 after only 70,000 steps, while Dreamer requires 130,000 steps to achieve the same reward, indicating sample-efficient transfer of learned skills.\nWe observe that the variant of our method LSP with a fixed policy in the transfer environment performs as well as or slightly better than LSP. This suggests that while transferring from the GetUp Walk to the Reach task, low-level control can be directly transferred, while planning over the high-level skills, which have changed, is essential. As the target task is different, it requires the composition of different skills.\nQuadruped Walk → Reach Goal. The quadruped agent is randomly initialized, but it is ensured that it is upright at initialization. In this setting, after pre-training, we re-label the value of rewards in the replay buffer of both the baseline Dreamer and LSP with the reward function of the target Quadruped Reach Goal environment. To do this, we consider each tuple in the replay buffer of imagined trajectories during pre-training, and change the reward labels to the reward value obtained by querying the reward function of the target task at the corresponding state and action of the tuple. 
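The relabeling step just described amounts to a one-line rewrite of the stored rewards. A minimal sketch, with illustrative field names that are not taken from the released implementation:

```python
def relabel_replay_buffer(buffer, target_reward_fn):
    """Overwrite stored rewards with the target task's reward function.

    buffer: iterable of dicts with keys 's', 'a', 'r' (state, action, reward).
    target_reward_fn(s, a): reward function of the target task.
    """
    for transition in buffer:
        transition['r'] = target_reward_fn(transition['s'], transition['a'])
    return buffer
```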
From the plot in Figure 5, we see that LSP is able to quickly bootstrap learning from the re-labeled replay buffer and achieve better target adaptation than the baseline.\nFrom the figure it is evident that for HIRO, the transfer task rewards converge at a much lower value than for our method LSP and Dreamer, suggesting that the skills learned by an amortized high-level policy overfit to the source task and cannot be efficiently adapted to the target task. Similar to the previous transfer task, we also observe that the variant of our method LSP with a fixed policy in the transfer environment performs as well as or slightly better than LSP. This provides further evidence that, since the underlying dynamics of the Quadruped agent is similar across both tasks, low-level control can be directly transferred, while the high-level skills need to be adapted through planning." }, { "heading": "4.4 MUTUAL INFORMATION ABLATION STUDY", "text": "In order to better understand the benefit of the mutual information skill objective, we compare performance against a baseline that is equivalent to LSP but does not use the intrinsic reward to train the low-level policy. We call this ablation baseline, which removes the mutual information maximization between skills and states, no_MI. We show the respective reward curves for the Quadruped GetUp Walk task in Figure 6b. Without the mutual information objective, LSP learns less quickly, but still faster than the baseline Dreamer. This emphasizes the necessity of the MI skill objective in section 3.2 and suggests that merely conditioning the low-level policy on the learned skills is still effective to some extent, but potentially suffers from the low-level policy learning to ignore them." }, { "heading": "4.5 VISUALIZATION OF THE LEARNED SKILLS", "text": "In Figure 7, we visualize learned skills of LSP while transferring from the Quadruped Walk to the Quadruped Reach task. Each sub-figure (with composited images) corresponds to a different trajectory rolled out from the same initial state. It is evident that the learned skills are reasonably diverse and useful in the transfer task." }, { "heading": "5 RELATED WORK", "text": "Skill discovery. Some RL algorithms explicitly try to learn task decompositions in the form of re-usable skills, which are generally formulated as temporally abstracted actions (Sutton et al., 1999). Most recent skill discovery algorithms seek to maximize the mutual information between skills and input observations (Gregor et al., 2016; Florensa et al., 2017), sometimes resulting in an unsupervised diversity maximization objective (Eysenbach et al., 2018; Sharma et al., 2019). DADS (Sharma et al., 2019) is an unsupervised skill discovery algorithm for learning diverse skills with a skill-transition dynamics model, but it does not learn a world model for low-level actions and observations; hence it cannot learn through imagined rollouts and instead requires many environment rollouts with different sampled skills.\nHierarchical RL. Hierarchical RL (HRL) (Barto & Mahadevan, 2003) methods decompose a complex task into sub-tasks and solve each task by optimizing a certain objective function. HIRO (Nachum et al., 2018) learns a high-level policy and a low-level policy, and computes intrinsic rewards for training the low-level policy through sub-goals specified as part of the state representation the agent observes. 
Some other algorithms follow the options framework (Sutton et al., 1999; Bacon et al., 2017), where options correspond to temporal abstractions that require specifying termination conditions. In practice, it is difficult to learn meaningful termination conditions without additional regularization (Harb et al., 2017). These HRL approaches are inherently specific to the tasks being trained on, and do not necessarily transfer to new domains, even with similar dynamics.\nTransfer in RL. Multiple previous works have investigated the problem of transferring policies to different environments. Progressive Networks (Rusu et al., 2016) bootstrap knowledge from previously learned tasks by avoiding catastrophic forgetting of the learned models; Byravan et al. (2020) perform model-based value estimation for learning an amortized policy and transfer to tasks with different specified reward functions while keeping the dynamics the same; and Plan2Explore (Sekar et al., 2020) first learns a global world model without task rewards through a self-supervised objective and, given a user-specified reward function at test time, quickly adapts to it. In contrast to these, several meta-RL approaches learn policy parameters that generalize well with little fine-tuning (often in the form of gradient updates) to target environments (Finn et al., 2017; Xu et al., 2018; Wang et al., 2016; Rakelly et al., 2019; Yu et al., 2019).\nAmortization for planning. Most current MBRL approaches use some version of the ‘Cross-Entropy Method’ (CEM) or Model-Predictive Path Integral (MPPI) for doing a random population-based search of plans given the current model (Wang & Ba, 2019; Hafner et al., 2018; Williams et al., 2016; Sharma et al., 2019). These online non-amortized planning approaches are typically very expensive in high-dimensional action spaces. Although Wang & Ba (2019) introduce the idea of performing the CEM search in the parameter space of a distilled policy, it is still very costly and requires a lot of samples for convergence. To mitigate these issues, some recent approaches have combined gradient-descent-based planning with CEM (Bharadhwaj et al., 2020; Amos & Yarats, 2019). In contrast, Janner et al. (2019) and Hafner et al. (2019) fully amortize learned policies over the entire training experience, which is fast even for high-dimensional action spaces, but cannot directly transfer to new environments with different dynamics and reward functions. We combine the best of both approaches by using CEM to plan online for high-level skills (of low dimensionality) and amortizing the skill-conditioned policy for low-level actions (of higher dimensionality)." }, { "heading": "6 DISCUSSION", "text": "In this paper, we analyzed the implications of partial amortization with respect to sample efficiency and overall performance on a suite of locomotion and transfer tasks. We specifically focused on the setting where partial amortization is enforced through a hierarchical planning model consisting of a fully amortized low-level policy and a fully online high-level skill planner. 
Through experiments in both state-based and image-based environments, we demonstrated the efficacy of our approach in terms of planning for useful skills and executing high-reward-achieving policies conditioned on those skills, as evaluated by sample efficiency (measured by the number of environment interactions) and asymptotic performance (measured by cumulative rewards and success rate).\nOne key limitation of our algorithm is that CEM planning is prohibitive in high-dimensional action spaces, and so we cannot have a very high-dimensional skill space for planning with CEM, which might be necessary for learning more expressive/complex skills in real-world robot control tasks. One potential direction of future work is to incorporate amortized learning of skill policies during training, and use CEM for online planning of skills only during inference. Another direction could be to incorporate gradient-descent-based planning instead of a random search procedure such as CEM, but avoiding local optima in skill planning would be a potential challenge for gradient-descent planning." }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank Vector Institute Toronto for compute support. We thank Mayank Mittal, Irene Zhang, Alexandra Volokhova, Dylan Turpin, Arthur Allshire, Dhruv Sharma and other members of the UofT CS Robotics group for helpful discussions and feedback on the draft." }, { "heading": "CONTRIBUTIONS", "text": "All the authors were involved in designing the algorithm, shaping the design of experiments, and in writing the paper. Everyone participated in the weekly meetings and brainstorming sessions.\nKevin and Homanga led the project by deciding the problem to work on, setting up the coding infrastructure, figuring out details of the algorithm, running experiments, and logging results.\nDanijar guided the setup of experiments, and helped provide detailed insights when we ran into bottlenecks and helped figure out how to navigate challenges, both with respect to implementation, and algorithm design.\nAnimesh and Florian provided valuable insights on what contributions to focus on, and helped us in understanding the limitations of the algorithm at different stages of its development. They also motivated the students to keep going during times of uncertainty and stress induced by the pandemic." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ALGORITHM", "text": "Algorithm 1: Learning Skills for Planning\n\nInitialize dataset D with S random seed episodes. Initialize neural network parameters θ, φ, ψ. Define skill duration K, plan horizon H, CEM population size G, MaxCEMiter, and skill noise ε.\nwhile not converged do\n  for update step c = 1..C do\n    // Model learning\n    Draw B data sequences {(a_t, o_t, r_t)}_{t=k}^{k+L} ∼ D. Compute model states s_t ∼ p_θ(s_t | s_{t−1}, a_{t−1}, o_t). S ← S ∪ {s_t}. Update θ using representation learning.\n    // Behavior learning\n    ζ = (µ, Σ) ← CEM(S, MaxCEMiter, G, H, K, ζ^{(0)}); add noise µ ← µ + ε.\n    Compute rewards R = E(q_θ(r_τ | s_τ)) with the optimized CEM distribution ζ.\n    Compute the corresponding intrinsic rewards r_i.\n    Store the corresponding states into K-sized sequences {{s}_k}_{k=1}^{⌈H/K⌉} (for χ).\n    Use total rewards R + r_i to form value estimates, and update φ, ψ, χ.\n  // Environment interaction\n  o_1 ← env.reset()\n  for time step t = 0..T−1 do\n    // MPC in z space: resample skill every K timesteps\n    if t % K == 0 then\n      Sample skill z_{1:⌈H/K⌉} ∼ p(z_{1:⌈H/K⌉} | ζ). 
      Choose the first skill z = z_1.\n    Compute s_t ∼ p_θ(s_t | s_{t−1}, a_{t−1}, o_t) from history, choose a_t ∼ q_φ(a_t | s_t, z).\n    r_t, o_{t+1} ← env.step(a_t).\n  Add experience to dataset D ← D ∪ {(o_t, a_t, r_t)}_{t=1}^{T}." }, { "heading": "A.2 QUADRUPED OBSTACLE TRANSFER EXPERIMENT", "text": "We evaluate the ability of our method to transfer to more complex tasks. Here the source task is to walk forward at a constant speed in a random obstacle environment. The policy is trained in this source task for 500k steps before transferring to the target task, which is a pure sparse-reward task. The obstacles are arranged in a cove-like formation, where the straight line to the sparse target leads into a local minimum with the agent stuck against the obstacles. To be able to solve this task, the agent needs to be able to perform long-term, temporally correlated exploration. We keep all other settings the same but increase the skill length K to 30 time steps and the skill horizon H to 120 time steps after transfer in the target task, so that skills are held for longer and the agent plans further ahead. We see in Figure 10 that the trajectories explored by Dreamer are restricted to the vicinity of the initialization, and it does not explore beyond the obstacles. In contrast, LSP is able to fully explore the environment and reach the sparse goal multiple times. By transferring skills, LSP is able to explore at the level of skills and hence produces much more temporally coherent trajectories." }, { "heading": "A.3 SETTINGS AND HYPERPARAMETERS", "text": "Our method is based on the tensorflow2 implementation of Dreamer (Hafner et al., 2019) and retains most of the original hyperparameters for policy and model learning. Training is performed more frequently; in particular, 1 training update of all modules is done every 5 environment steps. This marginally improves the training performance and is used in the Dreamer baseline as well as our method. For feature-based experiments we replace the convolutional encoder and decoder with 2-layer MLP networks with 256 units each and ELU activation. Additionally, the dense decoder network outputs both the mean and log standard deviation of the Gaussian distribution over observations. The standard deviation is softly clamped between 0.1 and 1.5 as described in (Chua et al., 2018).\nFor LSP, skill vectors are 3-dimensional and are held for K = 10 steps before being updated. The CEM method has a planning horizon of H = 10, goes through MaxCEMiter = 4 iterations, proposes G = 16 skills, and uses the top M = 4 proposals to recompute statistics in each iteration. The additional noise added to the CEM-optimized distribution is Normal(0, 0.1).\nThe backwards skill predictor shares the same architecture and settings as the feature-based decoder module described above. It is trained with Adam with a learning rate of 8e-5." }, { "heading": "A.4 CONTROL AS INFERENCE", "text": "The control as inference framework (Todorov, 2008; Levine, 2018) provides a heuristic to encourage exploration. It performs inference in a surrogate probabilistic graphical model, where the likelihood of a trajectory being optimal (an indicator random variable O) is a hand-designed (monotonically increasing) function of the reward, log p(O|τ) = f(r). The induced surrogate posterior p(τ |O) places higher probability on higher-reward trajectories.\nSampling from p(τ |O) is in general intractable, and so a common solution is to employ variational inference for obtaining an approximation qθ(τ). 
The objective is to minimize the KL-divergence with respect to the true posterior:\n\n$$\min_\theta \mathrm{KL}(q_\theta(\tau) \,\|\, p(\tau \mid O)) = \max_\theta -\mathrm{KL}(q_\theta(\tau) \,\|\, p(\tau \mid O)) \tag{2}$$\n$$= \max_\theta \mathbb{E}_{q_\theta(\tau)}[\log p(O \mid \tau) - \log q_\theta(a)] \tag{3}$$\n$$= \max_\theta \mathbb{E}_{q_\theta(\tau)}[f(r)] + H[q_\theta(a)] \tag{4}$$\n\nNote that this objective boils down to maximizing a function of the reward and an entropy term H(·). State-of-the-art model-free reinforcement learning approaches can be interpreted as instances of this framework (Haarnoja et al., 2018). The connection has also been made in model-based RL (Okada & Taniguchi, 2019). Specifically, they show how the ubiquitous MPPI/CEM planning methods can be derived from this framework when applying variational inference to simple Gaussian action distributions. Though they show that more sophisticated distributions can be used in the framework as well (in principle), the only other model they investigate in the paper is a mixture of Gaussians distribution." }, { "heading": "A.5 HIERARCHICAL INFERENCE", "text": "We consider hierarchical action distributions that combine amortizing low-level behaviours and online planning by temporally composing these low-level behaviours. Specifically, we use the following variational distribution:\n\n$$q_{\phi,\theta}(\tau) = \prod_{t=1}^{T} p(s_{t+1} \mid s_t, a_t)\, q_\phi(a_t \mid s_t, z_{k(t)}) \prod_{k=1}^{K} q_{\theta_i}(z_k)$$\n\nHere z_{1:K} are latent variables defining a high-level plan that modulates the behaviour of the low-level policy q_φ(a_t | s_t, z_{k(t)}). k(t) is an assignment defining which of the K z’s will act at time t. For all our experiments, we chose a fixed-size window assignment such that k(t) = ⌊t(K/T)⌋.\nHere p(𝒯) is the task distribution over which we wish to amortize. For example, using p(s_0) would amortize over plans starting from different initial states. Note that the minimization over φ is outside the expectation, which means that it is amortized, whereas the minimization over θ is inside, such that it is inferred online:\n\n$$\min_\phi \mathbb{E}_{p(\mathcal{T})}\big[\min_\theta \mathrm{KL}(q_\theta(\tau) \,\|\, p(\tau \mid O))\big] \tag{5}$$\n$$= \max_\phi \mathbb{E}_{p(\mathcal{T})}\big[\max_\theta \mathbb{E}_{q_{\phi,\theta}(\tau)}[\log p(O \mid \tau)\, p_0(z) - \log q_\phi(a \mid s, z)\, q_{\theta_i}(z)]\big] \tag{6}$$\n\nThis formulation also directly relates to some meta-learning approaches that can be interpreted as hierarchical Bayes (Grant et al., 2018). There are different options for the inner and outer optimization in this type of meta-optimization." }, { "heading": "A.6 NOTE ON MUTUAL INFORMATION", "text": "In the paper we used the reverse skill predictor approach to estimate the mutual information objective. The alternate decomposition, in terms of the reduction in future state uncertainty, is given by:\n\n$$H(\{s\} \mid s_0) - H(\{s\} \mid z, s_0) = \int p(z, \{s\}, s_0) \log \frac{p(\{s\} \mid z, s_0)}{p(\{s\} \mid s_0)}$$\n\nIn this case, tractable approximations of the skill dynamics p(s′ | z, s_0) and the marginal dynamics p(s′ | s_0) are required. In theory, these can be formed exactly as compositions of the policy, skill selection distribution and dynamics model:\n\n$$\log p(\{s\} \mid z, s_0) = \log \prod_t p(s_{t+1} \mid s_t, a)\, \pi(a \mid s_t, z)$$\n$$\log p(\{s\} \mid s_0) = \log \mathbb{E}_{z' \mid s_0}[\, p(\{s\} \mid z', s_0) \,]$$\n\nThe difficulty in using this formulation in our case is due to the marginal future state distribution p({s} | s_0). To form the marginal we rely on a finite number K of Monte Carlo samples, which is biased, and our lower bound on mutual information will be reduced by log K. This means that for every given initial state we wish to train on, we must also sample a sufficiently large number of skills from the skill selection distribution to form an acceptable marginal approximation, greatly increasing the computation cost compared to using the reverse skill predictor." 
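To make the log K remark concrete, the following numpy sketch evaluates this alternative bound with a K-sample marginal; the [K, K] matrix layout of conditional log-likelihoods is our own illustrative choice.

```python
import numpy as np

def mi_future_state_lower_bound(logp_cond):
    """Monte-Carlo estimate of H({s}|s0) - H({s}|z, s0) from K sampled skills.

    logp_cond: [K, K] matrix with logp_cond[i, j] = log p({s}_i | z_j, s0),
               where rollout {s}_i was generated under skill z_i.
    The marginal log p({s}_i | s0) is approximated by a logsumexp over the K
    sampled skills minus log K, which biases the bound down by up to log K.
    """
    K = logp_cond.shape[0]
    log_marginal = np.logaddexp.reduce(logp_cond, axis=1) - np.log(K)  # [K]
    log_joint = np.diag(logp_cond)                                     # log p({s}_i | z_i, s0)
    return (log_joint - log_marginal).mean()
```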
}, { "heading": "A.7 MUTUAL INFORMATION SKLL PREDICTOR NOISE", "text": "Although mutual information is a popular objective for skill discovery, it is important to understand its limitations. When the underlying dynamics of the system are deterministic, a powerful world model may be able to make future predictions with high certainty. Consider if the agent is then able to communicate the skill identity to the skill predictor through a smaller subspace of the state\nspace. For example, imagine that in a quadruped walking task the height of a specific end effector at a certain future frame can be very confidently predicted by the dynamics model. Whether to lift the end effector up 9cm or 10cm may be largely negligible in terms of the dynamics of the motion, but the policy may elect to map the skill variable to the heights of a specific end effector at a certain future frame. When this occurs, diverse skills can become quite indistinguishable visually yet be very distinguishable by the skill predictor. In this failure case, the mutual information may be minimized without truly increasing the diversity of the motion. We use a simple idea to combat this by artificially adding noise to the input of the skill predictor. The higher the skill predictor input noise is, the harder it is to distinguish similar trajectories which forces the skills to produce more distinct outcomes.\nFigure 10: Example of collapsed skill policy. Here we show skill sequences sampled from a policy where input noise is not added to the skill predictor and subsequently the skill policy seems to perform the same motion for different skills (turning clockwise), despite inclusion of the mutual information objective. Each row is a different latent variable.\nAlgorithm 2: CEM subroutine for Algorithm 1 Function CEM(S, MaxCEMiter, G, H , K, ζ(0)):\nSample G initial skills {z(g)1:dH/Ke} G g=1 from prior p(z|ζ(0)) for j in range(MaxCEMiter) do for g in range(G) do\nfor each st do τ = t // Imagine trajectories {(sτ , aτ )}t+Hτ=t from each st. while τ < H , τ + + do\nif (τ − t)%K == 0 then Sample skill z1:dH/Ke ∼ p(z1:dH/Ke|ζ(j)) Sample action aτ ∼ qφ(aτ | sτ , z1) // Choose z1 via MPC Observe new state sτ+1 and compute new intrinsic reward ri\nCompute value estimates Vg from rewards of each imagined rollout Sort {Vg}Gg=1 and use the top M sequences to update ζ(j) = (µ(j),Σ(j)) Sample G skills {z(g)}Gg=1 from p(z|ζ(j))\nreturn ζ(MaxCEMiter)" } ]
2,021
null
SP:0e32b047c35f57579f4eb935720e6a4a61c33116
[ "The paper provides an evaluation of different hyperparameter settings for adversarial training. Specifically, it evaluates combinations of warmup, early stopping, weight decay, batch size and other parameters on adversarially trained models. The paper states that its overarching goal is to ``investigate how the implementation details affect the performance of the adversarial trained models''. " ]
Adversarial training (AT) is one of the most effective strategies for promoting model robustness. However, recent benchmarks show that most of the proposed improvements on AT are less effective than simply early stopping the training procedure. This counter-intuitive fact motivates us to investigate the implementation details of tens of AT methods. Surprisingly, we find that the basic settings (e.g., weight decay, training schedule, etc.) used in these methods are highly inconsistent. In this work, we provide comprehensive evaluations on CIFAR-10, focusing on the effects of mostly overlooked training tricks and hyperparameters for adversarially trained models. Our empirical observations suggest that adversarial robustness is much more sensitive to some basic training settings than we thought. For example, a slightly different value of weight decay can reduce the model robust accuracy by more than 7%, which is likely to override the potential improvement induced by the proposed methods. We distill a baseline training setting and re-implement previous defenses to achieve new state-of-the-art results. These facts also call for more attention to the overlooked confounders when benchmarking defenses.
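To ground the discussion, here is a minimal PyTorch sketch of PGD-based adversarial training showing where the basic settings the abstract flags (weight decay, momentum, learning-rate schedule) enter; the ε = 8/255, 10-step configuration is a common CIFAR-10 choice and is not asserted to match the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L_inf PGD used as the inner maximization in adversarial training."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def make_optimizer(model, lr=0.1, momentum=0.9, weight_decay=5e-4):
    # Weight decay and momentum -- two of the "basic settings" under study --
    # are configured here; the lr schedule (e.g. piecewise decay with early
    # stopping) would wrap this optimizer.
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=momentum, weight_decay=weight_decay)
```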
[ { "affiliations": [], "name": "ADVERSARIAL TRAINING" }, { "affiliations": [], "name": "Tianyu Pang" }, { "affiliations": [], "name": "Xiao Yang" }, { "affiliations": [], "name": "Yinpeng Dong" }, { "affiliations": [], "name": "Hang Su" }, { "affiliations": [], "name": "Jun Zhu" } ]
[ { "authors": [ "Jean-Baptiste Alayrac", "Jonathan Uesato", "Po-Sen Huang", "Alhussein Fawzi", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Maksym Andriushchenko", "Nicolas Flammarion" ], "title": "Understanding and improving fast adversarial training", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2020 }, { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a query-efficient black-box adversarial attack via random search", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Matan Atzmon", "Niv Haim", "Lior Yariv", "Ofer Israelov", "Haggai Maron", "Yaron Lipman" ], "title": "Controlling neural level sets", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Yogesh Balaji", "Tom Goldstein", "Judy Hoffman" ], "title": "Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets", "venue": "arXiv preprint arXiv:1910.08051,", "year": 2019 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2013 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Alexey Kurakin", "Nicolas Papernot", "Behar Veliqi", "Sharada P Mohanty", "Florian Laurent", "Marcel Salathé", "Matthias Bethge", "Yaodong Yu" ], "title": "Adversarial vision challenge", "venue": "In The NeurIPS’18 Competition,", "year": 2020 }, { "authors": [ "Qi-Zhi Cai", "Chang Liu", "Dawn Song" ], "title": "Curriculum adversarial training", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In IEEE Symposium on Security and Privacy (S&P),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In ACM Workshop on Artificial Intelligence and Security (AISec),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "Percy Liang", "John C Duchi" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", 
"year": 2019 }, { "authors": [ "Jinghui Chen", "Quanquan Gu" ], "title": "Rays: A ray searching method for hard-label adversarial attack", "venue": "arXiv preprint arXiv:2006.12792,", "year": 2020 }, { "authors": [ "Kejiang Chen", "Yuefeng Chen", "Hang Zhou", "Xiaofeng Mao", "Yuhong Li", "Yuan He", "Hui Xue", "Weiming Zhang", "Nenghai Yu" ], "title": "Self-supervised adversarial training", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In ACM Workshop on Artificial Intelligence and Security (AISec). ACM,", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Yash Sharma", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Ead: elastic-net attacks to deep neural networks via adversarial examples", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Tianlong Chen", "Sijia Liu", "Shiyu Chang", "Yu Cheng", "Lisa Amini", "Zhangyang Wang" ], "title": "Adversarial robustness: From self-supervised pre-training to fine-tuning", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yunpeng Chen", "Jianan Li", "Huaxin Xiao", "Xiaojie Jin", "Shuicheng Yan", "Jiashi Feng" ], "title": "Dual path networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Minhao Cheng", "Thong Le", "Pin-Yu Chen", "Jinfeng Yi", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Query-efficient hard-label black-box attack: An optimization-based approach", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Minhao Cheng", "Qi Lei", "Pin-Yu Chen", "Inderjit Dhillon", "Cho-Jui Hsieh" ], "title": "Cat: Customized adversarial training for improved robustness", "venue": "arXiv preprint arXiv:2002.06789,", "year": 2020 }, { "authors": [ "Shuyu Cheng", "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Improving black-box adversarial attacks with a transfer-based prior", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Zhijie Deng", "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Adversarial distributional training for robust deep learning", "venue": "arXiv preprint arXiv:2002.05999,", "year": 2020 }, { 
"authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Mma training: Direct input space margin maximization through adversarial training", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Yinpeng Dong", "Qi-An Fu", "Xiao Yang", "Tianyu Pang", "Hang Su", "Zihao Xiao", "Jun Zhu" ], "title": "Benchmarking adversarial robustness on image classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned verifiers", "venue": "arXiv preprint arXiv:1805.10265,", "year": 2018 }, { "authors": [ "Krishnamurthy Dvijotham", "Robert Stanforth", "Sven Gowal", "Timothy Mann", "Pushmeet Kohli" ], "title": "A dual approach to scalable verification of deep networks", "venue": "In Annual Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2018 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Reuben Feinman", "Ryan R Curtin", "Saurabh Shintre", "Andrew B Gardner" ], "title": "Detecting adversarial samples from artifacts", "venue": "arXiv preprint arXiv:1703.00410,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Sven Gowal", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "Uncovering the limits of adversarial training against norm-bounded adversarial examples", "venue": "arXiv preprint arXiv:2010.03593,", "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Minghao Guo", "Yuzhe Yang", "Rui Xu", "Ziwei Liu", "Dahua Lin" ], "title": "When nas meets robustness: In search of robust architectures against adversarial attacks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings 
in deep residual networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Lang Huang", "Chao Zhang", "Hongyang Zhang" ], "title": "Self-adaptive training: beyond empirical risk minimization", "venue": "arXiv preprint arXiv:2002.10319,", "year": 2020 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Haoming Jiang", "Zhehui Chen", "Yuyang Shi", "Bo Dai", "Tuo Zhao" ], "title": "Learning to defense by learning to attack", "venue": "arXiv preprint arXiv:1811.01213,", "year": 2018 }, { "authors": [ "Linxi Jiang", "Xingjun Ma", "Zejia Weng", "James Bailey", "Yu-Gang Jiang" ], "title": "Imbalanced gradients: A new cause of overestimated adversarial robustness", "venue": "arXiv preprint arXiv:2006.13726,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "In The International Conference on Learning Representations (ICLR) Workshops,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio", "Yinpeng Dong", "Fangzhou Liao", "Ming Liang", "Tianyu Pang", "Jun Zhu", "Xiaolin Hu", "Cihang Xie" ], "title": "Adversarial attacks and defences competition", "venue": "arXiv preprint arXiv:1804.00097,", "year": 2018 }, { "authors": [ "Saehyung Lee", "Hyungyu Lee", "Sungroh Yoon" ], "title": "Adversarial vertex mixup: Toward better adversarially robust generalization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Bai Li", "Shiqi Wang", "Suman Jana", "Lawrence Carin" ], "title": "Towards understanding fast adversarial training", "venue": "arXiv preprint arXiv:2006.03089,", "year": 2020 }, { "authors": [ "Pengcheng Li", "Jinfeng Yi", "Bowen Zhou", "Lijun Zhang" ], "title": "Improving the robustness of deep neural networks via adversarial training with triplet loss", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2019 }, { 
"authors": [ "Guanxiong Liu", "Issa Khalil", "Abdallah Khreishah" ], "title": "Using single-step adversarial training to defend iterative adversarial examples", "venue": "arXiv preprint arXiv:2002.09632,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Michael E Houle", "Grant Schoenebeck", "Dawn Song", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "arXiv preprint arXiv:1801.02613,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Chengzhi Mao", "Ziyuan Zhong", "Junfeng Yang", "Carl Vondrick", "Baishakhi Ray" ], "title": "Metric learning for adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jan Hendrik Metzen", "Tim Genewein", "Volker Fischer", "Bastian Bischoff" ], "title": "On detecting adversarial perturbations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Norman Mu", "Justin Gilmer" ], "title": "Mnist-c: A robustness benchmark for computer vision", "venue": "arXiv preprint arXiv:1906.02337,", "year": 2019 }, { "authors": [ "Muzammal Naseer", "Salman Khan", "Munawar Hayat", "Fahad Shahbaz Khan", "Fatih Porikli" ], "title": "A selfsupervised approach for adversarial robustness", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Tianyu Pang", "Chao Du", "Yinpeng Dong", "Jun Zhu" ], "title": "Towards robust detection of adversarial examples", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Tianyu Pang", "Chao Du", "Jun Zhu" ], "title": "Max-mahalanobis linear discriminant analysis networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Improving adversarial robustness via promoting ensemble diversity", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax crossentropy loss for adversarial robustness", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Jun Zhu" ], "title": "Mixup inference: Better exploiting mixup to defend adversarial attacks", "venue": "In International 
Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Tianyu Pang", "Xiao Yang", "Yinpeng Dong", "Kun Xu", "Hang Su", "Jun Zhu" ], "title": "Boosting adversarial training with hypersphere embedding", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "In IEEE European Symposium on Security and Privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Edward Raff", "Jared Sylvester", "Steven Forsyth", "Mark McLean" ], "title": "Barrage of random transforms for adversarially robust defense", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Ali Shafahi", "Amin Ghiasi", "Furong Huang", "Tom Goldstein" ], "title": "Label smoothing and logit squeezing: A replacement for adversarial training", "venue": "arXiv preprint arXiv:1910.11585,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free!", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Dawn Song", "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Florian Tramer", "Atul Prakash", "Tadayoshi Kohno" ], "title": "Physical adversarial examples for object detectors", "venue": "In USENIX Workshop on Offensive Technologies,", "year": 2018 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Confidence-calibrated adversarial training: Generalizing to unseen attacks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "Pin-Yu Chen", "Yupeng Gao" ], "title": "Is robustness the cost of accuracy? 
– a comprehensive study on the robustness of 18 deep image classification models", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Cecilia Summers", "Michael J Dinneen" ], "title": "Logit regularization methods for adversarial robustness", "venue": "ICLR submission,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Pedro Tabacof", "Eduardo Valle" ], "title": "Exploring the space of adversarial images", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Jonathan Uesato", "Brendan O’Donoghue", "Aaron van den Oord", "Pushmeet Kohli" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "S Vivek B", "R Venkatesh Babu" ], "title": "Single-step adversarial training with dropout scheduling", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Huaxia Wang", "Chun-Nam Yu" ], "title": "A direct approach to robust deep learning using adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J. 
Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-Tao Xia", "James Bailey", "Xingjun Ma" ], "title": "Skip connections matter: On the transferability of adversarial examples generated with resnets", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Cihang Xie", "Alan Yuille" ], "title": "Intriguing properties of adversarial training at scale", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Alan Yuille", "Quoc V Le" ], "title": "Smooth adversarial training", "venue": "arXiv preprint arXiv:2006.14536,", "year": 2020 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Zheng Xu", "Ali Shafahi", "Tom Goldstein" ], "title": "Exploring model robustness with adaptive networks and improved adversarial training", "venue": "arXiv preprint arXiv:2006.00387,", "year": 2020 }, { "authors": [ "Hongwei Yong", "Jianqiang Huang", "Xiansheng Hua", "Lei Zhang" ], "title": "Gradient centralization: A new optimization technique for deep neural networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In The British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Runtian Zhai", "Tianle Cai", "Di He", "Chen Dan", "Kun He", "John Hopcroft", "Liwei Wang" ], "title": "Adversarially robust generalization just requires more unlabeled data", "venue": null, "year": 1906 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Haichao Zhang", "Jianyu Wang" ], "title": "Defense against adversarial attacks using feature scatteringbased adversarial training", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In International Conference on Machine 
Learning (ICML),", "year": 2020 }, { "authors": [ "Cheng" ], "title": "2019b), quasi-gradient attacks (Chen et al., 2017a; Uesato et al., 2018; Ilyas et al., 2018), and decision-based attacks (Brendel et al., 2018; Cheng et al., 2019a). Adversarial attacks can be also realized in the physical world (Kurakin et al., 2017", "venue": "Song et al.,", "year": 2018 }, { "authors": [ "AutoAttack. Croce", "Hein" ], "title": "2020b) first propose the Auto-PGD (APGD", "venue": null, "year": 2020 }, { "authors": [ "Xie et al", "Raff" ], "title": "2019) or specified inference principle (Pang et al., 2020b). On the other hand, detection-based methods aim to filter out adversarial examples and resort to higher-level intervention. Although detection is a suboptimal strategy compared to classification, it can avoid over-confident wrong decisions. These efforts include training auxiliary classifiers to detect adversarial inputs (Metzen et al., 2017), designing detection statistics (Feinman et al., 2017", "venue": null, "year": 2017 }, { "authors": [ "Pang" ], "title": "2018a), or basing on additional probabilistic models (Song et al., 2018b). A.5 CONCURRENT WORK Gowal et al. (2020) also provide a comprehensive study on different training tricks of AT, and push forward the state-of-the-art performance of adversarially trained models on MNIST, CIFAR-10 and CIFAR-100", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial training (AT) has been one of the most effective defense strategies against adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015). Based on the primary AT frameworks like PGD-AT (Madry et al., 2018), many improvements have been proposed from different perspectives, and demonstrate promising results (detailed in Sec. 2). However, the recent benchmarks (Croce & Hein, 2020b; Chen & Gu, 2020) find that simply early stopping the training procedure of PGD-AT (Rice et al., 2020) can attain the gains from almost all the previously proposed improvements, including the state-of-the-art TRADES (Zhang et al., 2019b).\nThis fact is somewhat striking since TRADES also executes early stopping (one epoch after decaying the learning rate) in their code implementation. Besides, the reported robustness of PGD-AT in Rice et al. (2020) is much higher than in Madry et al. (2018), even without early-stopping. This paradox motivates us to check the implementation details of these seminal works. We find that TRADES uses weight decay of 2× 10−4 , Gaussian PGD initialization as δ0 ∼ N (0, αI), and eval mode of batch normalization (BN) when crafting adversarial examples, while Rice et al. (2020) use weight decay of 5× 10−4, uniform PGD initialization as δ0 ∼ U(− , ), and train mode of BN to generate adversarial examples. In our experiments on CIFAR-10 (e.g., Table 8), the two slightly different settings can differ the robust accuracy by ∼ 5%, which is significant according to the reported benchmarks. To have a comprehensive study, we further investigate the implementation details of tens of papers working on the AT methods, some of which are summarized in Table 1. We find that even using the same model architectures, the basic hyperparameter settings (e.g., weight decay, learning rate schedule, etc.) used in these papers are highly inconsistent and customized, which could affect the model performance and may override the gains from the methods themselves. Under this situation, if we directly benchmark these methods using their released code or checkpoints, some actually effective improvements would be under-estimated due to the improper hyperparameter settings.\nOur contributions. We evaluate the effects of a wide range of basic training tricks (e.g., warmup, early stopping, weight decay, batch size, BN mode, etc.) on the adversarially trained models. Our empirical results suggest that improper training settings can largely degenerate the model performance,\n∗Corresponding author. 1Code is available at https://github.com/P2333/Bag-of-Tricks-for-AT\nwhile this degeneration may be mistakenly ascribed to the methods themselves. We provide a baseline recipe for PGD-AT on CIFAR-10 as an example, and demonstrate the generality of the recipe on training other frameworks like TRADES. As seen in Table 16, the retrained TRADES achieve new state-of-the-art performance on the AutoAttack benchmark (Croce & Hein, 2020b).\nAlthough our empirical conclusions may not generalize to other datasets or tasks, we reveal the facts that adversarially trained models could be sensitive to certain training settings, which are usually neglected in previous work. These results also encourage the community to re-implement the previously proposed defenses with fine-tuned training settings to better explore their potentials." }, { "heading": "2 RELATED WORK", "text": "In this section, we introduce related work on the adversarial defenses and recent benchmarks. 
We provide details on adversarial attacks in Appendix A.1." }, { "heading": "2.1 ADVERSARIAL DEFENSES", "text": "To alleviate the adversarial vulnerability of deep learning models, many defense strategies have been proposed, but most of them can eventually be evaded by adaptive attacks (Carlini & Wagner, 2017b; Athalye et al., 2018). Other more theoretically guaranteed routines include training provably robust networks (Dvijotham et al., 2018a;b; Hein & Andriushchenko, 2017; Wong & Kolter, 2018) and obtaining certified models via randomized smoothing (Cohen et al., 2019). While these methods are promising, they currently do not match the state-of-the-art robustness under empirical evaluations.
The idea of adversarial training (AT) stems from the seminal work of Goodfellow et al. (2015), while other AT frameworks like PGD-AT (Madry et al., 2018) and TRADES (Zhang et al., 2019b) provided the winning solutions in adversarial competitions (Kurakin et al., 2018; Brendel et al., 2020). Based on these primary AT frameworks, many improvements have been proposed by incorporating mechanisms inspired by other domains, including ensemble learning (Tramèr et al., 2018; Pang et al., 2019), metric learning (Mao et al., 2019; Li et al., 2019; Pang et al., 2020c), generative modeling (Jiang et al., 2018; Pang et al., 2018b; Wang & Yu, 2019; Deng et al., 2020), semi-supervised learning (Carmon et al., 2019; Alayrac et al., 2019; Zhai et al., 2019), and self-supervised learning (Hendrycks et al., 2019; Chen et al., 2020a;b; Naseer et al., 2020). On the other hand, due to the high computational cost of AT, many efforts are devoted to accelerating the training procedure via reusing computations (Shafahi et al., 2019b; Zhang et al., 2019a), adaptive adversarial steps (Wang et al., 2019; Zhang et al., 2020) or one-step training (Wong et al., 2020; Liu et al., 2020; Vivek B & Venkatesh Babu, 2020). Follow-up works try to resolve the side effects (e.g., catastrophic overfitting) caused by these fast AT methods (Andriushchenko & Flammarion, 2020; Li et al., 2020)." }, { "heading": "2.2 ADVERSARIAL BENCHMARKS", "text": "Due to the large number of proposed defenses, several benchmarks have been developed to rank the adversarial robustness of existing methods. Dong et al. (2020) perform large-scale experiments to generate robustness curves, which are used for evaluating typical defenses. Croce & Hein (2020b) propose AutoAttack, which is an ensemble of four selected attacks. They apply AutoAttack on tens of previous defenses and provide a comprehensive leaderboard. Chen & Gu (2020) propose the black-box RayS attack, and establish a similar leaderboard for defenses. In this paper, we mainly apply the PGD attack and AutoAttack as two common ways to evaluate the models.
Beyond adversarial robustness, other efforts introduce augmented datasets for assessing robustness against general corruptions or perturbations. Mu & Gilmer (2019) introduce MNIST-C with a suite of 15 corruptions applied to the MNIST test set, while Hendrycks & Dietterich (2019) introduce ImageNet-C and ImageNet-P with common corruptions and perturbations on natural images. Evaluating robustness on these datasets can reflect the generality of the proposed defenses, and avoid overfitting to certain attack patterns (Engstrom et al., 2019; Tramèr & Boneh, 2019)."
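Since the PGD attack and AutoAttack are our main evaluation tools, the following is a minimal sketch of invoking AutoAttack's standard version, assuming the public interface of https://github.com/fra31/auto-attack; `model`, `x_test`, and `y_test` are placeholders for a trained classifier returning logits and CIFAR-10 test tensors.

```python
# Sketch: robust-accuracy evaluation with AutoAttack (standard version).
# Assumes the fra31/auto-attack package is installed.
from autoattack import AutoAttack

model.eval()
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```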
}, { "heading": "3 BAG OF TRICKS", "text": "Our overarching goal is to investigate how the usually overlooked implementation details affect the performance of the adversarially trained models. Our experiments are done on CIFAR-10 (Krizhevsky & Hinton, 2009) under the `∞ threat model of maximal perturbation = 8/255, without accessibility to additional data. We evaluate the models under 10-steps PGD attack (PGD-10) (Madry et al., 2018) and AutoAttack (AA) (Croce & Hein, 2020b). For the PGD attack, we apply untargeted mode using ground truth labels, step size of 2/255, and 5 restarts for evaluation / no restart for training. For the AutoAttack2, we apply the standard version, with no restart for AutoPGD and FAB, compared to 5 restarts for plus version. We consider some basic training tricks and perform ablation studies on each of them, based on the default training setting as described below:\nDefault setting. Following Rice et al. (2020), in the default setting, we apply the primary PGD-AT framework and the hyperparameters including batch size 128; SGD momentum optimizer with the initial learning rate of 0.1; weight decay 5× 10−4; ReLU activation function and no label smoothing; train mode for batch normalization when crafting adversarial examples. All the models are trained for 110 epochs with the learning rate decaying by a factor of 0.1 at 100 and 105 epochs, respectively. We report the results on the checkpoint with the best PGD-10 accuracy.\nNote that our empirical observations and conclusions may not always generalize to other datasets or AT frameworks, but we emphasize the importance of using consistent implementation details (not only the same model architectures) to enable fair comparisons among different AT methods.\n2https://github.com/fra31/auto-attack" }, { "heading": "256 83.33 52.20 82.24 52.52", "text": "" }, { "heading": "512 83.40 50.69 82.16 53.36", "text": "" }, { "heading": "256 86.21 52.90 85.89 56.09", "text": "" }, { "heading": "3.1 EARLY STOPPING AND WARMUP", "text": "Early stopping training epoch. The trick of early stopping w.r.t. the training epoch was first applied in the implementation of TRADES (Zhang et al., 2019b), where the learning rate decays at the 75th epoch and the training is stopped at the 76th epoch. Later Rice et al. (2020) provide a comprehensive study on the overfitting phenomenon in AT, and advocate early stopping the training epoch as a general strategy for preventing adversarial overfitting, which could be triggered according to the PGD accuracy on a split validation set. Due to its effectiveness, we regard this trick as a default choice.\nEarly stopping adversarial intensity. Another level of early stopping happens on the adversarial intensity, e.g., early stopping PGD steps when crafting adversarial examples for training. This trick was first applied by the runner-up of the defense track in NeurIPS 2018 adversarial vision challenge (Brendel et al., 2020). Later efforts are devoted to formalizing this early stopping mechanism with different trigger rules (Wang et al., 2019; Zhang et al., 2020). Balaji et al. (2019) early stop the adversarial perturbation, which has a similar effect on the adversarial intensity. In the left part of Table 2, we evaluate the method proposed by Zhang et al. (2020) due to its simplicity. As seen, this kind of early stopping can improve the performance on clean data while keeping comparable accuracy under PGD-10. However, the performance under the stronger AutoAttack is degraded.\nWarmup w.r.t. learning rate. 
Warmup w.r.t. learning rate. Warmup w.r.t. the learning rate is a general trick for training deep learning models (Goodfellow et al., 2016). In the adversarial setting, Wong et al. (2020) show that the one-cycle learning rate schedule is one of the critical ingredients for the success of FastAT. Thus, we evaluate the effect of this trick for the piecewise learning rate schedule and the PGD-AT framework. We linearly increase the learning rate from zero to the preset value in the first 10 / 15 / 20 epochs. As shown in the middle part of Table 2, the effect of warming up the learning rate is marginal.
Warmup w.r.t. adversarial intensity. In the AT procedure, warmup can also be executed w.r.t. the adversarial intensity. Cai et al. (2018) propose the curriculum AT process to gradually increase the adversarial intensity and monitor the overfitting trend. Qin et al. (2019) increase the maximal perturbation from zero to 8/255 in the first 15 epochs. In the right part of Table 2, we linearly increase the maximal perturbation in the first 10 / 15 / 20 epochs, while the effect is still limited.
Figure 1: (a) Test accuracy w.r.t. different values of weight decay. The reported checkpoints correspond to the best PGD-10 accuracy (Rice et al., 2020). We test on two model architectures, and highlight (with red circles) the three most commonly used weight decays in previous work; (b) Curves of test accuracy w.r.t. training epochs, where the model is WRN-34-10. We set the weight decay to be 1 × 10^-4, 2 × 10^-4, and 5 × 10^-4, respectively. We can observe that smaller weight decay leads to faster learning but also a stronger tendency to overfit w.r.t. the robust accuracy. In Fig. 4, we decay the learning rate early, before the models overfit, but weight decay of 5 × 10^-4 still achieves better robustness." }, { "heading": "3.2 TRAINING HYPERPARAMETERS", "text": "Batch size. On large-scale datasets like ImageNet (Deng et al., 2009), it has been recognized that the mini-batch size is an important factor influencing the model performance (Goyal et al., 2017), where a larger batch size traverses the dataset faster but requires more memory usage. In the adversarial setting, Xie et al. (2019) use a batch size of 4096 to train a robust model on ImageNet, which achieves state-of-the-art performance under adversarial attacks. As to the defenses reported on the CIFAR-10 dataset, the mini-batch sizes are usually chosen between 128 and 256, as shown in Table 1. To evaluate the effect, we test on two model architectures and four values of batch size in Table 3. Since the number of training epochs is fixed to 110, we also consider applying the linear scaling rule introduced in Goyal et al. (2017), i.e., when the mini-batch size is multiplied by k, multiply the learning rate by k. We treat the batch size of 128 and the learning rate of 0.1 as the basic setting to obtain the factor k. We can observe that the batch size of 128 works well on CIFAR-10, while the linear scaling rule can benefit the cases with other batch sizes.
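The linear scaling rule just referenced amounts to one line of arithmetic; the sketch below (with hypothetical function and argument names) reproduces the factor-k computation used for the batch-size ablation.

```python
# Sketch: linear scaling rule (Goyal et al., 2017), with batch size 128 and
# learning rate 0.1 as the base setting.
def scaled_lr(batch_size, base_lr=0.1, base_batch=128):
    k = batch_size / base_batch  # mini-batch size multiplied by k ...
    return base_lr * k           # ... -> learning rate multiplied by k

assert scaled_lr(256) == 0.2 and scaled_lr(512) == 0.4
```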
Label smoothing (LS). Shafahi et al. (2019a) propose to utilize LS to mimic adversarial training. Pang et al. (2019) also find that imposing LS on the ensemble prediction can alleviate the adversarial transferability among individual members. Unfortunately, combining LS with standard training cannot prevent the models from being evaded by adaptive attacks (Tramer et al., 2020) or larger iteration steps (Summers & Dinneen, 2018). Beyond previous observations, we further evaluate the effect of LS on adversarial training. As shown in Table 4 and Table 17, mild LS can improve robust accuracy by 0.5% ∼ 1% under the strong attacks we evaluated, including AutoAttack and PGD-1000, without affecting the clean performance. This can be regarded as the effect induced by calibrating the confidence (Stutz et al., 2020) of adversarially trained models (80% ∼ 85% accuracy on clean data). In contrast, excessive LS could degrade the robustness (e.g., LS = 0.3 vs. LS = 0.4 on ResNet-18), which is consistent with the recent observations in Jiang et al. (2020) (they use LS = 0.5). However, since LS is known for its potential gradient masking effect, we advocate careful evaluations when applying this trick to the proposed defenses, following the suggestions in Carlini et al. (2019).
Optimizer. Most AT methods apply SGD with momentum as the optimizer. The momentum factor is usually set to 0.9 with zero dampening. In other cases, Carmon et al. (2019) apply SGD with Nesterov momentum, and Rice et al. (2020) apply Adam for the cyclic learning rate schedule. We test some commonly used optimizers in Table 5, as well as the decoupled AdamW (Loshchilov & Hutter, 2019) and the recently proposed gradient centralization trick SGD-GC / SGD-GCC (Yong et al., 2020). We find that SGD-based optimizers (e.g., Mom, Nesterov, SGD-GC / SGD-GCC) have similar performance, while Adam / AdamW perform worse for the piecewise learning rate schedule.
Weight decay. As observed in Table 1, three different values of weight decay are used in previous defenses, including 1 × 10^-4, 2 × 10^-4, and 5 × 10^-4. While 5 × 10^-4 is a fairly widely used value for weight decay in deep learning, the prevalence of the value 2 × 10^-4 in the adversarial setting should stem from Madry et al. (2018). In Fig. 1(a), we report the best test accuracy under different values of weight decay. We can see that the gap in robust accuracy can be significant due to slightly different values of weight decay (e.g., up to ∼ 7% for 1 × 10^-4 vs. 5 × 10^-4). (Note that Rice et al. (2020) also investigate the effect of different weight decay, i.e., ℓ2 regularization, but they focus on a coarse value range of {5 × 10^k}, where k ∈ {−4, −3, −2, −1, 0}.) Besides, in Fig. 1(b) we plot the learning curves of test accuracy w.r.t. training epochs. Note that smaller values of weight decay make the model learn faster in the initial phase, but the overfitting phenomenon also appears earlier. In Fig. 3, we visualize the cross sections of the decision boundary. We can see that proper values of weight decay (e.g., 5 × 10^-4) can enlarge the margins from the decision boundary and improve robustness. Nevertheless, as shown in the left two columns, this effect is less significant in promoting clean accuracy. As a result, weight decay is a critical and usually neglected ingredient that largely influences the robust accuracy of adversarially trained models. In contrast, the clean accuracy is much less sensitive to weight decay, for both adversarially and standardly (shown in Fig. 5) trained models.
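The optimizer and weight decay settings above translate directly into a few lines of configuration; this is a minimal sketch of the SGD setup described here, with the weight decay value as the sensitive knob.

```python
# Sketch: SGD with momentum 0.9, zero dampening, and weight decay 5e-4
# (the value recommended by our experiments; the 1e-4 / 2e-4 values used in
# prior work can cost up to ~7% robust accuracy on CIFAR-10).
import torch

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,
    momentum=0.9,
    dampening=0,
    weight_decay=5e-4,
)
```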
Activation function. Most of the previous AT methods apply ReLU as the non-linear activation function in their models, while Xie et al. (2020) empirically demonstrate that smooth activation functions can better improve model robustness on ImageNet. Following their settings, we test whether a similar conclusion holds on CIFAR-10. By comparing the results on ReLU and Softplus in Table 6 (for PGD-AT) and Table 13 (for TRADES), we confirm that smooth activation indeed benefits model robustness for ResNet-18. However, as shown in Table 8 (for PGD-AT) and Table 9 (for TRADES), this benefit is less significant on larger models like WRN. Thus we deduce that smaller model capacity can benefit more from the smoothness of the activation function. Besides, as shown in Table 6, models trained on CIFAR-10 seem to prefer activation functions σ(x) with zero truncation, i.e., σ(x) ≥ 0. Those with negative return values like ELU, LeakyReLU, and Tanh have worse performance than ReLU.
Model architecture. Su et al. (2018) provide a comprehensive study on the robustness of standardly trained models, using different model architectures. For adversarially trained models, it has been generally recognized that larger model capacity can usually lead to better robustness (Madry et al., 2018). Recently, Guo et al. (2020) blend in the technique of AutoML to explore robust architectures. In Fig. 2, we perform similar experiments on more hand-crafted model architectures. The selected models have comparable numbers of parameters. We can observe that DenseNet achieves both the best clean and robust accuracy, while being memory-efficient (but may require longer inference time). This is consistent with the observation in Guo et al. (2020) that residual connections can benefit the AT procedure. Interestingly, Wu et al. (2020) demonstrate that residual connections allow easier generation of highly transferable adversarial examples; while this is a weakness for standardly trained models, in our case it may turn out to strengthen the adversarially trained models.
Batch normalization (BN) mode. When crafting adversarial examples in the training procedure, Zhang et al. (2019b) use eval mode for BN, while Rice et al. (2020) and Madry et al. (2018) use train mode for BN. Since the parameters in the BN layers are not updated in this process, the difference between these two modes mainly lies in the recorded moving-average BN mean and variance used in the test phase. As pointed out in Xie & Yuille (2020), properly dealing with BN layers is critical to obtaining a well-performing adversarially trained model. Thus in Table 7, we employ the train or eval mode of BN for crafting adversarial examples during training, and report the results on different model architectures to identify general rules. As seen, using eval mode for BN can increase clean accuracy while keeping comparable robustness. We also advocate the eval mode because, if we apply train mode for the multi-step PGD attack, the BN mean and variance will be recorded for every intermediate step, which could blur the adversarial distribution used by BN layers during inference.
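The BN-mode recommendation amounts to a two-line change in the PGD-AT training loop; the following is a minimal sketch, where `pgd_attack` is an assumed helper implementing the multi-step PGD of Eq. (1) in Appendix A.1, and `model`, `optimizer`, and `train_loader` are placeholders.

```python
# Sketch: use eval-mode BN statistics while crafting training adversarial
# examples, then switch back to train mode for the parameter update.
import torch.nn.functional as F

for x, y in train_loader:
    model.eval()                     # freeze BN running stats during the attack
    x_adv = pgd_attack(model, x, y)  # multi-step PGD (assumed helper)
    model.train()                    # train-mode BN for the actual update
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```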
Takeaways: (i) Slightly different values of weight decay could largely affect the robustness of trained models; (ii) Moderate label smoothing and the linear scaling rule on the l.r. for different batch sizes are beneficial; (iii) Applying eval BN mode to craft training adversarial examples can avoid blurring the distribution; (iv) Early stopping the adversarial steps or perturbation may degenerate worst-case robustness; (v) Smooth activation benefits more when the model capacity is not enough for adversarial training." }, { "heading": "3.3 COMBINATION OF TRICKS", "text": "In the above, we separately evaluate the effect of each training trick in the AT procedure. Now we investigate combining the selected useful tricks, which involve label smoothing, weight decay, activation function and BN mode. As demonstrated in Table 8, the improvements are not ideally additive when combining different tricks, while label smoothing and a smooth activation function are helpful, but not significantly, especially when we apply model architectures with a larger capacity.
We also find that the high performance of the models trained by Rice et al. (2020) partially comes from their reasonable training settings, compared to previous work. Based on these, we provide a trick list for training robust models on CIFAR-10 for reference." }, { "heading": "Baseline setting (CIFAR-10):", "text": "Batch size 128; SGD momentum optimizer; weight decay 5 × 10^-4; eval-mode BN for generating adversarial examples; warmups are not necessary; moderate label smoothing (0.1 ∼ 0.2) and a smooth activation function could be beneficial; model architecture with residual connections." }, { "heading": "3.4 RE-IMPLEMENTATION OF TRADES", "text": "As a sanity check, we re-implement TRADES to see if our conclusions derived from PGD-AT generalize, and provide the results in Table 9. We can observe that after simply changing the weight decay from 2 × 10^-4 to 5 × 10^-4, the clean accuracy of TRADES improves by ∼ 1% and the AA accuracy improves by ∼ 4%, which makes the trained model surpass the previous state-of-the-art models reported by the AutoAttack benchmark, as listed in Table 16. This fact highlights the importance of employing a standardized training setting for fair comparisons of different AT methods." }, { "heading": "3.5 EVALUATIONS ON OTHER AT FRAMEWORKS", "text": "To examine the universality of our observations on PGD-AT and TRADES, we further evaluate on other AT frameworks, including FastAT (Wong et al., 2020) and FreeAT (Shafahi et al., 2019b). We base our implementation of these methods on the FastAT code (https://github.com/locuslab/fast_adversarial). Specifically, for FastAT, we use the cyclic learning rate schedule with l_min = 0 and l_max = 0.2, training for 15 epochs. For FreeAT, we also use the cyclic learning rate schedule with l_min = 0 and l_max = 0.04, training for 24 epochs with the number of mini-batch replays set to 4. The results are provided in Table 10. We find that our observations generalize well to other AT frameworks, which verifies that the proposed baseline setting could be a decent default choice for adversarial training on CIFAR-10." }
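The cyclic learning rate schedule used here rises linearly from l_min to l_max over the first half of training and decays back to l_min over the second half; a minimal sketch of this interpolation follows (the function name and the step granularity are our own choices for illustration).

```python
# Sketch: cyclic (one-cycle) learning rate, e.g., l_min=0, l_max=0.2 over
# 15 epochs for FastAT; `step` counts optimization steps (or epochs).
def cyclic_lr(step, total_steps, lr_min=0.0, lr_max=0.2):
    half = total_steps / 2
    if step <= half:                                   # linear ramp up
        return lr_min + (lr_max - lr_min) * step / half
    return lr_max - (lr_max - lr_min) * (step - half) / half  # ramp down
```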
}, { "heading": "A.1 ADVERSARIAL ATTACKS", "text": "Since the seminal L-BFGS and FGSM attacks (Szegedy et al., 2014; Goodfellow et al., 2015), a large amount of attacking methods on generating adversarial examples have been introduced. In the white-box setting, gradient-based methods are popular and powerful, which span in the `∞ threat model (Nguyen et al., 2015; Madry et al., 2018), `2 threat model (Carlini & Wagner, 2017a), `1 threat model (Chen et al., 2018), and `0 threat model (Papernot et al., 2016). In the black-box setting, the attack strategies are much more diverse. These include transfer-based attacks (Dong et al., 2018; 2019; Cheng et al., 2019b), quasi-gradient attacks (Chen et al., 2017a; Uesato et al., 2018; Ilyas et al., 2018), and decision-based attacks (Brendel et al., 2018; Cheng et al., 2019a). Adversarial attacks can be also realized in the physical world (Kurakin et al., 2017; Song et al., 2018a). Below we formulate the PGD attack and AutoAttack that we used in our evaluations.\nPGD attack. One of the most commonly studied adversarial attack is the projected gradient descent (PGD) method (Madry et al., 2018). Let x0 be a randomly perturbed sample in the neighborhood of the clean input x, then PGD iteratively crafts the adversarial example as\nxi = clipx, (xi−1 + i · sign(∇xi−1L(xi−1, y))), (1)\nwhere clipx, (·) is the clipping function and L is the adversarial objective. The accuracy under PGD attack has been a standard metric to evaluate the model robustness.\nAutoAttack. Croce & Hein (2020b) first propose the Auto-PGD (APGD) algorithm, where the main idea is to automatically tune the adversarial step sizes according to the optimization trend. As to the adversarial objective, except for the traditional cross-entropy (CE) loss, they develop a new difference of logits ratio (DLR) loss as\nDLR(x, y) = −zy −maxi6=y zi zπ1 − zπ3 , (2)\nwhere z is the logits and π is the ordering which sorts the components of z. Finally, the authors propose to group APGDCE and APGDDLR with FAB (Croce & Hein, 2020a) and square attack (Andriushchenko et al., 2020) to form the AutoAttack (AA)." }, { "heading": "A.2 REFERENCE CODES", "text": "In Table 11, we provide the code links for the referred defenses. The summarized training settings are either described in their papers or manually retrieved by us in their code implementations." }, { "heading": "A.3 MODEL ARCHITECTURES", "text": "We select some typical hand-crafted model architectures as the objects of study, involving DenseNet (Huang et al., 2017), GoogleNet (Szegedy et al., 2015), (PreAct) ResNet (He et al., 2016), SENet (Hu et al., 2018), WRN (Zagoruyko & Komodakis, 2016), DPN (Chen et al., 2017b), ResNeXt (Xie et al., 2017), and RegNetX (Radosavovic et al., 2020). The models are implemented by https://github.com/kuangliu/pytorch-cifar.\nA.4 INFERENCE-PHASE ADVERSARIAL DEFENSES\nExcept for enhancing the models in the training phase, there are other methods that intend to improve robustness in the inference phase. These attempts include performing local linear transformation like adding Gaussian noise (Tabacof & Valle, 2016), different operations of image processing (Guo et al., 2018; Xie et al., 2018; Raff et al., 2019) or specified inference principle (Pang et al., 2020b). On the other hand, detection-based methods aim to filter out adversarial examples and resort to higher-level intervention. Although detection is a suboptimal strategy compared to classification, it can avoid over-confident wrong decisions. 
, { "heading": "A.2 REFERENCE CODES", "text": "In Table 11, we provide the code links for the referenced defenses. The summarized training settings are either described in their papers or manually retrieved by us from their code implementations." }, { "heading": "A.3 MODEL ARCHITECTURES", "text": "We select some typical hand-crafted model architectures as the objects of study, involving DenseNet (Huang et al., 2017), GoogleNet (Szegedy et al., 2015), (PreAct) ResNet (He et al., 2016), SENet (Hu et al., 2018), WRN (Zagoruyko & Komodakis, 2016), DPN (Chen et al., 2017b), ResNeXt (Xie et al., 2017), and RegNetX (Radosavovic et al., 2020). The models are implemented following https://github.com/kuangliu/pytorch-cifar." }, { "heading": "A.4 INFERENCE-PHASE ADVERSARIAL DEFENSES", "text": "Beyond enhancing the models in the training phase, there are other methods that intend to improve robustness in the inference phase. These attempts include performing local linear transformations like adding Gaussian noise (Tabacof & Valle, 2016), various image processing operations (Guo et al., 2018; Xie et al., 2018; Raff et al., 2019), or specialized inference principles (Pang et al., 2020b). On the other hand, detection-based methods aim to filter out adversarial examples and resort to higher-level intervention. Although detection is a suboptimal strategy compared to classification, it can avoid over-confident wrong decisions. These efforts include training auxiliary classifiers to detect adversarial inputs (Metzen et al., 2017), designing detection statistics (Feinman et al., 2017; Ma et al., 2018; Pang et al., 2018a), or building on additional probabilistic models (Song et al., 2018b)." }, { "heading": "A.5 CONCURRENT WORK", "text": "Gowal et al. (2020) also provide a comprehensive study on different training tricks of AT, and push forward the state-of-the-art performance of adversarially trained models on MNIST, CIFAR-10 and CIFAR-100. While they analyze some properties that we also analyze in this paper (such as training batch size, label smoothing, weight decay, activation functions), they also complement our analyses with experiments on, e.g., weight moving average and data quality. Both of our works reveal the importance of training details in the process of AT, and contribute to establishing more justified perspectives for evaluating AT methods." }, { "heading": "B ADDITIONAL RESULTS", "text": "In this section, we provide additional results to further support the conclusions in the main text." }, { "heading": "B.1 EARLY DECAYING THE LEARNING RATE", "text": "As shown in Fig. 1, smaller values of weight decay make the training faster but also tend to overfit more. So in Fig. 4, we decay the learning rate early, at epochs 40 and 45, rather than at epochs 100 and 105. We can see that the models achieve the same clean accuracy, but the weight decay of 5 × 10^-4 can still achieve better robustness. Besides, in Fig. 5, we use different values of weight decay for standard training, where the models also achieve similar clean accuracy. These results demonstrate that adversarial robustness is a more difficult target than clean performance, and is more sensitive to the training hyperparameters, for both standardly and adversarially trained models." }, { "heading": "B.2 THE EFFECT OF SMOOTH ACTIVATION FUNCTION", "text": "In Table 13, we test the effect of Softplus and the BN mode on ResNet-18." }, { "heading": "B.3 RESULTS OF EARLY STOPPING, WARMUP, AND OPTIMIZERS ON WRN-34-10", "text": "In Table 14 and Table 15, we provide the results on WRN-34-10." }, { "heading": "B.4 RANK IN THE AUTOATTACK BENCHMARK", "text": "The models evaluated in this paper are all retrained based on the released codes (Zhang et al., 2019b; Rice et al., 2020). Now we compare our trained models with the AutoAttack public benchmark, where the results of previous work are based on the released pretrained models. In Table 16, we retrieve our results from Table 9 on the TRADES model, where we simply change the weight decay from 2 × 10^-4 to 5 × 10^-4. We can see that this seemingly unimportant difference sends the TRADES model back to the state-of-the-art position in the benchmark." }, { "heading": "B.5 MORE EVALUATIONS ON LABEL SMOOTHING", "text": "In Table 17, we further investigate the effect of label smoothing on adversarial training." } ]
2021
null
SP:4d41be9a2f6e949a140b7a81dd85cadaabba63ef
[ "The paper presents a framework to reduce internal redundancy in the video recognition model. To do so, given the input frames, the framework predicts two scaling factors to conduct temporal and channel dimension reduction. The remaining part is reconstructed by cheap operations. The authors show that the framework achieves favorable results on several benchmarks." ]
Performing inference on deep learning models for videos remains a challenge due to the large amount of computational resources required to achieve robust recognition. An inherent property of real-world videos is the high correlation of information across frames which can translate into redundancy in either temporal or spatial feature maps of the models, or both. The type of redundant features depends on the dynamics and type of events in the video: static videos have more temporal redundancy while videos focusing on objects tend to have more channel redundancy. Here we present a redundancy reduction framework, termed VA-RED, which is input-dependent. Specifically, our VA-RED framework uses an input-dependent policy to decide how many features need to be computed for temporal and channel dimensions. To keep the capacity of the original model, after fully computing the necessary features, we reconstruct the remaining redundant features from those using cheap linear operations. We learn the adaptive policy jointly with the network weights in a differentiable way with a shared-weight mechanism, making it highly efficient. Extensive experiments on multiple video datasets and different visual tasks show that our framework achieves 20% − 40% reduction in computation (FLOPs) when compared to state-of-the-art methods without any performance loss. Project page: http://people.csail.mit.edu/bpan/va-red/.
[ { "affiliations": [], "name": "Bowen Pan" }, { "affiliations": [], "name": "Rameswar Panda" }, { "affiliations": [], "name": "Camilo Fosco" }, { "affiliations": [], "name": "Chung-Ching Lin" }, { "affiliations": [], "name": "Alex Andonian" }, { "affiliations": [], "name": "Yue Meng" }, { "affiliations": [], "name": "Kate Saenko" }, { "affiliations": [], "name": "Aude Oliva" }, { "affiliations": [], "name": "Rogerio Feris" } ]
[ { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster models", "venue": "arXiv preprint arXiv:1511.06297,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Guilhem Chéron", "Ivan Laptev", "Cordelia Schmid" ], "title": "P-cnn: Pose-based cnn features for action recognition", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Ali Diba", "Vivek Sharma", "Luc Van Gool", "Rainer Stiefelhagen" ], "title": "Dynamonet: Dynamic action and motion network", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Efficient multi-objective neural architecture search via lamarckian evolution", "venue": "arXiv preprint arXiv:1804.09081,", "year": 2018 }, { "authors": [ "Linxi Fan", "Shyamal Buch", "Guanzhi Wang", "Ryan Cao", "Yuke Zhu", "Juan Carlos Niebles", "Li Fei-Fei" ], "title": "Rubiksnet: Learnable 3d-shift for efficient video action recognition", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Quanfu Fan", "Chun-Fu Richard Chen", "Hilde Kuehne", "Marco Pistoia", "David Cox" ], "title": "More is less: Learning efficient video representations by big-little network and depthwise temporal aggregation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christoph Feichtenhofer" ], "title": "X3d: Expanding architectures for efficient video recognition", "venue": "arXiv preprint arXiv:2004.04730,", "year": 2020 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Richard P Wildes" ], "title": "Spatiotemporal multiplier networks for video action recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition, 2018", "venue": null, "year": 2018 }, { "authors": [ "Michael Figurnov", "Maxwell D Collins", "Yukun Zhu", "Li Zhang", "Jonathan Huang", "Dmitry Vetrov", "Ruslan Salakhutdinov" ], "title": "Spatially adaptive computation time for residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ruohan Gao", "Tae-Hyun Oh", "Kristen Grauman", "Lorenzo Torresani" ], "title": "Listen to look: Action recognition by previewing audio", "venue": "arXiv preprint arXiv:1912.04487,", "year": 2019 }, { "authors": [ "Georgia Gkioxari", "Jitendra Malik" ], "title": "Finding action tubes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": 
[ "Alex Graves" ], "title": "Adaptive computation time for recurrent neural networks", "venue": "arXiv preprint arXiv:1603.08983,", "year": 2016 }, { "authors": [ "Yunhui Guo", "Honghui Shi", "Abhishek Kumar", "Kristen Grauman", "Tajana Rosing", "Rogerio Feris" ], "title": "Spottune: transfer learning through adaptive fine-tuning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Weizhe Hua", "Yuan Zhou", "Christopher M De Sa", "Zhiru Zhang", "G Edward Suh" ], "title": "Channel gating neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Hueihan Jhuang", "Juergen Gall", "Silvia Zuffi", "Cordelia Schmid", "Michael J Black" ], "title": "Towards understanding action recognition", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2013 }, { "authors": [ "Andrej Karpathy", "George Toderici", "Sanketh Shetty", "Thomas Leung", "Rahul Sukthankar", "Li Fei-Fei" ], "title": "Large-scale video classification with convolutional neural networks", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Okan Köpüklü", "Xiangyu Wei", "Gerhard Rigoll" ], "title": "You only watch once: A unified cnn architecture for real-time spatiotemporal action localization", "venue": null, "year": 2019 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Scsampler: Sampling salient clips from video 
for efficient action recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estı́baliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "Hmdb: a large video database for human motion recognition", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Xinyu Li", "Bing Shuai", "Joseph Tighe" ], "title": "Directional temporal modeling for action recognition", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Tsm: Temporal shift module for efficient video understanding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Mason McGill", "Pietro Perona" ], "title": "Deciding how to decide: Dynamic routing in artificial neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yue Meng", "Chung-Ching Lin", "Rameswar Panda", "Prasanna Sattigeri", "Leonid Karlinsky", "Aude Oliva", "Kate Saenko", "Rogerio Feris" ], "title": "Ar-net: Adaptive frame resolution for efficient action", "venue": null, "year": 2020 }, { "authors": [ "Yue Meng", "Rameswar Panda", "Chung-Ching Lin", "Prasanna Sattigeri", "Leonid Karlinsky", "Kate Saenko", "Aude Oliva", "Rogerio Feris" ], "title": "Adafuse: Adaptive temporal fusion network for efficient action recognition", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Mathew Monfort", "Alex Andonian", "Bolei Zhou", "Kandan Ramakrishnan", "Sarah Adel Bargal", "Tom Yan", "Lisa Brown", "Quanfu Fan", "Dan Gutfreund", "Carl Vondrick" ], "title": "Moments in time dataset: one million videos for event understanding", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Bowen Pan", "Wuwei Lin", "Xiaolin Fang", "Chaoqin Huang", "Bolei Zhou", "Cewu Lu" ], "title": "Recurrent residual module for fast inference in videos", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay 
Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Surat Teerapittayanon", "Bradley McDanel", "Hsiang-Tsung Kung" ], "title": "Branchynet: Fast inference via early exiting from deep neural networks", "venue": "In 2016 23rd International Conference on Pattern Recognition (ICPR),", "year": 2016 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video classification with channelseparated convolutional networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Andreas Veit", "Serge Belongie" ], "title": "Convolutional networks with adaptive inference graphs", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Heng Wang", "Du Tran", "Lorenzo Torresani", "Matt Feiszli" ], "title": "Video modeling with correlation networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks: Towards good practices for deep action recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zuxuan Wu", "Tushar Nagarajan", "Abhishek Kumar", "Steven Rennie", "Larry S Davis", "Kristen Grauman", "Rogerio Feris" ], "title": "Blockdrop: Dynamic inference paths in residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zuxuan Wu", "Caiming Xiong", "Chih-Yao Ma", "Richard Socher", "Larry S Davis" ], "title": "Adaframe: Adaptive frame selection for fast video recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Saining Xie", "Chen Sun", "Jonathan Huang", "Zhuowen Tu", "Kevin Murphy" ], "title": "Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ceyuan Yang", "Yinghao Xu", "Jianping Shi", "Bo Dai", "Bolei Zhou" ], "title": "Temporal pyramid network for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Serena Yeung", "Olga Russakovsky", "Greg Mori", 
"Li Fei-Fei" ], "title": "End-to-end learning of action detection from frame glimpses in videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Fisher Yu", "Vladlen Koltun", "Thomas Funkhouser" ], "title": "Dilated residual networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Hengshuang Zhao", "Jianping Shi", "Xiaojuan Qi", "Xiaogang Wang", "Jiaya Jia" ], "title": "Pyramid scene parsing network", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Hang Zhao", "Xavier Puig", "Sanja Fidler", "Adela Barriuso", "Antonio Torralba" ], "title": "Scene parsing through ade20k dataset", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Alex Andonian", "Aude Oliva", "Antonio Torralba" ], "title": "Temporal relational reasoning in videos", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bolei Zhou", "Hang Zhao", "Xavier Puig", "Tete Xiao", "Sanja Fidler", "Adela Barriuso", "Antonio Torralba" ], "title": "Semantic understanding of scenes through the ade20k dataset", "venue": "International Journal on Computer Vision,", "year": 2018 }, { "authors": [ "Yizhou Zhou", "Xiaoyan Sun", "Zheng-Jun Zha", "Wenjun Zeng" ], "title": "Mict: Mixed 3d/2d convolutional tube for human action recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Mohammadreza Zolfaghari", "Kamaljeet Singh", "Thomas Brox" ], "title": "Eco: Efficient convolutional network for online video understanding", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zisserman" ], "title": "site, resulting in some difference of results. C REDUNDANCY ANALYSIS To motivate our redundancy reduction approach, we measure and visualize the internal redundancy of well known pretrained networks. We analyze the internal feature maps of existing pre-trained I3DInceptionV2 and R(2+1)D networks on Moments in Time and Kinetics", "venue": "For each model-dataset pair,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large computationally expensive models based on 2D/3D convolutional neural networks (CNNs) are widely used in video understanding (Tran et al., 2015; Carreira & Zisserman, 2017; Tran et al., 2018). Thus, increasing computational efficiency is highly sought after (Feichtenhofer, 2020; Zhou et al., 2018c; Zolfaghari et al., 2018). However, most of these efficient approaches focus on architectural changes in order to maximize network capacity while maintaining a compact model (Zolfaghari et al., 2018; Feichtenhofer, 2020) or improving the way that the network consumes temporal information (Feichtenhofer et al., 2018; Korbar et al., 2019). Despite promising results, it is well known that CNNs perform unnecessary computations at some levels of the network (Han et al., 2015a; Howard et al., 2017; Sandler et al., 2018; Feichtenhofer, 2020; Pan et al., 2018), especially for video models since the high appearance similarity between consecutive frames results in a large amount of redundancy.\nIn this paper, we aim at dynamically reducing the internal computations of popular video CNN architectures. Our motivation comes from the existence of highly similar feature maps across both time and channel dimensions in video models. Furthermore, this internal redundancy varies depending on the input: for instance, static videos will have more temporal redundancy whereas videos depicting a single large object moving tend to produce a higher number of redundant feature maps. To reduce the varied redundancy across channel and temporal dimensions, we introduce an input-dependent redundancy reduction framework called VA-RED2 (Video Adaptive REDundancy REDuction) for efficient video recognition (see Figure 1 for an illustrative example). Our method is model-agnostic and hence can be applied to any state-of-the-art video recognition networks.\nThe key mechanism that VA-RED2 uses to increase efficiency is to replace full computations of some redundant feature maps with cheap reconstruction operations. Specifically, our framework avoids computing all the feature maps. Instead, we choose to only calculate those non-redundant part of feature maps and reconstruct the rest using cheap linear operations from the non-redundant\nfeatures maps. In addition, VA-RED2 makes decisions on a per-input basis: our framework learns an input-dependent policy that defines a ”full computation ratio” for each layer of a 2D/3D network. This ratio determines the amount of features that will be fully computed at that layer, versus the features that will be reconstructed from the non-redundant feature maps. Importantly, we apply this strategy on both time and channel dimensions. We show that for both traditional video models such as I3D (Carreira & Zisserman, 2017), R(2+1)D (Tran et al., 2018), and more advanced models such as X3D (Feichtenhofer, 2020), this method significantly reduces the total floating point operations (FLOPs) on common video datasets without accuracy degradation.\nThe main contributions of our work includes: (1) A novel input-dependent adaptive framework for efficient video recognition, VA-RED2, that automatically decides what feature maps to compute per input instance. Our approach is in contrast to most current video processing networks, where feature redundancy across both time and channel dimensions is not directly mitigated. 
(2) An adaptive policy jointly learned with the network weights in a fully differentiable way, with a shared-weight mechanism, that allows us to decide how many feature maps to compute. Our approach is model-agnostic and can be applied to any backbone to reduce feature redundancy in both the time and channel domains. (3) Striking results of VA-RED2 over baselines, with a 30% reduction in computation compared to R(2+1)D (Tran et al., 2018), 40% over I3D-InceptionV2 (Carreira & Zisserman, 2017), and about 20% over the recently proposed X3D-M (Feichtenhofer, 2020), without any performance loss, on the video action recognition task. The superiority of our approach is extensively tested on three video recognition datasets (Mini-Kinetics-200, Kinetics-400 (Carreira & Zisserman, 2017), and Moments-In-Time (Monfort et al., 2019)) and one spatio-temporal action localization dataset (J-HMDB-21 (Jhuang et al., 2013)). (4) A generalization of our framework to video action recognition, spatio-temporal localization, and semantic segmentation tasks, achieving promising results while offering a significant reduction in computation over competing methods." }, { "heading": "2 RELATED WORK", "text": "Efficiency in Video Understanding Models. Video understanding has made significant progress in recent years, mainly due to the adoption of convolutional neural networks, in the form of 2D CNNs (Karpathy et al., 2014; Simonyan & Zisserman, 2014; Chéron et al., 2015; Feichtenhofer et al., 2017; Gkioxari & Malik, 2015; Wang et al., 2016; Zhou et al., 2018a; Lin et al., 2019; Fan et al., 2019) or 3D CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018; Tran et al., 2018). Despite promising results on common benchmarks, there is significant interest in developing more efficient techniques and smaller models with reasonable performance. Previous works have reduced computational complexity by using hybrid 2D-3D architectures (Xie et al., 2018; Zhou et al., 2018c; Zolfaghari et al., 2018), group convolution (Tran et al., 2019), or selecting salient clips (Korbar et al., 2019). Feichtenhofer et al. (Feichtenhofer et al., 2018) propose a dedicated low-framerate pathway. Expanding 2D architectures through a stepwise expansion approach over key variables such as temporal duration, frame rate, spatial resolution, and network width was recently proposed in (Feichtenhofer, 2020). Diba et al. (Diba et al., 2019) learn the motion dynamics of videos with a self-supervised task for video understanding. Fan et al. (Fan et al., 2020) incorporate an efficient learnable 3D-shift module into a 3D video network. Wang et al. (Wang et al., 2020) devise a correlation module to learn correlation along the temporal dimension. Li et al. (Li et al., 2020) encode clip-level ordered temporal information with a CIDC network. While these approaches bring considerable efficiency improvements, none of them dynamically calibrates the required feature map computations on a per-input basis. Our framework achieves substantial improvements in average efficiency by avoiding redundant feature map computation depending on the input. Adaptive Inference. Many adaptive computation methods have recently been proposed with the goal of improving efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018; Graves, 2016; Meng et al., 2021). 
Several works add decision branches to different layers of CNNs to learn whether to exit the network early for faster inference (Yu et al., 2018; Figurnov et al., 2017; McGill & Perona, 2017; Teerapittayanon et al., 2016). Wang et al. (Wang et al., 2018) propose to skip convolutional blocks on a per-input basis using reinforcement learning and supervised pre-training. Veit et al. (Veit & Belongie, 2018) propose a block skipping method controlled by samples from a Gumbel softmax, while Wu et al. (Wu et al., 2018) develop a reinforcement learning approach to achieve this goal. Adaptive computation time for recurrent neural networks is also presented in (Graves, 2016). SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. A few works select salient frames conditioned on the input (Yeung et al., 2016; Wu et al., 2019; Korbar et al., 2019; Gao et al., 2019) while recognizing actions in long untrimmed videos. Different from adaptive data sampling (Yeung et al., 2016; Wu et al., 2019; Korbar et al., 2019; Gao et al., 2019), our goal in this paper is to remove feature map redundancy by deciding how many features need to be computed along the temporal and channel dimensions on a per-input basis, for efficient video recognition. AR-Net (Meng et al., 2020) recently learns to adaptively choose the resolution of input frames with several individual backbone networks for video inference. In contrast, our method focuses on reducing redundancy in both the temporal and channel dimensions and is applicable to both 3D and 2D models, while AR-Net applies only to 2D models and focuses on spatial resolution. Moreover, our method integrates all the inference routes into a single model that is almost the same size as the original base model. Thus our model is significantly smaller than AR-Net in terms of the number of model parameters. Neural Architecture Search. Our network learns the best internal redundancy reduction scheme, which is similar to previous work on automatically searching architectures (Elsken et al., 2018). Liu et al. (Liu et al., 2018) formulate the architecture search task in a differentiable manner; Cai et al. (Cai et al., 2018) directly learn architectures for a target task and hardware; Tan et al. (Tan & Le, 2019) design a compound scaling strategy that searches through several key dimensions for CNNs (depth, width, resolution). Finally, Tan et al. (Tan et al., 2019) incorporate latency to find efficient networks adapted for mobile use. In contrast, our approach learns a policy that chooses between full and reduced convolutions at inference time, effectively switching between various discovered subnetworks to minimize redundant computations and deliver high accuracy." }, { "heading": "3 VIDEO ADAPTIVE REDUNDANCY REDUCTION", "text": "Our main goal is to automatically decide which feature maps to compute for each input video in order to classify it correctly with the minimum computation. The intuition behind our proposed method is that there are many similar feature maps along the temporal and channel dimensions. For each video instance, we estimate the ratio of feature maps that need to be fully computed along the temporal dimension and the channel dimension. Then, we reconstruct the remaining feature maps from the pre-computed ones using cheap linear operations.
Approach Overview. 
Without loss of generality, we start from a 3D convolutional network $G$ and denote its $l$-th 3D convolution layer as $f_l$, with corresponding input and output feature maps $X_l$ and $Y_l$ respectively. For each 3D convolution layer, we use a very lightweight policy layer $p_l$, denoted as a soft modulation gate, to decide the ratio of feature maps along the temporal and channel dimensions that need to be computed. As shown in Figure 2, for temporal-wise dynamic inference, we reduce the computation of the 3D convolution layer by dynamically scaling the temporal stride of the 3D filter with a factor $R = 2^{p_l(X_l)[0]}$. Thus the shape of the output $Y'_l$ becomes $C_{out} \times T_o/R \times H_o \times W_o$. To keep the same output shape, we reconstruct the remaining features based on $Y'_l$ as

$$Y_l[j + iR] = \begin{cases} \Phi^t_{i,j}(Y'_l[i]) & \text{if } j \in \{1, \dots, R-1\} \\ Y'_l[i] & \text{if } j = 0 \end{cases}, \quad i \in \{0, 1, \dots, T_o/R - 1\}, \qquad (1)$$

where $Y_l[j + iR]$ represents the $(j + iR)$-th feature map of $Y_l$ along the temporal dimension, $Y'_l[i]$ denotes the $i$-th feature map of $Y'_l$, and $\Phi^t_{i,j}$ is the cheap linear operation along the temporal dimension. The total computational cost of this process can be written as

$$C(f^t_l) = \frac{1}{R} \cdot C(f_l) + \sum_{i,j} C(\Phi^t_{i,j}) \approx \frac{1}{R} \cdot C(f_l), \qquad (2)$$

where the function $C(\cdot)$ returns the computational cost of a specific operation, and $f^t_l$ represents our dynamic convolution process along the temporal dimension. Different from temporal-wise dynamic inference, we reduce the channel-wise computation by dynamically controlling the number of output channels. We scale the output channel number with a factor $r = (\frac{1}{2})^{p_l(X_l)[1]}$. In this case, the shape of the output $Y'_l$ is $rC_{out} \times T_o \times H_o \times W_o$. As before, we reconstruct the remaining features via cheap linear operations, which can be formulated as $Y_l = [Y'_l, \Phi^c(Y'_l)]$, where $\Phi^c(Y'_l) \in \mathbb{R}^{(1-r)C_{out} \times T_o \times H_o \times W_o}$ represents the cheaply generated feature maps along the channel dimension, and $Y_l \in \mathbb{R}^{C_{out} \times T_o \times H_o \times W_o}$ is the output of the channel-wise dynamic inference. The total computational cost of joint temporal-wise and channel-wise dynamic inference is

$$C(f^{t,c}_l) \approx \frac{r}{R} \cdot C(f_l), \qquad (3)$$

where $f^{t,c}_l$ is the adjunct process of temporal-wise and channel-wise dynamic inference.

Soft Modulation Gate for Differentiable Optimization. We adopt an extremely lightweight policy layer $p_l$, called the soft modulation gate, for each convolution layer $f_l$ to modulate the ratio of features that need to be computed. Specifically, the soft modulation gate takes the input feature maps $X_l$ as input and learns two probability vectors $V^l_t \in \mathbb{R}^{S_t}$ and $V^l_c \in \mathbb{R}^{S_c}$, where $S_t$ and $S_c$ are the temporal and channel search space sizes respectively. $V^l_t$ and $V^l_c$ are learned by

$$[V^l_t, V^l_c] = p_l(X_l) = \phi(\mathcal{F}(\omega_{p,2}, \delta(\mathcal{N}(\mathcal{F}(\omega_{p,1}, \mathcal{G}(X_l))))) + \beta^l_p), \qquad (4)$$

where $\mathcal{F}(\cdot, \cdot)$ denotes a fully-connected layer, $\mathcal{N}$ is batch normalization, $\delta(\cdot)$ is the $\tanh(\cdot)$ function, $\mathcal{G}$ is a global pooling operation whose output shape is $C_{in} \cdot T \times 1 \times 1$, $\phi(\cdot)$ is the output activation function, for which we simply use $\max(\tanh(\cdot), 0)$ with output range $[0, 1)$, $\omega_{p,1} \in \mathbb{R}^{D_h \times C_{in} \cdot T}$ and $\omega_{p,2} \in \mathbb{R}^{(S_t + S_c) \times D_h}$ are the weights of the corresponding layers, and $D_h$ is the hidden dimension. $V^l_t$ and $V^l_c$ are then used to modulate the ratio of the feature maps to be computed in the temporal-wise and channel-wise dynamic convolutions. 
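To make Eqs. (1) and (4) concrete, the sketch below shows one way the soft modulation gate and the temporal reconstruction could be implemented in PyTorch (the framework used in our experiments). It is a minimal illustration under our own naming, not the released implementation: SoftModulationGate, temporal_reconstruct, and the cheap_op callable are hypothetical, and a single shared cheap linear operation stands in for the per-position operations $\Phi^t_{i,j}$.

```python
import torch
import torch.nn as nn

class SoftModulationGate(nn.Module):
    # Sketch of the policy layer p_l in Eq. (4): global spatial pooling,
    # FC -> BatchNorm -> tanh -> FC, then phi(.) = max(tanh(.), 0).
    def __init__(self, c_in, t, d_h, s_t, s_c):
        super().__init__()
        self.fc1 = nn.Linear(c_in * t, d_h)
        self.bn = nn.BatchNorm1d(d_h)
        self.fc2 = nn.Linear(d_h, s_t + s_c)   # bias plays the role of beta_p
        self.s_t = s_t

    def forward(self, x):                       # x: (B, C_in, T, H, W)
        g = x.mean(dim=(3, 4)).flatten(1)       # global pooling -> (B, C_in * T)
        v = self.fc2(torch.tanh(self.bn(self.fc1(g))))
        v = torch.clamp(torch.tanh(v), min=0.0) # output range [0, 1)
        return v[:, :self.s_t], v[:, self.s_t:] # (V_t, V_c)

def temporal_reconstruct(y_strided, cheap_op, R):
    # Sketch of Eq. (1): interleave each fully computed frame (j = 0) with
    # R - 1 cheaply reconstructed frames (j = 1, ..., R - 1).
    frames = []
    for i in range(y_strided.shape[2]):         # y_strided: (B, C, T_o / R, H, W)
        frames.append(y_strided[:, :, i])
        for j in range(1, R):
            frames.append(cheap_op(y_strided[:, :, i]))
    return torch.stack(frames, dim=2)           # (B, C, T_o, H, W)
```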
During training, we obtain the final output of the dynamic convolution as a weighted sum of all the feature maps, which contain different ratios of fully-computed features, as follows:

$$Y^l_c = \sum_{i=1}^{S_c} V^l_c[i] \cdot f^c_l(X_l, r = (\tfrac{1}{2})^{(i-1)}), \qquad Y_l = \sum_{j=1}^{S_t} V^l_t[j] \cdot f^t_l(Y^l_c, R = 2^{(j-1)}), \qquad (5)$$

where $f^c_l(\cdot, r)$ is the channel-wise dynamic convolution with channel scaling factor $r$, and $f^t_l(\cdot, R)$ is the temporal-wise dynamic convolution with temporal stride scaling factor $R$. During the inference phase, only the dynamic convolutions whose weights are not zero are computed.

Shared-weight Training and Inference. Many works in adaptive computation and neural architecture search suffer from very heavy computational cost and memory usage during the training stage due to the large search space. In our case, under a naive implementation, the training computational cost and parameter size would grow linearly as the search space size increases. To train our model efficiently, we utilize a weight-sharing mechanism to reduce the computational cost and training memory. To be specific, we first compute all the possibly necessary features using a big kernel. Then, for each dynamic convolution with a different scaling factor, we sample its corresponding ratio of necessary features and reconstruct the remaining features with cheap operations to get the final output. Through this, we are able to keep the computational cost at a constant value invariant to the search space. More details on this are included in Section B of the Appendix.

Efficiency Loss. To encourage our network to output a computationally efficient subgraph, we introduce the efficiency loss $L_e$ during the training process, which can be formulated as

$$L_e = \left( \mu_0 \sum_{l=1}^{L} \frac{C(f_l)}{\sum_{k=1}^{L} C(f_k)} \cdot \frac{r^s_l}{R^s_l} \right)^2, \qquad \mu_0 = \begin{cases} 1 & \text{if correct} \\ 0 & \text{otherwise} \end{cases}, \qquad (6)$$

where $r^s_l$ is the channel scaling factor of the largest filter in the series of channel-wise dynamic convolutions, and $R^s_l$ is the stride scaling factor of the largest filter of the temporal-wise dynamic convolutions. Overall, the loss function of our whole framework can be written as $L = L_a + \lambda_e L_e$, where $L_a$ is the accuracy loss of the whole network and $\lambda_e$ is the weight of the efficiency loss, which balances the optimization of prediction accuracy and computational cost." }, { "heading": "4 EXPERIMENTS", "text": "Datasets. We conduct our video action recognition experiments on three standard benchmarks: Mini-Kinetics-200, Kinetics-400, and Moments-In-Time. Mini-Kinetics-200 (assembled by (Meng et al., 2020)) is a subset of the full Kinetics dataset (Carreira & Zisserman, 2017) containing 121k videos for training and 10k videos for testing across 200 action classes. The Moments-In-Time dataset has 802,244 videos in training and 33,900 videos in validation across 339 categories. To show generalization to a different task, we also conduct video spatio-temporal action localization on J-HMDB-21 (Jhuang et al., 2013). J-HMDB-21 is a subset of the HMDB dataset (Kuehne et al., 2011) which has 928 short videos with 21 action categories. We report results on the first split. For semantic segmentation experiments, we use the ADE20K dataset (Zhou et al., 2017; 2018b), containing 20k images for training and 2k images for validation. ADE20K is a densely labeled image dataset where objects and object parts are segmented down to pixel level. We report results on the validation set.

Model Architectures. 
We evaluate our method on three of the most widely-used model architectures: I3D (Carreira & Zisserman, 2017), R(2+1)D (Tran et al., 2018), and the recent efficient model X3D (Feichtenhofer, 2020). We consider I3D-InceptionV2 (denoted as I3D below) and R(2+1)D-18 (denoted as R(2+1)D below) as our base models. In our implementation of X3D, we remove all swish non-linearities (Ramachandran et al., 2017) except those in the SE layers (Hu et al., 2018) to save training memory and speed up inference on GPU. We choose X3D-M (denoted as X3D below) as our base model and demonstrate that our method is generally effective across datasets.

Implementation Details. We train and evaluate our baseline models mainly following the settings in their original papers (Tran et al., 2018; Xie et al., 2018; Feichtenhofer, 2020). We train all our base and dynamic models for 120 epochs on Mini-Kinetics-200 and Kinetics-400, and 60 epochs on Moments-In-Time. We use a mini-batch size of 12 clips per GPU and adopt synchronized SGD with a cosine learning rate decay strategy (Loshchilov & Hutter, 2016) to train all our models. Dynamic models are finetuned with the efficiency loss for 40/20 epochs to reduce the density of the inference graph while maintaining the accuracy. During finetuning, we set λe to 0.8 and the learning rate to 0.01 for R(2+1)D and 0.1 for I3D and X3D. For testing, we adopt the K-LeftCenterRight strategy: K temporal clips are uniformly sampled from the whole video, on which we sample the left, center, and right crops along the longer spatial axis; the final prediction is obtained by averaging these 3×K clip predictions. We set K = 10 on Mini-Kinetics-200 and Kinetics-400 and K = 3 on Moments-In-Time. More implementation details are included in Section B of the Appendix. For video spatio-temporal action localization, we adopt the YOWO architecture of (Köpüklü et al., 2019) and replace the 2D branch with the 3D backbone for a direct comparison. We freeze the parameters of the 3D backbone as suggested in (Köpüklü et al., 2019) due to the small number of training videos in J-HMDB-21 (Jhuang et al., 2013). The rest of the network is optimized by SGD with an initial learning rate of 10−4. The learning rate is reduced with a decay factor of 0.5 at 10k, 20k, 30k, and 40k iterations. For semantic segmentation, we conduct experiments using PSPNet (Zhao et al., 2017), with dilated ResNet-18 (Yu et al., 2017; He et al., 2016) as our backbone architecture. As PSPNet is devised for image semantic segmentation, we only apply the channel-wise redundancy reduction to the model and adopt synchronized SGD training for 100k iterations across 4 GPUs with 2 images on each GPU. The learning rate decay follows the cosine learning rate schedule (Loshchilov & Hutter, 2016).

Results on Video Action Recognition. We first evaluate our method by applying it to R(2+1)D-18 (Tran et al., 2018) with different numbers of input frames and different search space sizes. Here we use GFLOPs (floating point operations) to measure the computational cost of the model and report clip-1, video-1, and video-5 metrics to measure the accuracy of our models, where clip-1 is the top-1 accuracy of model evaluation with only one clip sampled from the video, and video-1 and video-5 are the top-1 and top-5 accuracy of the model evaluated with the K-LeftCenterRight strategy. Note that we report the FLOPs of a single video clip at spatial resolution 256×256 (for I3D and X3D) or 128×128 (for R(2+1)D). 
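The following sketch illustrates how the K-LeftCenterRight protocol described above could be implemented. It is illustrative only: the function names are our own, and it assumes each clip's shorter spatial side has already been resized to the crop size.

```python
import torch

def three_crops_along_longer_axis(clip, size):
    # clip: (C, T, H, W); square crops at the left/top, center, right/bottom.
    _, _, h, w = clip.shape
    if w >= h:
        offsets = [0, (w - size) // 2, w - size]
        return [clip[:, :, :, o:o + size] for o in offsets]
    offsets = [0, (h - size) // 2, h - size]
    return [clip[:, :, o:o + size, :] for o in offsets]

def k_left_center_right_eval(model, clips, size):
    # clips: list of K uniformly sampled clips, each (C, T, H, W).
    # Returns the average of the 3 * K softmax predictions.
    preds = []
    model.eval()
    with torch.no_grad():
        for clip in clips:
            for crop in three_crops_along_longer_axis(clip, size):
                preds.append(model(crop.unsqueeze(0)).softmax(dim=-1))
    return torch.stack(preds).mean(dim=0)
```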
In addition, we report the speed of each model with the metric of clips/second, which denotes the number of video clips processed in one second. We use PyTorch 1.6, CUDA 11.0, and a single NVIDIA TITAN RTX (24GB) GPU as our testbed to measure the speed of different models. Table 1 shows the results (in all of the tables, ✗ represents the original fixed model architecture while ✓ denotes the dynamic model trained using our proposed approach). Our proposed approach VA-RED2 significantly reduces the computational cost while improving the accuracy. We observe that the dynamic model with a search space size of 2 has the best performance in terms of accuracy, GFLOPs, and speed. We further test VA-RED2 with all three model architectures, R(2+1)D-18, I3D-InceptionV2, and X3D-M (Table 2), including the very recent temporal pyramid module (Yang et al., 2020) and correlation module (Wang et al., 2020), on the Mini-Kinetics-200 dataset. We choose R(2+1)D-18 with TPN and CorrNet as the backbone architectures and test the performance of our method using a search space of 2 in Table 3 and Table 4 (Left) respectively. Table 2 shows that our method boosts the speed of the base I3D-InceptionV2 and R(2+1)D models by 21.7% and 10.6% respectively, showing its advantages not only in terms of GFLOPs but also in actual speed. Table 4 (Left) shows that our dynamic approach also outperforms the baseline CorrNet by 1.8% in top-1 video accuracy, while reducing the computational cost by 25.2% on Mini-Kinetics-200. Furthermore, we compare our method with AR-Net (Meng et al., 2020), a recent adaptive method that selects optimal input resolutions for video inference. We conduct our experiments on 16-frame TSN (Wang et al., 2016) with a ResNet50 backbone and provide the comparison on FLOPs, parameter size, and accuracy (Table 4 (Right)). To make a fair comparison, we train AR-Net using the official implementation on the same Mini-Kinetics-200 dataset with Kaiming initialization (He et al., 2015). Table 4 (Right) shows that our method, VA-RED2, outperforms AR-Net in both accuracy and GFLOPs, while using about 62% fewer parameters. Table 5 and Table 6 show the results of different methods on Kinetics-400 and Moments-In-Time, respectively. To summarize, we observe that VA-RED2 consistently improves the performance of all the base models, including the recent architectures X3D, TPN, and CorrNet, while offering a significant reduction in computation. Moreover, our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of action recognition architectures. From the comparison among different models, we find that our proposed VA-RED2 achieves the largest computation reduction on I3D-InceptionV2, between 40% and 50%, while reducing less than 20% on X3D-M. This is because X3D-M is already very efficient in terms of both the channel and temporal dimensions. Notice that the frames input to X3D-M are at a temporal stride of 5, which makes them share less similarity. Furthermore, we observe that dynamic I3D-InceptionV2 has very little variation in computation across different input instances. This could be because of the topology of InceptionV2, which has many parallel structures inside the network architecture.
We also compare VA-RED2 with a weight-level pruning method (Han et al., 2015b) and an automatic channel pruning method (CGNet) (Hua et al., 2019) on Mini-Kinetics-200. 
Table 7 shows that our approach significantly outperforms the weight-level pruning method by a margin of about 3%-4% in clip-1 accuracy at similar computation to the original fixed model, and consistently outperforms CGNet while requiring fewer GFLOPs (by a maximum of 2.8% with 16 frames). These results clearly demonstrate the effectiveness of our dynamic video redundancy framework over network pruning methods.

Table 7: Comparison with network pruning methods. We choose R(2+1)D on the Mini-Kinetics-200 dataset with different numbers of input frames. Numbers in parentheses quantitatively show how much our proposed method is better (green) or worse (blue) than these pruning methods.

Method | Frames | GFLOPs | clip-1
Weight-level | 8 | 19.9 (-0.1) | 54.5 (+3.2)
Weight-level | 16 | 40.3 (-0.1) | 57.7 (+2.9)
Weight-level | 32 | 79.6 (-0.3) | 59.6 (+3.7)
CGNet | 8 | 23.8 (+3.8) | 56.2 (+1.5)
CGNet | 16 | 47.6 (+7.2) | 57.8 (+2.8)
CGNet | 32 | 95.3 (+16.0) | 61.8 (+1.5)

Table 9: Effect of the efficiency loss on Kinetics-400. Eff. denotes the efficiency loss.

Model | Eff. | GFLOPs | clip-1 | video-1
R(2+1)D | No | 49.8 | 57.9 | 66.7
R(2+1)D | Yes | 40.3 | 58.4 | 67.6
I3D | No | 56.0 | 58.0 | 66.5
I3D | Yes | 32.1 | 58.6 | 67.1

Results on Spatio-Temporal Action Localization. We further extend our method to the spatio-temporal action localization task to demonstrate its generalization ability to a different task. We evaluate our method on J-HMDB-21 with two different 3D backbone networks: I3D-InceptionV2 and X3D-M. We report frame-mAP at IOU threshold 0.5, recall value at IOU threshold 0.5, and classification accuracy of correctly localized detections to measure the performance of the detector. Table 8 shows that our dynamic approach outperforms the baselines on all three metrics while offering significant savings in FLOPs (e.g., more than 50% savings on I3D). In summary, VA-RED2 is clearly better than the baseline architectures in terms of both accuracy and computation cost on both recognition and localization tasks, making it suitable for efficient video understanding.

Effect of Efficiency Loss. We conduct an experiment comparing the model performance before and after finetuning with our proposed efficiency loss. Table 9 shows that finetuning our dynamic model with the efficiency loss significantly reduces the computation without any accuracy loss.

Ablation Experiments on Dynamic Modeling. We test the performance of our approach by turning off dynamic modeling along the temporal and channel dimensions on Mini-Kinetics-200. Table 10 shows that dynamic modeling along both dimensions obtains the best performance while requiring the least computation. This shows the importance of an input-dependent policy for deciding how many features need to be computed for both the temporal and channel dimensions.

Visualization and Analysis. To better understand the policy decision process, we dissect the network layers and count the ratio of feature maps that are computed in each convolution layer for each category. From Figure 3, we observe that in X3D, the point-wise convolutions that come right after the depth-wise convolutions have more variation among classes, and the network tends to consume more temporal-wise features at the early stage and compute more channel-wise features at the late stage of the architecture. The channel-wise policy also has more variation than the temporal-wise policy among different categories. Furthermore, we show a few contrasting examples which are in the same category while requiring very different computation in Figure 4. Video clips which have a more complicated scene configuration (e.g. cooking eggs and playing volleyball) and more violent camera motion (e.g. 
flipping pancake) tend to need more feature maps to do the correct predictions. More qualitative results can be found in Section E, Section F and Section G of the Appendix.\nVA-RED2 on Dense Visual Tasks. Our VA-RED2 framework is also applicable for some dense visual tasks, like semantic segmentation, which requires the pixel-level prediction for the input content. To prove this, we apply our method to a semantic segmentation model on the ADE-20K dataset (Zhou et al., 2017; 2018b). We report computational cost of model encoder and the mean IoU (Intersection-Over-Union) in Table 11. As can be seen from Table 11, our proposed VA-RED2 has the absolute advantage in terms of efficiency while maintaining the precision of segmentation. This experiment clearly shows that our method is not only effective on the recognition and detection tasks, but also applicable to the dense visual tasks like semantic segmentation." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose an input-dependent adaptive framework called VA-RED2 for efficient inference which can be easily plugged into most of existing video understanding models to significantly reduce the model computation while maintaining the accuracy. Extensive experimental results on video action recognition, spatio-temporal localization, and semantic segmentation validate the effectiveness of our framework in multiple standard benchmark datasets.\nAcknowledgements. This work is supported by IARPA via DOI/IBC contract number D17PC00341. This work is also supported by the MIT-IBM Watson AI Lab.\nDisclaimer. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government." }, { "heading": "A DATASET DETAILS", "text": "We evaluate the performance of our approach using three video action recognition datasets, namely Mini-Kinetics-200 (Meng et al., 2020), Kinetics-400 (Carreira & Zisserman, 2017), and MomentsIn-Time (Monfort et al., 2019) and one spatio-temporal action localization task namely J-HMDB21 (Jhuang et al., 2013). Kinetics-400 is a large dataset containing 400 action classes and 240K training videos that are collected from YouTube. The Mini-Kinetics dataset contains 121K videos for training and 10K videos for testing, with each video lasting 6-10 seconds. The original Kinetics dataset is publicly available to download at https://deepmind.com/research/open-source/ kinetics. We use the official training/validation/testing splits of Kinetics-400 and the splits released by authors in (Meng et al., 2020) for Mini-Kinetics-200 in our experiments.\nMoments-in-time (Monfort et al., 2019) is a recent collection of one million labeled videos, involving actions from people, animals, objects or natural phenomena. It has 339 classes and each video clip is trimmed to 3 seconds long. This dataset is designed to have a very large set of both inter-class and intra-class variation that captures a dynamic event at different levels of abstraction (i.e. ”opening” doors, curtains, mouths, even a flower opening its petals). We use the official splits in our experiments. 
The dataset is publicly available to download at http://moments.csail.mit.edu/.
Joints for the HMDB dataset (J-HMDB-21 (Jhuang et al., 2013)) is based on 928 clips from HMDB51 comprising 21 action categories. Each frame has a 2D pose annotation based on a 2D articulated human puppet model that provides scale, pose, segmentation, coarse viewpoint, and dense optical flow for the humans in action. The 21 categories are brush hair, catch, clap, climb stairs, golf, jump, kick ball, pick, pour, pull-up, push, run, shoot ball, shoot bow, shoot gun, sit, stand, swing baseball, throw, walk, and wave. The dataset is available to download at http://jhmdb.is.tue.mpg.de/.

B IMPLEMENTATION DETAILS

Details of Shared-weight Training and Inference. In this section, we provide more details of the shared-weight mechanism presented in Section 3 of the main paper. We first compute all the possibly necessary features using a big kernel, and then for each dynamic convolution with a different scaling factor, we sample its corresponding ratio of necessary features and reconstruct the remaining features with cheap operations to get the final output. For example, the original channel-wise dynamic convolution at ratio $r = (\frac{1}{2})^{(i-1)}$ can be analogized to

$$\left[ f^c_l(X_l, r = (\tfrac{1}{2})^{i^c_s - 1})[0 : (\tfrac{1}{2})^{(i-1)} C_{out}], \; \Phi^c\big(f^c_l(X_l, r = (\tfrac{1}{2})^{i^c_s - 1})[0 : (\tfrac{1}{2})^{(i-1)} \cdot C_{out}]\big) \right], \qquad (7)$$

where $[\cdot : \cdot]$ is the index operation along the channel dimension and $i^c_s$ is the index of the largest channel-wise filter; during the training phase we have $i^c_s = 1$, while during the inference phase $i^c_s$ is the smallest index for $V^l_c$ s.t. $V^l_c[i^c_s] \neq 0$. By utilizing such a shared-weight mechanism, the computation of the total channel-wise dynamic convolution is reduced to $(\frac{1}{2})^{i^c_s - 1} \cdot C(f_l)$. Further, we have the total computational cost of the adjunct process as

$$C(f^{t,c}_l) = (\tfrac{1}{2})^{i^c_s + i^t_s - 2} \cdot C(f_l), \qquad (8)$$

where $i^t_s$ is the index of the largest temporal-wise filter.

Training and Inference. We apply our method mainly to the 2D convolutions in R(2+1)D since the 2D convolutions account for most of the computational cost compared with the 1D convolutions. We train most of our models on 96 NVIDIA Tesla V100-32GB GPUs and perform synchronized BN (Ioffe & Szegedy, 2015) across all the GPUs. For R(2+1)D (Tran et al., 2018), the learning rate is initialized as 0.18 and the weight decay is set to 5 × 10−4. For I3D (Carreira & Zisserman, 2017; Xie et al., 2018) and X3D (Feichtenhofer, 2020), the learning rates both start from 1.8 and the weight decay factors are 1 × 10−4 and 5 × 10−5 respectively. A cosine learning rate decay strategy is applied to decrease the total learning rate. All of the models are trained from scratch and warmed up for 15 epochs on Mini-Kinetics/Kinetics and 8 epochs on Moments-In-Time. We adopt the Nesterov momentum optimizer with an initial weight of 0.01 and a momentum of 0.9. During training, we follow the data augmentation (location jittering, horizontal flipping, corner cropping, and scale jittering) used in TSN (Wang et al., 2016) to augment the video with different sizes spatially and flip
Note that Kinetics-400 dataset is shrinking in size (∼15% videos removed from original Kinetics) and the original version used in (Carreira & Zisserman, 2017) are no longer available from official site, resulting in some difference of results." }, { "heading": "C REDUNDANCY ANALYSIS", "text": "To motivate our redundancy reduction approach, we measure and visualize the internal redundancy of well known pretrained networks. We analyze the internal feature maps of existing pre-trained I3DInceptionV2 and R(2+1)D networks on Moments in Time and Kinetics. For each model-dataset pair, we extract feature maps for all examples in the validation sets, in both time and channel dimensions, and measure their similarity. In detail, our method consists of the following steps: (1) For a given input, we first extract the output feature maps from all convolutional layers in the network at hand. (2) In each layer, we measure the similarity of each feature map to each other with Person’s correlation coefficient (CC) and root mean squared error (RMSE). We additionally flag feature maps that exhibit high similarity as redundant. (3) After computing this for the validation sets, we average the values over all examples to obtain mean metrics of redundancy per model and per dataset. We additionally compute the ranges of these values to visualize how much redundancy can vary in a model-dataset pair. We present quantitative results in Table 12 and show examples of our findings in Figure 5.\nD VA-RED2 ON LONGER-TRAINING MODEL\nIn our experiments, all of our models are trained under a common evaluation protocol for a fair comparison. To balance the training cost and model performance, we use a smaller epoch size than the original paper to train our models. For example, authors in (Tran et al., 2018) and (Feichtenhofer, 2020), train the R(2+1)D models and X3D models for 188 epochs and 256 epochs respectively to pursue the state-of-the art. However, we only train the models for 120 epochs to largely save the computation resources and training time. However, to rule out the possibility that our base models (i.e., without using Dynamic Convolution) benefit from longer training epochs while our VA-RED2 may not, we conduct an ablation study on the epoch size in Table 13. We can see that our method still shows superiority over the base model in terms of the computational cost and accuracy on the 256-epoch model. Thus we conclude that the effectiveness of our method in achieving higher performance with low computation also holds on the longer-training models." }, { "heading": "E FEATURE MAP VISUALIZATIONS", "text": "To further validate our initial motivation, we visualize the feature maps which are fully computed by the original convlution operation and those which are generated by the cheap operations. We demonstrate those in both temporal dimension (c.f. Figure 6) and channel dimension (c.f. Figure 7). In both cases we can see that the proposed cheap operation generates meaningful feature maps and some of them looks even no difference from the original feature maps." }, { "heading": "F POLICY VISUALIZATIONS", "text": "To compare with the policy on Mini-Kinetics-200 (Figure 3 of the main paper), we also visualize the ratio of features which are consumed in each layer on Kinetics-400 (c.f. Figure 8) and MomentsIn-Time (c.f. Figure 9). We can see from these two figures that the conclusions we draw from Mini-Kientics-200 still hold. 
Specifically, in X3D, the point-wise convolutions that come right after the depth-wise convolutions have more variation among classes, and the network tends to consume more temporal-wise features at the early stage and compute more channel-wise features at the late stage of the architecture. However, R(2+1)D selects fewer features at the early stage under both the temporal-wise and channel-wise policies. Furthermore, we count the FLOPs of each instance on Mini-Kinetics-200, Kinetics-400, and Moments-In-Time and plot pie charts to visualize the distribution of this instance-level computational cost. We analyze this distribution with two models: R(2+1)D-18 and X3D-M. All of the results are shown in Figure 10." }, { "heading": "G QUALITATIVE RESULTS", "text": "We show additional input examples which consume different levels of computational cost on the Kinetics-400 dataset (c.f. Figure 11) and the Moments-In-Time dataset (c.f. Figure 12). To be consistent, we use the 16-frame dynamic R(2+1)D-18 as our pre-trained model. We can see that the examples consuming less computation tend to have less temporal motion, like the second example in Figure 11, or a relatively simple scene configuration, like the first and second examples in Figure 12." } ]
2021
VA-RED2: VIDEO ADAPTIVE REDUNDANCY REDUCTION
SP:7757f1f1066f31276dcbc93ad684ee84d925206a
[ "On page 2, in the background section: the discounted state distribution, what you wrote is not a distribution (doesn't sum to 1). In order to define this $d^{\\pi_\\theta}$ properly, you can multiply everything by $1-\\gamma$. The interpretation is that you \"reset\" in your initial distribution $\\mu_0$ with probability $1 - \\gamma$ at every step, or continue in the discounted stationary distribution with probability $\\gamma$." ]
Traditional off-policy actor-critic Reinforcement Learning (RL) algorithms learn value functions of a single target policy. However, when value functions are updated to track the learned policy, they forget potentially useful information about old policies. We introduce a class of value functions called Parameter-Based Value Functions (PBVFs) whose inputs include the policy parameters. They can generalize across different policies. PBVFs can evaluate the performance of any policy given a state, a state-action pair, or a distribution over the RL agent's initial states. First we show how PBVFs yield novel off-policy policy gradient theorems. Then we derive off-policy actor-critic algorithms based on PBVFs trained by Monte Carlo or Temporal Difference methods. We show how learned PBVFs can zero-shot learn new policies that outperform any policy seen during training. Finally, our algorithms are evaluated on a selection of discrete and continuous control tasks using shallow policies and deep neural networks. Their performance is comparable to state-of-the-art methods.
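As a rough illustration of the PBVF idea described in this abstract, a state-value PBVF can be parameterized as a network that takes a state together with the flattened parameters of the policy being evaluated. The sketch below is our own minimal rendering under assumed names (ParameterBasedValueFunction, the dimension arguments), not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ParameterBasedValueFunction(nn.Module):
    # V(s, theta): estimates the return of policy pi_theta from state s.
    # Illustrative sketch only; the paper's parametrization may differ.
    def __init__(self, state_dim, policy_param_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, policy_params):
        # policy_params: flattened weight vector of the evaluated policy
        return self.net(torch.cat([state, policy_params], dim=-1))

# Because the critic conditions on theta, it can score policies never seen
# during training; new policies can be obtained zero-shot by gradient ascent
# on theta through the frozen critic, as the abstract describes.
```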
[ { "affiliations": [], "name": "Francesco Faccio" }, { "affiliations": [], "name": "Louis Kirsch" } ]
[ { "authors": [ "Leemon Baird" ], "title": "Residual algorithms: Reinforcement learning with function approximation", "venue": "In Machine Learning Proceedings", "year": 1995 }, { "authors": [ "Andrew J. Booker", "J.E. Dennis", "Paul D. Frank", "David B. Serafini", "Virginia Torczon" ], "title": "Optimization Using Surrogate Objectives on a Helicopter", "venue": "Test Example,", "year": 1998 }, { "authors": [ "Vivek S Borkar" ], "title": "Stochastic approximation: a dynamical systems viewpoint, volume 48", "venue": null, "year": 2009 }, { "authors": [ "G.E.P. Box", "K.B. Wilson" ], "title": "On the experimental attainment of optimum conditions", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1951 }, { "authors": [ "Corinna Cortes", "Yishay Mansour", "Mehryar Mohri" ], "title": "Learning bounds for importance weighting", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S. Sutton" ], "title": "Off-policy actor-critic", "venue": "In Proceedings of the 29th International Coference on International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Jean Harb", "Tom Schaul", "Doina Precup", "Pierre-Luc Bacon" ], "title": "Policy evaluation networks", "venue": "arXiv preprint arXiv:2002.11833,", "year": 2020 }, { "authors": [ "Timothy Classen Hesterberg" ], "title": "Advances in importance sampling", "venue": "PhD thesis, Stanford University,", "year": 1988 }, { "authors": [ "Ehsan Imani", "Eric Graves", "Martha White" ], "title": "An off-policy policy gradient theorem using emphatic weightings", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Wojciech Marian Czarnecki", "Simon Osindero", "Oriol Vinyals", "Alex Graves", "David Silver", "Koray Kavukcuoglu" ], "title": "Decoupled neural interfaces using synthetic gradients", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Vijay Konda", "John Tsitsiklis" ], "title": "Actor-critic algorithms", "venue": "Society for Industrial and Applied Mathematics,", "year": 2001 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Yao Liu", "Adith Swaminathan", "Alekh Agarwal", "Emma Brunskill" ], "title": "Off-policy policy gradient with state distribution correction", "venue": "arXiv preprint arXiv:1904.08473,", "year": 2019 }, { "authors": [ "Hamid R. Maei", "Csaba Szepesvári", "Shalabh Bhatnagar", "Doina Precup", "David Silver", "Richard S. Sutton" ], "title": "Convergent temporal-difference learning with arbitrary smooth function approximation", "venue": "In Proceedings of the 22nd International Conference on Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Hamid Reza Maei" ], "title": "Gradient temporal-difference learning algorithms", "venue": "PhD thesis, University of Alberta,", "year": 2011 }, { "authors": [ "Hamid Reza Maei", "Csaba Szepesvári", "Shalabh Bhatnagar", "Richard S. 
Sutton" ], "title": "Toward off-policy learning control with function approximation", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search of static linear policies is competitive for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alberto Maria Metelli", "Matteo Papini", "Francesco Faccio", "Marcello Restelli" ], "title": "Policy optimization via importance sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "N. Metropolis", "S. Ulam" ], "title": "The monte carlo method", "venue": "J. Am. Stat. Assoc.,", "year": 1949 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Andrew W. Moore", "Jeff G. Schneider" ], "title": "Memory-based stochastic optimization", "venue": "Advances in Neural Information Processing Systems", "year": 1996 }, { "authors": [ "Ofir Nachum", "Bo Dai", "Ilya Kostrikov", "Yinlam Chow", "Lihong Li", "Dale Schuurmans" ], "title": "Algaedice: Policy gradient from arbitrary experience", "venue": null, "year": 1912 }, { "authors": [ "Doina Precup", "Richard S. Sutton", "Sanjoy Dasgupta" ], "title": "Off-policy temporal difference learning with function approximation", "venue": "In ICML,", "year": 2001 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2014 }, { "authors": [ "Reuven Y. Rubinstein", "Dirk P. Kroese" ], "title": "Simulation and the Monte Carlo Method", "venue": "Wiley Publishing,", "year": 2016 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Tom Schaul", "Dan Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Networks adjusting networks", "venue": "In Proceedings of” Distributed Adaptive Neural Information Processing”,", "year": 1990 }, { "authors": [ "Frank Sehnke", "Christian Osendorfer", "Thomas Rückstieß", "Alex Graves", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Policy gradients with parameter-based exploration for control. 
In Véra Kůrková, Roman Neruda, and Jan Koutnı́k (eds.), Artificial Neural Networks", "venue": "ICANN", "year": 2008 }, { "authors": [ "Frank Sehnke", "Christian Osendorfer", "Thomas Rückstieß", "Alex Graves", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Parameter-exploring policy gradients", "venue": "Neural Networks,", "year": 2010 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume", "year": 2014 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jasper Snoek", "Oren Rippel", "Kevin Swersky", "Ryan Kiros", "Nadathur Satish", "Narayanan Sundaram", "Md. Mostofa Ali Patwary", "Prabhat Prabhat", "Ryan P. Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Richard S Sutton" ], "title": "Temporal Credit Assignment in Reinforcement Learning", "venue": "PhD thesis, University of Massachusetts Amherst,", "year": 1984 }, { "authors": [ "Richard S Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Richard S. Sutton", "David McAllester", "Satinder Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Proceedings of the 12th International Conference on Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Richard S Sutton", "Hamid R Maei", "Csaba Szepesvári" ], "title": "A convergent o(n) temporal-difference algorithm for off-policy learning with linear function approximation", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Richard S. Sutton", "Hamid Reza Maei", "Doina Precup", "Shalabh Bhatnagar", "David Silver", "Csaba Szepesvári", "Eric Wiewiora" ], "title": "Fast gradient-descent methods for temporal-difference learning with linear function approximation", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Richard S. Sutton", "Joseph Modayil", "Michael Delp", "Thomas Degris", "Patrick M. Pilarski", "Adam White", "Doina Precup" ], "title": "Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2,", "year": 2011 }, { "authors": [ "Richard S Sutton", "A Rupam Mahmood", "Martha White" ], "title": "An emphatic approach to the problem of off-policy temporal-difference learning", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Commun. 
ACM,", "year": 1995 }, { "authors": [ "Thomas Unterthiner", "Daniel Keysers", "Sylvain Gelly", "Olivier Bousquet", "Ilya Tolstikhin" ], "title": "Predicting neural network accuracy from weights", "venue": "arXiv preprint arXiv:2002.11448,", "year": 2020 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Remi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience", "venue": "replay. arXiv preprint arXiv:1611.01224,", "year": 2016 }, { "authors": [ "Paul J Werbos" ], "title": "Backpropagation through time: what it does and how to do it", "venue": "Proceedings of the IEEE,", "year": 1990 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Tobias Glasmachers", "Yi Sun", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "The Journal of Machine Learning Research,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Value functions are central to Reinforcement Learning (RL). For a given policy, they estimate the value of being in a specific state (or of choosing a particular action in a given state). Many RL breakthroughs were achieved through improved estimates of such values, which can be used to find optimal policies (Tesauro, 1995; Mnih et al., 2015). However, learning value functions of arbitrary policies without observing their behavior in the environment is not trivial. Such off-policy learning requires to correct the mismatch between the distribution of updates induced by the behavioral policy and the one we want to learn. Common techniques include Importance Sampling (IS) (Hesterberg, 1988) and deterministic policy gradient methods (DPG) (Silver et al., 2014), which adopt the actorcritic architecture (Sutton, 1984; Konda & Tsitsiklis, 2001; Peters & Schaal, 2008).\nUnfortunately, these approaches have limitations. IS suffers from large variance (Cortes et al., 2010; Metelli et al., 2018; Wang et al., 2016) while traditional off-policy actor-critic methods introduce off-policy objectives whose gradients are difficult to follow since they involve the gradient of the action-value function with respect to the policy parameters∇θQπθ (s, a) (Degris et al., 2012; Silver et al., 2014). This term is usually ignored, resulting in biased gradients for the off-policy objective. Furthermore, off-policy actor-critic algorithms learn value functions of a single target policy. When value functions are updated to track the learned policy, the information about old policies is lost.\nWe address the problem of generalization across many value functions in the off-policy setting by introducing a class of parameter-based value functions (PBVFs) defined for any policy. PBVFs are value functions whose inputs include the policy parameters, the PSSVF V (θ), PSVF V (s, θ), and PAVF Q(s, a, θ). PBVFs can be learned using Monte Carlo (MC) (Metropolis & Ulam, 1949) or Temporal Difference (TD) (Sutton, 1988) methods. The PAVF Q(s, a, θ) leads to a novel stochastic and deterministic off-policy policy gradient theorem and, unlike previous approaches, can directly compute∇θQπθ (s, a). Based on these results, we develop off-policy actor-critic methods and compare our algorithms to two strong baselines, ARS and DDPG (Mania et al., 2018; Lillicrap et al., 2015), outperforming them in some environments.\nWe make theoretical, algorithmic, and experimental contributions: Section 2 introduces the standard MDP setting; Section 3 formally presents PBVFs and derive algorithms for V (θ), V (s, θ) and Q(s, a, θ); Section 4 describes the experimental evaluation using shallow and deep policies; Sections 5 and 6 discuss related and future work. Proofs and derivations can be found in Appendix A.2." }, { "heading": "2 BACKGROUND", "text": "We consider a Markov Decision Process (MDP) (Stratonovich, 1960; Puterman, 2014) M = (S,A, P,R, γ, µ0) where at each step an agent observes a state s ∈ S , chooses action a ∈ A, transitions into state s′ with probability P (s′|s, a) and receives a reward R(s, a). The agent starts from an initial state, chosen with probability µ0(s). It is represented by a parametrized stochastic policy πθ : S → ∆(A), which provides the probability of performing action a in state s. Θ is the space of policy parameters. The policy is deterministic if for each state s there exists an action a such that πθ(a|s) = 1. 
The return $R_t$ is defined as the cumulative discounted reward from time step $t$: $R_t = \sum_{k=0}^{T-t-1} \gamma^k R(s_{t+k+1}, a_{t+k+1})$, where $T$ denotes the time horizon and $\gamma$ a real-valued discount factor. The performance of the agent is measured by the cumulative discounted expected reward (expected return), defined as $J(\pi_\theta) = \mathbb{E}_{\pi_\theta}[R_0]$. Given a policy $\pi_\theta$, the state-value function $V^{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}[R_t \mid s_t = s]$ is defined as the expected return for being in a state $s$ and following policy $\pi_\theta$. By integrating over the state space $S$, we can express the maximization of the expected cumulative reward in terms of the state-value function: $J(\pi_\theta) = \int_S \mu_0(s) V^{\pi_\theta}(s)\, ds$. The action-value function $Q^{\pi_\theta}(s,a)$, defined as the expected return for performing action $a$ in state $s$ and then following policy $\pi_\theta$, is $Q^{\pi_\theta}(s,a) = \mathbb{E}_{\pi_\theta}[R_t \mid s_t = s, a_t = a]$, and it is related to the state-value function by $V^{\pi_\theta}(s) = \int_A \pi_\theta(a|s) Q^{\pi_\theta}(s,a)\, da$. We define $d^{\pi_\theta}(s')$ as the discounted weighting of states encountered starting at $s_0 \sim \mu_0(s)$ and following the policy $\pi_\theta$: $d^{\pi_\theta}(s') = \int_S \sum_{t=1}^{\infty} \gamma^{t-1} \mu_0(s) P(s \to s', t, \pi_\theta)\, ds$, where $P(s \to s', t, \pi_\theta)$ is the probability of transitioning to $s'$ after $t$ time steps, starting from $s$ and following policy $\pi_\theta$. Sutton et al. (1999) showed that, for stochastic policies, the gradient of $J(\pi_\theta)$ does not involve the derivative of $d^{\pi_\theta}(s)$ and can be expressed in a simple form:

$$\nabla_\theta J(\pi_\theta) = \int_S d^{\pi_\theta}(s) \int_A \nabla_\theta \pi_\theta(a|s)\, Q^{\pi_\theta}(s,a)\, da\, ds. \tag{1}$$

Similarly, for deterministic policies, Silver et al. (2014) obtained the following:

$$\nabla_\theta J(\pi_\theta) = \int_S d^{\pi_\theta}(s)\, \nabla_\theta \pi_\theta(s)\, \nabla_a Q^{\pi_\theta}(s,a)\big|_{a=\pi_\theta(s)}\, ds. \tag{2}$$

Off-policy RL. In off-policy policy optimization, we seek the parameters of the policy maximizing a performance index $J_b(\pi_\theta)$ using data collected from a behavioral policy $\pi_b$. Here the objective function $J_b(\pi_\theta)$ is typically modified to be the value function of the target policy, integrated over $d_\infty^{\pi_b}(s) = \lim_{t\to\infty} P(s_t = s \mid s_0, \pi_b)$, the limiting distribution of states under $\pi_b$ (assuming it exists) (Degris et al., 2012; Imani et al., 2018; Wang et al., 2016). Throughout the paper we assume that the support of $d_\infty^{\pi_b}$ includes the support of $\mu_0$, so that the optimal solution for $J_b$ is also optimal for $J$. Formally, we want to find:

$$J_b(\pi_{\theta^*}) = \max_\theta \int_S d_\infty^{\pi_b}(s)\, V^{\pi_\theta}(s)\, ds = \max_\theta \int_S d_\infty^{\pi_b}(s) \int_A \pi_\theta(a|s)\, Q^{\pi_\theta}(s,a)\, da\, ds. \tag{3}$$

Unfortunately, in the off-policy setting, the states are obtained from $d_\infty^{\pi_b}$ and not from $d_\infty^{\pi_\theta}$, hence the gradients suffer from a distribution shift (Liu et al., 2019; Nachum et al., 2019). Moreover, since we have no access to $d_\infty^{\pi_\theta}$, a term in the policy gradient theorem corresponding to the gradient of the action-value function with respect to the policy parameters needs to be estimated. This term is usually ignored in traditional off-policy policy gradient theorems¹. In particular, when the policy is stochastic, Degris et al. (2012) showed that:

$$\nabla_\theta J_b(\pi_\theta) = \int_S d_\infty^{\pi_b}(s) \int_A \pi_b(a|s)\, \frac{\pi_\theta(a|s)}{\pi_b(a|s)} \left( Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a|s) + \nabla_\theta Q^{\pi_\theta}(s,a) \right) da\, ds \tag{4}$$
$$\approx \int_S d_\infty^{\pi_b}(s) \int_A \pi_b(a|s)\, \frac{\pi_\theta(a|s)}{\pi_b(a|s)}\, Q^{\pi_\theta}(s,a)\, \nabla_\theta \log \pi_\theta(a|s)\, da\, ds. \tag{5}$$

Analogously, Silver et al. (2014) provided the following approximation for deterministic policies²:

$$\nabla_\theta J_b(\pi_\theta) = \int_S d_\infty^{\pi_b}(s) \left( \nabla_\theta \pi_\theta(s)\, \nabla_a Q^{\pi_\theta}(s,a)\big|_{a=\pi_\theta(s)} + \nabla_\theta Q^{\pi_\theta}(s,a)\big|_{a=\pi_\theta(s)} \right) ds \tag{6}$$
$$\approx \int_S d_\infty^{\pi_b}(s)\, \nabla_\theta \pi_\theta(s)\, \nabla_a Q^{\pi_\theta}(s,a)\big|_{a=\pi_\theta(s)}\, ds. \tag{7}$$

¹With tabular policies, dropping this term still results in a convergent algorithm (Degris et al., 2012). ²In the original formulation of Silver et al. (2014), $d_\infty^{\pi_b}(s)$ is replaced by $d^{\pi_b}(s)$.
Although the term $\nabla_\theta Q^{\pi_\theta}(s,a)$ is dropped, there might be advantages in using the approximate gradient of $J_b$ in order to find the maximum of the original RL objective $J$. Indeed, if we were on-policy, the approximated off-policy policy gradients of Degris et al. (2012) and Silver et al. (2014) would revert to the on-policy policy gradients, while an exact gradient for $J_b$ would necessarily introduce a bias. However, when we are off-policy, it is not clear whether this would be better than using the exact gradient of $J_b$ in order to maximize $J$. In this work, we assume that $J_b$ can be considered a good objective for off-policy RL and we derive an exact gradient for it." }, { "heading": "3 PARAMETER-BASED VALUE FUNCTIONS", "text": "In this section, we introduce our parameter-based value functions, the PSSVF $V(\theta)$, the PSVF $V(s,\theta)$, and the PAVF $Q(s,a,\theta)$, together with their corresponding learning algorithms. First, we augment the state- and action-value functions, allowing them to also receive as input the weights of a parametric policy. The parameter-based state-value function (PSVF) $V(s,\theta) = \mathbb{E}[R_t \mid s_t = s, \theta]$ is defined as the expected return for being in state $s$ and following the policy parameterized by $\theta$. Similarly, the parameter-based action-value function (PAVF) $Q(s,a,\theta) = \mathbb{E}[R_t \mid s_t = s, a_t = a, \theta]$ is defined as the expected return for being in state $s$, taking action $a$, and thereafter following the policy parameterized by $\theta$. Using PBVFs, the RL objective becomes $J(\pi_\theta) = \int_S \mu_0(s)\, V(s,\theta)\, ds$. Maximizing this objective leads to on-policy policy gradient theorems analogous to the traditional ones (Sutton et al., 1999; Silver et al., 2014):

Theorem 3.1. Let $\pi_\theta$ be stochastic. For any Markov Decision Process, the following holds:
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}(s),\, a \sim \pi_\theta(\cdot|s)} \left[ Q(s,a,\theta)\, \nabla_\theta \log \pi_\theta(a|s) \right]. \tag{8}$$

Theorem 3.2. Let $\pi_\theta$ be deterministic. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}(s)} \left[ \nabla_a Q(s,a,\theta)\big|_{a=\pi_\theta(s)}\, \nabla_\theta \pi_\theta(s) \right]. \tag{9}$$

Parameter-based value functions also allow us to learn a function of the policy parameters that directly approximates $J(\pi_\theta)$. In particular, the parameter-based start-state-value function (PSSVF) is defined as:
$$V(\theta) := \mathbb{E}_{s \sim \mu_0(s)}[V(s,\theta)] = \int_S \mu_0(s)\, V(s,\theta)\, ds = J(\pi_\theta). \tag{10}$$

Off-policy RL. In the off-policy setting, the objective to be maximized becomes:
$$J_b(\pi_{\theta^*}) = \max_\theta \int_S d_\infty^{\pi_b}(s)\, V(s,\theta)\, ds = \max_\theta \int_S \int_A d_\infty^{\pi_b}(s)\, \pi_\theta(a|s)\, Q(s,a,\theta)\, da\, ds. \tag{11}$$

By taking the gradient of the performance $J_b$ with respect to the policy parameters $\theta$, we obtain novel policy gradient theorems. Since $\theta$ is continuous, we need to use function approximators $V_w(\theta) \approx V(\theta)$, $V_w(s,\theta) \approx V(s,\theta)$ and $Q_w(s,a,\theta) \approx Q(s,a,\theta)$. Compatible function approximations could be derived to ensure that the approximated value function follows the true gradient; as in previous approaches, this would result in linearity conditions. Here, however, we consider nonlinear function approximation and leave the convergence analysis of linear PBVFs as future work. In episodic settings, we do not have access to $d_\infty^{\pi_b}$, so in the algorithm derivations and in the experiments we approximate it by sampling trajectories generated by the behavioral policy. In all cases, the policy improvement step can be very expensive, due to the computation of the arg max over the continuous space $\Theta$.
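Concretely, the three approximators are ordinary networks that simply treat the flat vector $\theta$ as one more input. Below is a minimal sketch of such critics; the (512, 512) ReLU architecture follows the implementation details in Appendix A.3, while the class names and interfaces are our own illustrative assumptions rather than the authors' exact code.

```python
# A minimal sketch of the three PBVF critics as plain MLPs over concatenated
# inputs. Only the (512, 512) ReLU shape is taken from the paper's Appendix A.3;
# everything else is illustrative.
import torch
import torch.nn as nn

def mlp(in_dim: int, hidden: int = 512) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 1),
    )

class PSSVF(nn.Module):
    """V(theta): expected (undiscounted) return of the policy with parameters theta."""
    def __init__(self, theta_dim: int):
        super().__init__()
        self.v = mlp(theta_dim)

    def forward(self, theta: torch.Tensor) -> torch.Tensor:
        return self.v(theta).squeeze(-1)

class PSVF(nn.Module):
    """V(s, theta): expected return from state s under the policy theta."""
    def __init__(self, obs_dim: int, theta_dim: int):
        super().__init__()
        self.v = mlp(obs_dim + theta_dim)

    def forward(self, s: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        return self.v(torch.cat([s, theta], dim=-1)).squeeze(-1)

class PAVF(nn.Module):
    """Q(s, a, theta): expected return from (s, a) under the policy theta."""
    def __init__(self, obs_dim: int, act_dim: int, theta_dim: int):
        super().__init__()
        self.q = mlp(obs_dim + act_dim + theta_dim)

    def forward(self, s: torch.Tensor, a: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        return self.q(torch.cat([s, a, theta], dim=-1)).squeeze(-1)
```

Because $\theta$ enters the critic as an input, gradients of the critic's output with respect to $\theta$ are available by ordinary backpropagation, which is exactly what the policy improvement step needs.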
Actor-critic methods can be derived to solve this optimization problem, where the critic (a PBVF) is learned using TD or MC methods, while the actor is updated by following the gradient of the critic. Although our algorithms for the PSSVF and PSVF can be used with both stochastic and deterministic policies, removing the stochasticity of the action-selection process might facilitate learning the value function. All our algorithms make use of a replay buffer.

3.1 PARAMETER-BASED START-STATE-VALUE FUNCTION $V(\theta)$

We first derive the PSSVF $V(\theta)$. Given the original performance index $J$ and taking the gradient with respect to $\theta$, we obtain:
$$\nabla_\theta J(\pi_\theta) = \int_S \mu_0(s)\, \nabla_\theta V(s,\theta)\, ds = \mathbb{E}_{s \sim \mu_0(s)}[\nabla_\theta V(s,\theta)] = \nabla_\theta V(\theta). \tag{12}$$

In Algorithm 1, the critic $V_w(\theta)$ is learned using MC to estimate the value of any policy $\theta$. The actor is then updated following the direction of improvement suggested by the critic. Since the main application of the PSSVF is in episodic tasks³, we optimize for the undiscounted objective.

Algorithm 1: Actor-critic with Monte Carlo prediction for $V(\theta)$
  Input: differentiable critic $V_w : \Theta \to \mathbb{R}$ with parameters $w$; deterministic or stochastic actor $\pi_\theta$ with parameters $\theta$; empty replay buffer $D$
  Output: learned $V_w \approx V(\theta)\ \forall \theta$, learned $\pi_\theta \approx \pi_{\theta^*}$
  Initialize critic and actor weights $w$, $\theta$
  repeat:
    Generate an episode $s_0, a_0, r_1, s_1, a_1, r_2, \dots, s_{T-1}, a_{T-1}, r_T$ with policy $\pi_\theta$
    Compute the return $r = \sum_{k=1}^{T} r_k$
    Store $(\theta, r)$ in the replay buffer $D$
    for many steps do:
      Sample a batch $B = \{(r, \theta)\}$ from $D$
      Update the critic by stochastic gradient descent: $\nabla_w\, \mathbb{E}_{(r,\theta) \in B}\, [r - V_w(\theta)]^2$
    end for
    for many steps do:
      Update the actor by gradient ascent: $\nabla_\theta V_w(\theta)$
    end for
  until convergence

3.2 PARAMETER-BASED STATE-VALUE FUNCTION $V(s, \theta)$

Learning the value function using MC approaches can be difficult due to the high variance of the estimate. Furthermore, episode-based algorithms like Algorithm 1 are unable to credit good actions in bad episodes. Gradient methods based on TD updates provide a biased estimate of $V(s,\theta)$ with much lower variance and can credit actions at each time step. Taking the gradient of $J_b(\pi_\theta)$ in the PSVF formulation⁴, we obtain:
$$\nabla_\theta J_b(\pi_\theta) = \int_S d_\infty^{\pi_b}(s)\, \nabla_\theta V(s,\theta)\, ds = \mathbb{E}_{s \sim d_\infty^{\pi_b}(s)}[\nabla_\theta V(s,\theta)]. \tag{13}$$

Algorithm 2 (Appendix) uses the actor-critic architecture, where the critic is learned via TD⁵.

3.3 PARAMETER-BASED ACTION-VALUE FUNCTION $Q(s, a, \theta)$

The introduction of the PAVF $Q(s,a,\theta)$ allows us to derive new policy gradient theorems for stochastic and deterministic policies.

Stochastic policy gradients. We want to use data collected from some stochastic behavioral policy $\pi_b$ in order to learn the action-value of a target policy $\pi_\theta$. Traditional off-policy actor-critic algorithms only approximate the gradient of $J_b$, since they do not estimate the gradient of the action-value function with respect to the policy parameters, $\nabla_\theta Q^{\pi_\theta}(s,a)$ (Degris et al., 2012; Silver et al., 2014). With PBVFs, we can directly compute this contribution to the gradient. This yields an exact policy gradient theorem for $J_b$:

Theorem 3.3. For any Markov Decision Process, the following holds:
$$\nabla_\theta J_b(\pi_\theta) = \mathbb{E}_{s \sim d_\infty^{\pi_b}(s),\, a \sim \pi_b(\cdot|s)} \left[ \frac{\pi_\theta(a|s)}{\pi_b(a|s)} \left( Q(s,a,\theta)\, \nabla_\theta \log \pi_\theta(a|s) + \nabla_\theta Q(s,a,\theta) \right) \right]. \tag{14}$$

Algorithm 3 (Appendix) uses an actor-critic architecture and can be seen as an extension of Off-PAC (Degris et al., 2012) to the PAVF.

³Alternatives include regenerative methods for MC estimation (Rubinstein & Kroese, 2016). ⁴Compared to standard methods based on the state-value function, we can directly optimize the policy following the performance gradient of the PSVF, obtaining a policy improvement step in a model-free way. ⁵Note that the differentiability of the policy $\pi_\theta$ is never required for the PSSVF and PSVF.
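Before turning to deterministic policies, here is a minimal sketch of the PSVF critic's TD regression from Section 3.2 (Algorithm 2 in the Appendix), assuming the PSVF class sketched earlier. The replay-buffer batch format and the terminal mask (1 - done) are our own additions, standard in deep-RL code but not spelled out in the pseudocode.

```python
# A minimal sketch of the PSVF TD update: bootstrap V(s', theta_tilde)
# to form the target for V(s, theta_tilde). Batch format is illustrative.
import torch

def psvf_td_loss(critic, batch, gamma: float = 0.99) -> torch.Tensor:
    # batch: tensors (s, theta_tilde, r, s_next, done), stacked along dim 0
    s, theta, r, s_next, done = batch
    with torch.no_grad():
        # Terminal mask is a standard addition, not explicit in Algorithm 2.
        target = r + gamma * (1.0 - done) * critic(s_next, theta)
    return ((critic(s, theta) - target) ** 2).mean()
```

The actor step then simply ascends $\nabla_\theta V_w(s, \theta)$ averaged over sampled states, with the critic's weights $w$ held fixed.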
Deterministic policy gradients. Estimating $Q(s,a,\theta)$ is in general a difficult problem, due to the stochasticity of the policy. Deterministic policies of the form $\pi : S \to A$ can improve the efficiency of learning value functions, since the expectation over the action space is no longer required. Using PBVFs, we can write the performance of a policy $\pi_\theta$ as:
$$J_b(\pi_\theta) = \int_S d_\infty^{\pi_b}(s)\, V(s,\theta)\, ds = \int_S d_\infty^{\pi_b}(s)\, Q(s, \pi_\theta(s), \theta)\, ds. \tag{15}$$

Taking the gradient with respect to $\theta$, we obtain a deterministic policy gradient theorem:

Theorem 3.4. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:
$$\nabla_\theta J_b(\pi_\theta) = \mathbb{E}_{s \sim d_\infty^{\pi_b}(s)} \left[ \nabla_a Q(s,a,\theta)\big|_{a=\pi_\theta(s)}\, \nabla_\theta \pi_\theta(s) + \nabla_\theta Q(s,a,\theta)\big|_{a=\pi_\theta(s)} \right]. \tag{16}$$

Algorithm 4 (Appendix) uses an actor-critic architecture and can be seen as an extension of DPG (Silver et al., 2014) to the PAVF. Despite the novel formulation of Algorithm 3, we decided to avoid the stochasticity of the policy and to implement and analyze only the deterministic PAVF." }, { "heading": "4 EXPERIMENTS⁶", "text": "Applying Algorithms 1, 2 and 4 directly can lead to convergence to local optima, due to the lack of exploration. In practice, as in standard deterministic actor-critic algorithms, we act in the environment with a noisy version of the current learned policy in order to collect data and encourage exploration. More precisely, at each episode we use $\pi_{\tilde\theta}$ with $\tilde\theta = \theta + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, instead of $\pi_\theta$, and then store $\tilde\theta$ in the replay buffer. In our experiments, we report, both for our methods and for the baselines, the performance of the policy without parameter noise." }, { "heading": "4.1 VISUALIZING PBVFS USING LQRS", "text": "We start with an illustrative example that allows us to visualize how PBVFs learn to estimate the expected return over the parameter space. For this purpose, we use an instance of the 1D Linear Quadratic Regulator (LQR) problem and a linear deterministic policy with bias. In figure 1, we plot the episodic $J(\theta)$, the cumulative return that an agent would obtain by acting in the environment using policy $\pi_\theta$ for a single episode, and the cumulative return predicted by the PSSVF $V(\theta)$, at two different times during learning. At the beginning of the learning process, the PSSVF provides only a local estimate of the performance of the agent, since few data have been observed. However, after 1000 episodes, it provides a more accurate global estimate over the parameter space. Appendix A.4.1 contains a similar visualization for the PSVF and PAVF, environment details, and the hyperparameters used." }, { "heading": "4.2 MAIN RESULTS", "text": "Given the similarities between our PAVF and DPG, Deep Deterministic Policy Gradient (DDPG) is a natural choice of baseline. Additionally, the PSSVF $V(\theta)$ resembles evolutionary methods, as its critic can be interpreted as a global fitness function. Therefore, we also include Augmented Random Search (ARS), which is known for its state-of-the-art performance using only linear policies in continuous control tasks. For the policy, we use a 2-layer MLP (64, 64) with tanh activations and a linear policy followed by a tanh nonlinearity.
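Putting the pieces together, the following is a condensed sketch of the PSSVF training loop (Algorithm 1) with the parameter-noise exploration just described. It reuses the DeterministicPolicy and PSSVF classes from the earlier sketches; the environment name, hyperparameters, and the classic Gym step/reset API are illustrative assumptions, not the authors' released configuration.

```python
# A condensed sketch of Algorithm 1: act with theta_tilde = theta + eps,
# store (theta_tilde, return), then alternate MC critic regression and
# gradient ascent through the critic. Assumes classic Gym API.
import random
import gym
import torch

env = gym.make("Swimmer-v3")  # illustrative task
policy = DeterministicPolicy(env.observation_space.shape[0], env.action_space.shape[0])
theta = policy.get_flat_params().clone().requires_grad_(True)
critic = PSSVF(theta.numel())
opt_v = torch.optim.Adam(critic.parameters(), lr=1e-3)
opt_pi = torch.optim.Adam([theta], lr=1e-3)
buffer, sigma = [], 0.1

for episode in range(1000):
    # Explore with a perturbed copy of the current parameters.
    theta_tilde = theta.detach() + sigma * torch.randn_like(theta)
    policy.set_flat_params(theta_tilde)
    s, done, ep_ret = env.reset(), False, 0.0
    while not done:
        with torch.no_grad():
            a = policy(torch.as_tensor(s, dtype=torch.float32)).numpy()
        s, r, done, _ = env.step(a)
        ep_ret += r
    buffer.append((theta_tilde, ep_ret))

    # Critic: Monte Carlo regression of stored returns onto V_w(theta_tilde).
    for _ in range(10):
        batch = random.sample(buffer, min(16, len(buffer)))
        thetas = torch.stack([t for t, _ in batch])
        rets = torch.tensor([r for _, r in batch], dtype=torch.float32)
        v_loss = ((critic(thetas) - rets) ** 2).mean()
        opt_v.zero_grad(); v_loss.backward(); opt_v.step()

    # Actor: gradient ascent on the critic's estimate of V(theta).
    for _ in range(10):
        pi_loss = -critic(theta.unsqueeze(0)).squeeze()
        opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
```

Note that the actor step never touches the environment: the same frozen critic can therefore also train freshly initialized policies offline, which is exactly the zero-shot experiment of Section 4.3.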
Figure 2 shows results for deterministic policies with both architectures. In all tasks, the PSSVF achieves at least the same performance as ARS, often outperforming it. In the InvertedPendulum environment, the PSVF and PAVF with deep policies are very slow to converge, but they excel in Swimmer and MountainCarContinuous. In Reacher, all PBVFs fail to learn the task, while DDPG converges quickly to the optimal policy. We conjecture that for this task it is difficult to perform a search in parameter space. On the other hand, in MountainCarContinuous, the reward is more sparse and DDPG only rarely observes positive reward when exploring in action space. In Appendix A.4 we include additional results and hyperparameters for the PSSVF and PSVF with stochastic policies. We analyze the sensitivity of the algorithms to the choice of hyperparameters in Appendix A.4.4.

⁶Code is available at: https://github.com/FF93/Parameter-based-Value-Functions" }, { "heading": "4.3 ZERO-SHOT LEARNING", "text": "In order to test whether PBVFs generalize across the policy space, we perform the following experiment with shallow deterministic policies: while learning with Algorithm 1, we stop training and randomly initialize 5 policies. Then, without interacting with the environment, we train these policies offline, in a zero-shot manner, following only the direction of improvement suggested by $\nabla_\theta V_w(\theta)$, whose weights $w$ remain frozen. We observe that shallow policies can be effectively trained from scratch this way. Results for the PSSVF in Swimmer-v3 are displayed in figure 3. In particular, we compare the performance of the learned policy, the best perturbed policy seen during exploration, and five policies learned from scratch at three different stages of training. We note that after the PSSVF has been trained for 100,000 time steps of interaction with the environment (first snapshot), these policies are already able to outperform both the current policy and any policy seen while training the PSSVF. They achieve an average return of 297, while the best observed return was 225. We include additional results for the PSVF and PAVF in different environments, using shallow and deep policies, in Appendix A.4.2. When using deep policies, we obtain similar results only for the simplest environments. For this task, we use the same hyperparameters as in figure 2." }, { "heading": "4.4 OFFLINE LEARNING WITH FRAGMENTED BEHAVIORS", "text": "In our last experiment, we investigate how PSVFs learn in a completely offline setting. The goal is to learn a good policy in Swimmer-v3 given a fixed dataset containing 100,000 transitions, without additional environment interactions. Furthermore, the policy generating the data is perturbed every 200 time steps, for a total of 5 policies per episode. Observing only incomplete trajectories for each policy parameter makes TD bootstrapping harder: in order to learn, the PSVF needs to generalize across both the state and the parameter space. Given the fixed dataset, we first train the PSVF, minimizing the TD error. Then, at different stages during learning, we train 5 new shallow deterministic policies. Figure 4 describes this process. We note that at the beginning of training, when the PSVF $V(s,\theta)$ has a larger TD error, these policies perform poorly. However, after 7000 gradient updates, they are able to achieve a return of 237, before eventually degrading to 167.
They outperform the best policy in the dataset used to train the PSVF, whose return is only of 58." }, { "heading": "5 RELATED WORK", "text": "There are two main classes of similar algorithms performing search in policy parameter space. Evolutionary algorithms (Wierstra et al., 2014; Salimans et al., 2017; Mania et al., 2018) iteratively estimate a fitness function evaluating the performance of a population of policies and then perform gradient ascent in parameter space, often estimating the gradient using finite difference approximation. By replacing the performance of a population through a likelihood estimation, evolutionary algorithms become a form of Parameter Exploring Policy Gradients (Sehnke et al., 2008; 2010). Our methods are similar to evolution since our value function can be seen as a fitness. Unlike evolution, however, our approach allows for obtaining the fitness gradient directly and is more suitable for reusing past data. While direct V (θ) optimization is strongly related to evolution, our more informed algorithms optimize V (s, θ) and Q(s, a, θ). That is, ours both perform a search in policy parameter space AND train the value function and the policy online, without having to wait for the ends of trials or episodes.\nThe second related class of methods involves surrogate functions (Box & Wilson, 1951; Booker et al., 1998; Moore & Schneider, 1996). They often use local optimizers for generalizing across fitness functions. In particular, Bayesian Optimization (BO) (Snoek et al., 2012; 2015) uses a surrogate function to evaluate the performance of a model over a set of hyperparameters and follows the uncertainty on the surrogate to query the new data to sample. Unlike BO, we do not build a probabilistic model and we use the gradient of the value function instead of a sample from the posterior to decide which policy parameters to use next in the policy improvement step.\nThe possibility of augmenting the value functions with auxiliary parameters was already\nconsidered in work on General Value Functions (Sutton et al., 2011), where the return is defined with respect to an arbitrary reward function. Universal Value Function Approximators (Schaul et al., 2015) extended this approach to learn a single value function V πθ (s, g), representing the value, given possible agent goals g. In particular, they learn different embeddings for states and goals, exploiting their common structure, and they show generalization to new unseen goals. Similarly, our PSVF V (s, θ) is able to generalize to unseen policies, observing data for only a few (s, θ) pairs. General and Universal Value Functions have not been applied to learn a single value function for every possible policy.\nPolicy Evaluation Networks (PENs) (Harb et al., 2020) are closely related to our work and share the same motivation. PENs focus on the simplest PSSVF V (θ) trained without an actor-critic architecture. Like in some of our experiments, the authors show how following the direction of improvement suggested by V (θ) leads to an increase in policy performance. They also suggest to explore in future work a more complex setting where a PSVF V (s, θ) is learned using an actor-critic\narchitecture. Our work directly introduces the PSVF V (s, θ) and PAVF Q(s, a, θ) and presents novel policy gradient theorems for PAVFs when stochastic or deterministic policies are used. There are many differences between our approach to learning V (θ) and theirs. 
For example, we do not use a fingerprint mechanism (Harb et al., 2020) for embedding the weights of complex policies. Instead, we simply parse all the policy weights as inputs to the value function, even in the nonlinear case. Fingerprinting may be important for representing nonlinear policies without losing information about their structure and for saving memory required to store the weights. Harb et al. (2020) focus on the offline setting. They first use randomly initialized policies to perform rollouts and collect reward from the environment. Then, once V (θ) is trained using the data collected, many gradient ascent steps through V yield new, unseen, randomly initialized policies in a zero-shot manner, exhibiting improved performance. They train their value function using small nonlinear policies of one hidden layer and 30 neurons on Swimmer-v3. They evaluate 2000 deterministic policies on 500 episodes each (1 million policy evaluations), achieving a final expected return of ≈ 180 on new policies trained from scratch through V. On the other hand, in our zero-shot learning experiment using a linear PSSVF, after only 100 policy evaluations, we obtain a return of 297. In our main experiments, we showed that a fingerprint mechanism is not necessary for the tasks we analyzed: even when using a much bigger 2-layers MLP policy, we are able to outperform the results in PEN. Although Harb et al. (2020) use Swimmer-v3 “to scale up their experiments”, our results suggest that Swimmer-v3 does not conclusively demonstrate possible benefits of their policy embedding.\nGradient Temporal Difference (Sutton et al., 2009a;b; Maei et al., 2009; 2010; Maei, 2011) and Emphatic Temporal Difference methods (Sutton et al., 2016) were developed to address convergence under on-policy and off-policy (Precup et al., 2001) learning with function approximation. The first attempt to obtain a stable off-policy actor-critic algorithm under linear function approximation was called Off-PAC (Degris et al., 2012), where the critic is updated using GTD(λ) (Maei, 2011) to estimate the state-value function. This algorithm converges when using tabular policies. However, in general, the actor does not follow the true gradient direction for Jb. A paper on DPG (Silver et al., 2014) extended the Off-PAC policy gradient theorem (Degris et al., 2012) to deterministic policies. This was coupled with a deep neural network to solve continuous control tasks through Deep Deterministic Policy Gradients (Lillicrap et al., 2015). Imani et al. (2018) used emphatic weights to derive an exact off-policy policy gradient theorem for Jb. Differently from Off-PAC, they do not ignore the gradient of the action-value function with respect to the policy, which is incorporated in the emphatic weighting: a vector that needs to be estimated. Our off-policy policy gradients provide an alternative approach that does not need emphatic weights.\nThe widely used off-policy objective function Jb suffers the distribution shift problem. Liu et al. (2019) provided an off-policy policy gradient theorem which is unbiased for the true RL objective J(πθ), introducing a term dπθ∞/d πb ∞ that corrects the mismatch between the states distributions. Despite their sound off-policy formulation, estimating the state weighting ratio remains challenging. All our algorithms are based on the off-policy actor-critic architecture. 
The two algorithms based on Q(s, a, θ) can be viewed as analogous to Off-PAC and DPG where the critic is defined for all policies and the actor is updated following the true gradient with respect to the critic." }, { "heading": "6 LIMITATIONS AND FUTURE WORK", "text": "We introduced PBVFs, a novel class of value functions which receive as input the parameters of a policy and can be used for off-policy learning. We showed that PBVFs are competitive to ARS and DDPG (Mania et al., 2018; Lillicrap et al., 2015) while generalizing across policies and allowing for zero-shot training in an offline setting. Despite their positive results on shallow and deep policies, PBVFs suffer the curse of dimensionality when the number of policy parameters is high. Embeddings similar to those used in PENs (Harb et al., 2020) may be useful not only for saving memory and computational time, but also for facilitating search in parameter space. We intend to evaluate the benefits of such embeddings and other dimensionality reduction techniques. We derived off-policy policy gradient theorems, showing how PBVFs follow the true gradient of the performance Jb. With these results, we plan to analyze the convergence of our algorithms using stochastic approximation techniques (Borkar, 2009) and test them on environments where traditional methods are known to diverge (Baird, 1995). Finally, we want to investigate how PBVFs applied to supervised learning tasks or POMDPs, can avoid BPTT by mapping the weights of an RNN to its loss." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Paulo Rauber, Imanol Schlag, Miroslav Strupl, Róbert Csordás, Aleksandar Stanić, Anand Gopalakrishnan, Sjoerd Van Steenkiste and Julius Kunze for their feedback. This work was supported by the ERC Advanced Grant (no: 742870). We also thank NVIDIA Corporation for donating a DGX-1 as part of the Pioneers of AI Research Award and to IBM for donating a Minsky machine." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "INDEX OF THE APPENDIX", "text": "In the following, we briefly recap the contents of the appendix.\n• Appendix A.1 contains additional related works\n• Appendix A.2 reports all proofs and derivations.\n• Appendix A.3 illustrates implementation details and pseudocode.\n• Appendix A.4 provides the hyperparameters used in the experiments and further results." }, { "heading": "A.1 ADDITIONAL RELATED WORKS", "text": "Recent work (Unterthiner et al., 2020) shows how to map the weights of a trained Convolutional Neural Network to its accuracy. Experiments show how these predictions allow for performance rankings of neural networks on new unseen tasks. These maps are either learned by taking the flattened weights as input or using simple statistics. However, these predictions do not guide the training process of CNNs.\nIn 1990, adaptive critics trained by TD were used to predict the gradients of an RNN from its activations (Schmidhuber, 1990), avoiding backpropagation through time (BPTT) (Werbos, 1990). This idea was later used to update the weights of a neural network asynchronously (Jaderberg et al., 2017). In our work, the critic is predicting errors instead of gradients. 
If applied to POMDPs, or supervised learning tasks involving long time lags between relevant events, the PSSVF could avoid BPTT by viewing the parameters of an RNN as a static object and mapping them to their loss (negative reward).\nAdditional differences between our work and Policy Evaluation Networks (PENs) (Harb et al., 2020) concern the optimization problem: we do not predict a bucket index for discretized reward, but perform a regression task. Therefore our loss is simply the mean squared error between the prediction of V (θ) and the reward obtained by πθ, while their loss (Harb et al., 2020) is the KL divergence between the predicted and target distributions. Both approaches optimize the undiscounted objective when learning V (θ)." }, { "heading": "A.2 PROOFS AND DERIVATIONS", "text": "Theorem 3.1. Let πθ be stochastic. For any Markov Decision Process, the following holds:\n∇θJ(πθ) = Es∼dπθ (s),a∼πθ(.|s) [(Q(s, a, θ)∇θ log πθ(a|s))] . (8)\nProof. The proof follows the standard approach by Sutton et al. (1999) and we report it for completeness. We start by deriving an expression for∇θV (s, θ):\n∇θV (s, θ) = ∇θ ∫ A πθ(a|s)Q(s, a, θ) da = ∫ A ∇θπθ(a|s)Q(s, a, θ) + πθ(a|s)∇θQ(s, a, θ) da\n= ∫ A ∇θπθ(a|s)Q(s, a, θ) + πθ(a|s)∇θ ( R(s, a) + γ ∫ S P (s′|s, a)V (s′, θ) ds′ ) da\n= ∫ A ∇θπθ(a|s)Q(s, a, θ) + πθ(a|s)γ ∫ S P (s′|s, a)∇θV (s′, θ) ds′ da\n= ∫ A ∇θπθ(a|s)Q(s, a, θ) + πθ(a|s)γ ∫ S P (s′|s, a)×\n× ∫ A ∇θπθ(a′|s′)Q(s′, a′, θ) + πθ(a′|s′)γ ∫ S P (s ′′ |s′, a′)∇θV (s ′′ , θ) ds ′′ da′ ds′ da\n= ∫ S ∞∑ t=0 γtP (s→ s′, t, πθ) ∫ A ∇θπθ(a|s′)Q(s′, a, θ) da ds′.\nTaking the expectation with respect to s0 ∼ µ0(s) we have: ∇θJ(θ) = ∇θ ∫ S µ0(s)V (s, θ) ds = ∫ S µ0(s)∇θV (s, θ) ds\n= ∫ S µ0(s) ∫ S ∞∑ t=0 γtP (s→ s′, t, πθ) ∫ A ∇θπθ(a|s)Q(s, a, θ) ds′ da ds\n= ∫ S dπθ (s) ∫ A ∇θπθ(a|s)Q(s, a, θ) da ds = Es∼dπθ (s),a∼πθ(.|s) [(Q(s, a, θ)∇θ log πθ(a|s))] .\nTheorem 3.2. Let πθ be deterministic. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:\n∇θJ(πθ) = Es∼dπθ (s) [ ∇aQ(s, a, θ)|a=πθ(s)∇θπθ(s) ] . (9)\nProof. The proof follows the standard approach by Silver et al. (2014) and we report it for completeness. We start by deriving an expression for∇θV (s, θ):\n∇θV (s, θ) = ∇θQ(s, πθ(s), θ) = ∇θ ( R(s, πθ(s)) + γ ∫ S P (s′|s, πθ(s))V (s′, θ) ds′ ) = ∇θπθ(s)∇aR(s, a)|a=πθ(s)+\n+ γ ∫ S P (s′|s, πθ(s))∇θV (s′, θ) +∇θπθ(s)∇aP (s′|s, a)|a=πθ(s) ds′\n= ∇θπθ(s)∇a ( R(s, a) + γ ∫ S P (s′|s, a)V (s′, θ) ds′ ) |a=πθ(s)+\n+ γ ∫ S P (s′|s, πθ(s))∇θV (s′, θ) ds′\n= ∇θπθ(s)∇aQ(s, a, θ)|a=πθ(s) + γ ∫ S P (s′|s, πθ(s))∇θV (s′, θ) ds′ = ∇θπθ(s)∇aQ(s, a, θ)|a=πθ(s)+\n+ γ ∫ S P (s′|s, πθ(s))∇θπθ(s′)∇aQ(s′, a, θ)|a=πθ(s′) ds′+\n+ γ ∫ S P (s′|s, πθ(s))γ ∫ S P (s ′′ |s′, πθ(s′))∇θV (s ′′ , θ) ds ′′ ds′\n= ∫ S ∞∑ t=0 γtP (s→ s′, t, πθ)∇θπθ(s′)∇aQ(s′, a, θ)|a=πθ(s′) ds′\nTaking the expectation with respect to s0 ∼ µ0(s) we have: ∇θJ(θ) = ∇θ ∫ S µ0(s)V (s, θ) ds = ∫ S µ0(s)∇θV (s, θ) ds\n= ∫ S µ0(s) ∫ S ∞∑ t=0 γtP (s→ s′, t, πθ)∇θπθ(s′)∇aQ(s′, a, θ)|a=πθ(s′) ds′ ds\n= ∫ S dπθ (s)∇θπθ(s)∇aQ(s, a, θ)|a=πθ(s) ds\n= Es∼dπθ (s) [ ∇θπθ(s)∇aQ(s, a, θ)|a=πθ(s) ]\nTheorem 3.3. For any Markov Decision Process, the following holds: ∇θJb(πθ) = Es∼dπb∞ (s),a∼πb(.|s) [ πθ(a|s) πb(a|s) (Q(s, a, θ)∇θ log πθ(a|s) +∇θQ(s, a, θ)) ] . (14)\nProof. 
∇θJb(πθ) = ∇θ ∫ S dπb∞(s)V (s, θ) ds (17)\n= ∇θ ∫ S dπb∞(s) ∫ A πθ(a|s)Q(s, a, θ) da ds (18)\n= ∫ S dπb∞(s) ∫ A [Q(s, a, θ)∇θπθ(a|s) + πθ(a|s)∇θQ(s, a, θ)] dads (19)\n= ∫ S dπb∞(s) ∫ A πb(a|s) πb(a|s) πθ(a|s)[Q(s, a, θ)∇θ log πθ(a|s) +∇θQ(s, a, θ)] dads (20)\n= Es∼dπb∞ (s),a∼πb(.|s) [ πθ(a|s) πb(a|s) (Q(s, a, θ)∇θ log πθ(a|s) +∇θQ(s, a, θ)) ]\n(21)\nTheorem 3.4. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:\n∇θJb(πθ) = Es∼dπb∞ (s) [ ∇aQ(s, a, θ)|a=πθ(s)∇θπθ(s) +∇θQ(s, a, θ)|a=πθ(s) ] . (16)\nProof. ∇θJb(πθ) = ∫ S dπb∞(s)∇θQ(s, πθ(s), θ) ds (22)\n= ∫ S dπb∞(s) [ ∇aQ(s, a, θ)|a=πθ(s)∇θπθ(s) +∇θQ(s, a, θ)|a=πθ(s) ] ds (23)\n= Es∼dπb∞ (s) [ ∇aQ(s, a, θ)|a=πθ(s)∇θπθ(s) +∇θQ(s, a, θ)|a=πθ(s) ] (24)\nA.3 IMPLEMENTATION DETAILS\nA.3.1\nIn this appendix, we report the implementation details for PSSVF, PSVF, PAVF and the baselines. We specify for each hyperparameter, which algorithms and tasks are sharing them.\nShared hyperparameters:\n• Deterministic policy architecture (continuous control tasks): We use three different deterministic policies: a linear mapping between states and actions; a single-layer MLP with 32 neurons and tanh activation; a 2-layers MLP (64,64) with tanh activations. All policies contain a bias term and are followed by a tanh nonlinearity in order to bound the action. • Deterministic policy architecture (discrete control tasks): We use three different determin-\nistic policies: a linear mapping between states and a probability distribution over actions; a single-layer MLP with 32 neurons and tanh activation; a 2-layers MLP (64,64) with tanh activations. The deterministic action a is obtained choosing a = arg maxπθ(a|s). All policies contain a bias term. • Stochastic policy architecture (continuous control tasks): We use three different stochastic\npolicies: a linear mapping; a single-layer MLP with 32 neurons and tanh activation; a 2- layers MLP (64,64) with tanh activations all mapping from states to the mean of a Normal distribution. The variance is state-independent and parametrized as e2Ω with diagonal Ω. All policies contain a bias term. Actions sampled are given as input to a tanh nonlinearity in order to bound them in the action space. • Stochastic policy architecture (discrete control tasks): We use three different deterministic\npolicies: a linear mapping between states and a probability distribution over actions; a single-layer MLP with 32 neurons and tanh activation; a 2-layers MLP (64,64) with tanh activations. All policies contain a bias term. • Policy initialization: all weights and biases are initialized using the default Pytorch initial-\nization for PBVFs and DDPG and are set to zero for ARS. • Critic architecture: 2-layers MLP (512,512) with bias and ReLU activation functions for\nPSVF, PAVF; 2-layers MLP (256,256) with bias and ReLU activation functions for DDPG. • Critic initialization: all weights and biases are initialized using the default Pytorch initial-\nization for PBVFs and DDPG. • Batch size: 128 for DDPG, PSVF, PAVF; 16 for PSSVF. • Actor’s frequency of updates: every episode for PSSVF; every batch of episodes for ARS;\nevery 50 time steps for DDPG, PSVF, PAVF. • Critic’s frequency of updates: every episode for PSSVF; every 50 time steps for DDPG,\nPSVF, PAVF. • Replay buffer: the size is 100k; data are sampled uniformly. 
• Optimizer: Adam for PBVFs and DDPG.\nTuned hyperparameters:\n• Number of directions and elite directions for ARS ([directions, elite directions]): tuned with values in [[1, 1], [4, 1], [4, 4], [16, 1], [16, 4], [16, 16]]. • Policy’s learning rate: tuned with values in [1e− 2, 1e− 3, 1e− 4]. • Critic’s learning rate: tuned with values in [1e− 2, 1e− 3, 1e− 4]. • Noise for exploration: the perturbation for the action (DDPG) or the parameter is sampled\nfrom N (0, σI) with σ tuned with values in [1, 1e − 1] for PSSVF, PSVF, PAVF; [1e − 1, 1e− 2] for DDPG; [1, 1e− 1, 1e− 2, 1e− 3] for ARS. For stochastic PSSVF and PSVF we include also the value σ = 0, although it almost never results optimal.\nEnvironment hyperparameters:\n• Environment interactions: 1M time steps for Swimmer-v3 and Hopper-v3; 100k time steps for all other environments.\n• Discount factor for TD algorithms: 0.999 for Swimmer; 0.99 for all other environments. • Survival reward in Hopper: True for DDPG, PSVF, PAVF; False for ARS, PSSVF.\nAlgorithm-specific hyperparameters:\n• Critic’s number of updates: 50 for DDPG, 5 for PSVF and PAVF; 10 for PSSVF.\n• Actor’s number of updates: 50 for DDPG, 1 for PSVF and PAVF; 10 for PSSVF. • Observation normalization: False for DDPG; True for all other algorithms. • Starting steps in DDPG (random actions and no training): first 1%. • Polyak parameter in DDPG: 0.995.\nPAVF ∇θQ(s, a, θ) ablation We investigate the effect of the term ∇θQ(s, a, θ) in the off-policy policy gradient theorem for deterministic PAVF. We follow the same methodology as in our main experiments to find the optimal hyperparameters when updating using the now biased gradient:\n∇θJb(πθ) ≈ Es∼dπb∞ (s) [ ∇aQ(s, a, θ)|a=πθ(s)∇θπθ(s) ] , (25)\nwhich corresponds to the gradient that DDPG is following. Figure 5 reports the results for Hopper and Swimmer using shallow and deep policies. We observe a significant drop in performance in Swimmer when removing part of the gradient. In Hopper the loss of performance is less significant, possibly because both algorithms tend to converge to the same sub-optimal behavior.\nARS For ARS, we used the official implementation provided by the authors and we modified it in order to use nonlinear policies. More precisely, we used the implementation of ARSv2-t (Mania et al., 2018), which uses observation normalization, elite directions and an adaptive learning rate based on the standard deviation of the return collected. To avoid divisions by zero, which may happen if all data sampled have the same return, we perform the standardization only in case the standard deviation is not zero. In the original implementation of ARS (Mania et al., 2018), the survival bonus for the reward in the Hopper environment is removed to avoid local minima. Since we wanted our PSSVF to be close to their setting, we also applied this modification. We did not remove the survival bonus from all TD algorithms and we did not investigate how this could affect their performance. We provide a comparison of the performance of PSSVF with and without the bonus in figure 6 using deterministic policies.\nDDPG For DDPG, we used the Spinning Up implementation provided by OpenAI (Achiam, 2018), which includes target networks for the actor and the critic and no learning for a fixed set of time steps, called starting steps. We did not include target networks and starting steps in our PBVFs, although they could potentially help stabilizing training. 
The implementation of DDPG that we used (Achiam, 2018) does not use observation normalization. In preliminary experiments, we observed that it neither significantly increased nor decreased performance, hence we did not use it. Another difference between our TD algorithms and DDPG lies in the number of updates of the actor and the critic. Since DDPG's critic needs to keep track of the current policy, the critic and the actor are updated in a nested form, each update depending on the other. Our PSVF and PAVF do not need to track the learned policy; hence, when it is time to update, we only need to train the critic for many gradient steps and then train the actor for many gradient steps. This requires less compute. On the other hand, when using nonlinear policies, our PBVFs suffer from the curse of dimensionality. For this reason, we profited from using a bigger critic. In preliminary experiments, we observed that DDPG's performance did not change significantly with a bigger critic. We show differences in performance for our methods when removing observation normalization and when using a smaller critic (MLP(256, 256)) in figure 7. We observe that performance decreases if observation normalization is removed, although only for shallow policies in Swimmer and deep policies in Hopper does there seem to be a significant benefit. Future work will assess when bigger critics help.

Discounting in Swimmer. For TD algorithms, we chose a fixed discount factor γ = 0.99 for all environments but Swimmer-v3. This environment is known to be challenging for TD-based algorithms, because discounting causes the agent to become too short-sighted. We observed that, with the standard discounting, DDPG, PSVF and PAVF were not able to learn the task, while making the algorithms more far-sighted greatly improved their performance. In figure 8 we report the return obtained by DDPG, PSVF and PAVF for different values of the discount factor in Swimmer when using deterministic policies."
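As a complement to the pseudocode in the next subsection, the following sketch shows how the deterministic PAVF actor update (Theorem 3.4 / Algorithm 4) can be obtained from a single autograd pass: when the action $a = \pi_\theta(s)$ is computed as a differentiable function of the flat vector $\theta$, and the same $\theta$ is also a direct input of $Q$, backpropagation delivers both terms $\nabla_a Q \cdot \nabla_\theta \pi$ and $\nabla_\theta Q$ at once. The shallow linear policy, function names, and shapes below are illustrative assumptions, not the authors' code.

```python
# A sketch of the deterministic PAVF actor update. Both gradient terms of
# Theorem 3.4 fall out of one backward pass because theta enters Q twice:
# through the action a = pi_theta(s) and as a direct critic input.
import torch

def functional_policy(theta: torch.Tensor, s: torch.Tensor,
                      obs_dim: int, act_dim: int) -> torch.Tensor:
    # Shallow linear policy with bias, followed by tanh (as in the experiments).
    W = theta[: obs_dim * act_dim].view(act_dim, obs_dim)
    b = theta[obs_dim * act_dim: obs_dim * act_dim + act_dim]
    return torch.tanh(s @ W.T + b)

def pavf_actor_loss(q_critic, theta, states, obs_dim, act_dim):
    # a depends on theta, and theta is also fed to Q, so backpropagating
    # through q_critic yields the full off-policy gradient of J_b.
    actions = functional_policy(theta, states, obs_dim, act_dim)
    thetas = theta.unsqueeze(0).expand(states.shape[0], -1)
    return -q_critic(states, actions, thetas).mean()

# Usage sketch: theta = flat policy params with requires_grad=True;
# q_critic = PAVF(obs_dim, act_dim, theta.numel()) as sketched in Section 3.
# loss = pavf_actor_loss(q_critic, theta, batch_states, obs_dim, act_dim)
# loss.backward(); optimizer.step()
```

This is the practical difference from DDPG's actor update, which feeds only (s, a) to the critic and therefore cannot produce the $\nabla_\theta Q$ term.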
}, { "heading": "A.3.2 PSEUDOCODE", "text": "Algorithm 2 Actor-critic with TD prediction for V (s, θ) Input: Differentiable critic Vw : S × Θ → R with parameters w; deterministic or stochastic actor πθ with parameters θ; empty replay buffer D Output : Learned Vw ≈ V (s, θ), learned πθ ≈ πθ∗\nInitialize critic and actor weights w, θ repeat:\nObserve state s, take action a = πθ(s), observe reward r and next state s′ Store (s, θ, r, s′) in the replay buffer D if it’s time to update then:\nfor many steps do: Sample a batch B1 = {(s, θ̃, r, s′)} from D Update critic by stochastic gradient descent: ∇w 1|B1| E(s,θ̃,r,s′)∈B1 [Vw(s, θ̃)− (r + γVw(s ′, θ̃))]2 end for for many steps do:\nSample a batch B2 = {(s)} from D Update actor by stochastic gradient ascent: ∇θ 1|B2| Es∈B2 [Vw(s, θ)]\nend for end if\nuntil convergence\nAlgorithm 3 Stochastic actor-critic with TD prediction for Q(s, a, θ) Input: Differentiable critic Qw : S ×A×Θ→ R with parameters w; stochastic differentiable actor πθ with parameters θ; empty replay buffer D Output : Learned Qw ≈ Q(s, a, θ), learned πθ ≈ πθ∗\nInitialize critic and actor weights w, θ repeat:\nObserve state s, take action a = πθ(s), observe reward r and next state s′ Store (s, a, θ, r, s′) in the replay buffer D if it’s time to update then:\nfor many steps do: Sample a batch B1 = {(s, a, θ̃, r, s′)} from D Update critic by stochastic gradient descent: ∇w 1|B1| E(s,a,θ̃,r,s′)∈B1 [Qw(s, a, θ̃)− (r + γQw(s\n′, a′ ∼ πθ̃(s′), θ̃))]2 end for for many steps do:\nSample a batch B2 = {(s, a, θ̃)} from D Update actor by stochastic gradient ascent:\n1 |B2| E(s,a,θ̃)∈B2 [ πθ(a|s) πθ̃(a|s) (Q(s, a, θ)∇θ log πθ(a|s) +∇θQ(s, a, θ)) ]\nend for end if\nuntil convergence\nAlgorithm 4 Deterministic actor-critic with TD prediction for Q(s, a, θ) Input: Differentiable critic Qw : S ×A×Θ→ R with parameters w; differentiable deterministic actor πθ with parameters θ; empty replay buffer D Output : Learned Qw ≈ Q(s, a, θ), learned πθ ≈ πθ∗\nInitialize critic and actor weights w, θ repeat:\nObserve state s, take action a = πθ(s), observe reward r and next state s′ Store (s, a, θ, r, s′) in the replay buffer D if it’s time to update then:\nfor many steps do: Sample a batch B1 = {(s, a, θ̃, r, s′)} from D Update critic by stochastic gradient descent: ∇w 1|B1| E(s,a,θ̃,r,s′)∈B1 [Qw(s, a, θ̃)− (r + γQw(s ′, πθ̃(s ′), θ̃))]2 end for for many steps do:\nSample a batch B2 = {(s)} from D Update actor by stochastic gradient ascent:\n1 |B2| Es∈B2 [∇θπθ(s)∇aQw(s, a, θ)|a=πθ(s) +∇θQw(s, a, θ)|a=πθ(s)]\nend for end if\nuntil convergence" }, { "heading": "A.4 EXPERIMENTAL DETAILS", "text": "A.4.1 LQR\nFor our visualization experiment, we employ an instance of the Linear Quadratic Regulator. Here, the agent observes a 1-D state, corresponding to its position and chooses a 1-D action. The transitions are s′ = s + a and there is a quadratic negative term for the reward: R(s, a) = −s2 − a2. The agent starts in state s0 = 1 and acts in the environment for 50 time steps. The state space is bounded in [-2,2]. The goal of the agent is to reach and remain in the origin. The agent is expected to perform small steps towards the origin when it uses the optimal policy. For this task, we use a\ndeterministic policy without tanh nonlinearity and we do not use observation normalization. Below additional details and plots for different algorithms.\nPSSVF We use a learning rate of 1e − 3 for the policy and 1e − 2 for the PSSVF. Weights are perturbed every episode using σ = 0.5. 
The policy is initialized with weight 3.2 and bias −3.5. All the other hyperparameters are set to their default. The true episodic J(θ) is computed by running 10,000 policies in the environment with parameters in [−5, 5] × [−5, 5]. Vw(θ) is computed by measuring the output of the PSSVF on the same set of policies. Each red arrow in figure 1 represents 200 update steps of the policy.\nPSVF ad PAVF Using the exact same setting, we run PSVF and PAVF in LQR environment and we compare learned V (s0, θ) andQ(s0, πθ(s0), θ) with the true PSVF and PAVF over the parameter space. Computing the value of the true PSVF and PAVF requires computing the infinite sum of discounted reward obtained by the policy. Here we approximate it by running 10,000 policies in the environment with parameters in [−5,−5] × [−5, 5] for 500 time steps. This, setting γ = 0.99, provides a good approximation of their true values, since further steps in the environment result in almost zero discounted reward from s0. We use a learning rate of 1e − 2 for the policy and 1e − 1 for the PSVF and PAVF. Weights are perturbed every episode using σ = 0.5. The policy is updated every 10 time steps using 2 gradient steps; the PSVF and PAVF are updated every 10 time steps using 10 gradient updates. The critic is a 1-layer MLP with 64 neurons and tanh nonlinearity.\nIn Figures 9 and 10 we report J(θ), the cumulative discounted reward that an agent would obtain by acting in the environment for infinite time steps using policy πθ and the cumulative return predicted by the PSVF and PAVF for two different times during learning. Like in the PSSVF experiment, the critic is able improve its predictions over the parameter space. Since in the plots V (s, θ) and Q(s, πθ(s), θ) are evaluated only in s0, the results show that PBVFs are able to effectively bootstrap the values of future states. Each red arrow in Figures 9 and 10 represents 50 update steps of the policy." }, { "heading": "A.4.2 OFFLINE EXPERIMENTS", "text": "Zero-shot learning We evaluate the performance of the policies learned from scratch evaluating them with 5 test trajectories every 5 gradient steps. In addition to the results in the main paper, we report in Figures 11 and 12 a comparison of zero-shot performance between PSSVF, PSVF and PAVF in three different environments using deterministic shallow and deep policies (2-layers MLP(64,64)). In this task we use the same hyperparameters found in tables 4, 6 and 8. One additional hyperparameter needs to be considered: the learning rate of the policies trained from scratch. In Figure 3 of the main paper, we use a tuned learning rate of 0.02 that we found working particularly well for PSSVF in the Swimmer environment. In the additional experiments in Figures 11 and 12, we use a learning rate of 0.05 that we found working well across all policies, environments and algorithms when learning zero-shot.\nWe observe that, using shallow policies, PBVFs can effectively zero-shot learn policies with performance comparable to the policy learned in the environment without additional tuning for the learning rate. We note the regular presence of a spike in performance followed by a decline due to the policy going to regions of the parameter space never observed. This suggests that there is a trade-off between exploiting the generalization of the critic and remaining in the part of the parameter space where the critic is accurate. 
Measuring the width of these spikes can be useful for determining the number of offline gradient steps to perform in the general algorithm. When using deep policies the results become much worse, and zero-shot learned policies can recover the performance of the main policy being learned only in simple environments and at the beginning of training (e.g., MountainCarContinuous). We observe that, when the critic is trained (last column), the replay buffer contains policies that are very distant from randomly initialized policies. This might explain why the zero-shot performance is sometimes better at the beginning of training (e.g., second column). However, since PBVFs in practice perform mostly local off-policy evaluation around the learned policy, this problem is less likely to arise in our main experiments.

Offline learning with fragmented behaviors In this task, data are generated by perturbing a randomly initialized deterministic policy every 200 time steps and using it to act in the environment.

We use σ = 0.5 for the perturbations. After the dataset is collected, the PSVF is trained using a learning rate of 1e−3 with a batch size of 128. When the policy is learned, we use a learning rate of 0.02. All other hyperparameters are set to default values." }, { "heading": "A.4.3 FULL EXPERIMENTAL RESULTS", "text": "Methodology In order to ensure a fair comparison of our methods and the baselines, we adopt the following procedure. For each hyperparameter configuration, for each environment and policy architecture, we run 5 instances of the learning algorithm using different seeds. We measure the learning progress by running 100 evaluations while learning the deterministic policy (without action or parameter noise) using 10 test trajectories. We use two metrics to determine the best hyperparameters: the average return over policy evaluations during the whole training process, and the average return over policy evaluations during the last 20% of time steps. For each algorithm, environment, and policy architecture, we choose the two hyperparameter configurations maximizing these two metrics and test them on 20 new seeds, reporting average and final performance in Tables 2 and 3 respectively.

Figures 13 and 14 report all the learning curves from the main paper, as well as those for a small nonlinear policy with 32 hidden neurons.

Stochastic policies We include some results for stochastic policies when using the PSSVF and PSVF. Figures 15 and 16 show a comparison with the baselines when using shallow and deep policies respectively. We observe results that are sometimes comparable to, but often inferior to, those of deterministic policies. In particular, when using shallow policies, PBVFs are able to outperform the baselines in the MountainCar environment, while obtaining comparable performance in CartPole and InvertedPendulum. As in previous experiments, PBVFs fail to learn a good policy in Reacher. When using deep policies, the results are slightly different: PBVFs outperform ARS and DDPG in Swimmer, but fail to learn InvertedPendulum. Although the use of stochastic policies can help smooth the objective function and allows the agent to explore in action space, we believe that the lower variance provided by deterministic policies can facilitate learning PBVFs." }, { "heading": "A.4.4 SENSITIVITY ANALYSIS", "text": "In the following, we report the sensitivity plots for all algorithms, for all deterministic policy architectures and environments.
In particular, Figures 17, 18, 19, 20, and 21 show the performance of each algorithm given the different hyperparameters tried during training. We observe that, in general, deep policies are more sensitive and, apart from DDPG, often achieve better performance than smaller policies. The higher sensitivity displayed by ARS is in part caused by the higher number of hyperparameters we tried when tuning the algorithm." }, { "heading": "A.4.5 TABLE OF BEST HYPERPARAMETERS", "text": "For each algorithm, environment, and policy architecture, we report the best hyperparameters found when optimizing for average return or final return in Tables 4, 5, 6, 7, 8, and 9." } ]
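To make Algorithm 2 above concrete, here is a minimal PyTorch sketch of a PSVF-style actor-critic step: a critic Vw(s, θ) over the state concatenated with the flattened policy parameters, fit to the TD target r + γVw(s′, θ̃), and an actor updated by gradient ascent on Vw(s, θ). This is an illustrative sketch, not the authors' implementation; the dimensions, learning rates, and names (critic, critic_update, actor_update) are assumptions.

```python
# Minimal PyTorch sketch of Algorithm 2 (actor-critic with TD prediction for V(s, theta)).
# Illustrative only: dimensions, learning rates, and all names are assumptions.
import torch
import torch.nn as nn

state_dim, n_policy_params, gamma = 1, 2, 0.99

# Critic V_w(s, theta): an MLP over the state concatenated with the flat policy parameters.
critic = nn.Sequential(nn.Linear(state_dim + n_policy_params, 64), nn.Tanh(), nn.Linear(64, 1))
# A linear deterministic policy pi_theta(s) = w * s + b, kept as a flat parameter vector [w, b].
theta = torch.zeros(n_policy_params, requires_grad=True)

critic_opt = torch.optim.SGD(critic.parameters(), lr=1e-1)
actor_opt = torch.optim.SGD([theta], lr=1e-2)

def critic_update(s, th, r, s_next):
    """One TD step: fit V_w(s, theta_tilde) to r + gamma * V_w(s', theta_tilde)."""
    v = critic(torch.cat([s, th], dim=-1)).squeeze(-1)
    with torch.no_grad():
        target = r + gamma * critic(torch.cat([s_next, th], dim=-1)).squeeze(-1)
    loss = ((v - target) ** 2).mean()
    critic_opt.zero_grad(); loss.backward(); critic_opt.step()

def actor_update(s):
    """Gradient ascent on V_w(s, theta) with respect to the policy parameters theta."""
    th = theta.unsqueeze(0).expand(s.shape[0], -1)
    value = critic(torch.cat([s, th], dim=-1)).mean()
    actor_opt.zero_grad(); (-value).backward(); actor_opt.step()

# Example usage on a random replay batch of 32 transitions:
s, s_next = torch.randn(32, state_dim), torch.randn(32, state_dim)
th, r = torch.randn(32, n_policy_params), torch.randn(32)
critic_update(s, th, r, s_next)
actor_update(s)
```

Note that the actor update differentiates the critic's output with respect to θ while leaving the critic weights w untouched, which is exactly the role split in the pseudocode above.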
2021
PARAMETER-BASED VALUE FUNCTIONS
SP:01597fbcad0e467ed94efcdfde93a565cb3a763e
[ "This paper proposes a generative model to synthesize videos using VQ-VAEs. The scheme works in latent space by using embeddings for video sequences learnt by the VQ-VAE. For inference, an autoregressive transformer prior for video sequences is learnt, which upon sampling from and sending to the VQ-VAE decoder, generates unconditional (or conditional) samples of video. To learn video embeddings, the paper uses a 3D convolutional network, with an extra dimension for time. " ]
We present VideoGen: a conceptually simple architecture for scaling likelihood-based generative modeling to natural videos. VideoGen uses a VQ-VAE that learns downsampled discrete latent representations of a video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to autoregressively model the discrete latents using spatio-temporal position encodings. Despite the simplicity in formulation, ease of training, and a light compute requirement, our architecture is able to generate samples competitive with state-of-the-art GAN models for video generation on the BAIR Robot dataset, and generate coherent action-conditioned samples based on experiences gathered from the ViZDoom simulator. We hope our proposed architecture serves as a reproducible reference for a minimalistic implementation of transformer-based video generation models without requiring industry-scale compute resources. Samples are available at https://sites.google.com/view/videogen.
[]
[ { "authors": [ "Dinesh Acharya", "Zhiwu Huang", "Danda Pani Paudel", "Luc Van Gool" ], "title": "Towards high resolution video generation with progressive growing of sliced wasserstein gans", "venue": "arXiv preprint arXiv:1810.02419,", "year": 2018 }, { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "arXiv preprint arXiv:1710.11252,", "year": 2017 }, { "authors": [ "Mikołaj Bińkowski", "Jeff Donahue", "Sander Dieleman", "Aidan Clark", "Erich Elsen", "Norman Casagrande", "Luis C Cobo", "Karen Simonyan" ], "title": "High fidelity speech synthesis with adversarial networks", "venue": null, "year": 1909 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Aidan Clark", "Jeff Donahue", "Karen Simonyan" ], "title": "Adversarial video generation on complex datasets, 2019", "venue": null, "year": 2019 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "arXiv preprint arXiv:1802.07687,", "year": 2018 }, { "authors": [ "Emily L Denton" ], "title": "Unsupervised learning of disentangled representations from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Prafulla Dhariwal", "Heewoo Jun", "Christine Payne", "Jong Wook Kim", "Alec Radford", "Ilya Sutskever" ], "title": "Jukebox: A generative model for music", "venue": "arXiv preprint arXiv:2005.00341,", "year": 2020 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using Real NVP", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Alex X Lee", "Sergey Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "arXiv preprint arXiv:1710.05268,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", 
"year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": "arXiv preprint arXiv:1902.00275,", "year": 2019 }, { "authors": [ "Jonathan Ho", "Nal Kalchbrenner", "Dirk Weissenborn", "Tim Salimans" ], "title": "Axial attention in multidimensional transformers", "venue": "arXiv preprint arXiv:1912.12180,", "year": 2019 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "arXiv preprint arXiv:2006.11239,", "year": 2020 }, { "authors": [ "Norman P Jouppi", "Cliff Young", "Nishant Patil", "David Patterson", "Gaurav Agrawal", "Raminder Bajwa", "Sarah Bates", "Suresh Bhatia", "Nan Boden", "Al Borchers" ], "title": "In-datacenter performance analysis of a tensor processing unit", "venue": "In Proceedings of the 44th Annual International Symposium on Computer Architecture,", "year": 2017 }, { "authors": [ "Emmanuel Kahembwe", "Subramanian Ramamoorthy" ], "title": "Lower dimensional kernels for video discriminators", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "Nal Kalchbrenner", "Aäron Oord", "Karen Simonyan", "Ivo Danihelka", "Oriol Vinyals", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Video pixel networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "arXiv preprint arXiv:1807.03039,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "Proceedings of the 2nd International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Diederik P. 
Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improving variational inference with inverse autoregressive flow", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Didier Le Gall" ], "title": "Mpeg: A video compression standard for multimedia applications", "venue": "Communications of the ACM,", "year": 1991 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Pauline Luc", "Natalia Neverova", "Camille Couprie", "Jakob Verbeek", "Yann LeCun" ], "title": "Predicting deeper into the future of semantic segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Pauline Luc", "Aidan Clark", "Sander Dieleman", "Diego de Las Casas", "Yotam Doron", "Albin Cassirer", "Karen Simonyan" ], "title": "Transformation-based adversarial video prediction on large-scale data", "venue": null, "year": 2003 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Igor Babuschkin", "Karen Simonyan", "Oriol Vinyals", "Koray Kavukcuoglu", "George van den Driessche", "Edward Lockhart", "Luis C Cobo", "Florian Stimberg" ], "title": "Parallel wavenet: Fast high-fidelity speech synthesis", "venue": "arXiv preprint arXiv:1711.10433,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Ruslan Rakhimov", "Denis Volkhonskiy", "Alexey Artemov", "Denis Zorin", "Evgeny Burnaev" ], "title": "Latent video transformer", "venue": "arXiv preprint arXiv:2006.10704,", "year": 2020 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Masaki Saito", "Shunta Saito" ], "title": "Tganv2: Efficient training of large models for video generation with multiple subsampling layers", "venue": "arXiv preprint arXiv:1811.09245,", "year": 2018 }, { "authors": [ "Masaki Saito", "Eiichi Matsumoto", "Shunta Saito" ], "title": "Temporal generative adversarial nets with singular value clipping", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Jascha 
Sohl-Dickstein", "Eric A Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "venue": "arXiv preprint arXiv:1503.03585,", "year": 2015 }, { "authors": [ "Casper Kaae Sønderby", "Lasse Espeholt", "Jonathan Heek", "Mostafa Dehghani", "Avital Oliver", "Tim Salimans", "Shreya Agrawal", "Jason Hickey", "Nal Kalchbrenner" ], "title": "Metnet: A neural weather model for precipitation forecasting", "venue": null, "year": 2003 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Sergey Tulyakov", "Ming-Yu Liu", "Xiaodong Yang", "Jan Kautz" ], "title": "Mocogan: Decomposing motion and content for video generation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Arash Vahdat", "Jan Kautz" ], "title": "Nvae: A deep hierarchical variational autoencoder", "venue": "arXiv preprint arXiv:2007.03898,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Oriol Vinyals", "Lasse Espeholt", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "arXiv preprint arXiv:1606.05328,", "year": 2016 }, { "authors": [ "Aaron Van Den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "arXiv preprint arXiv:1706.03762,", "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Gregory K Wallace" ], "title": "The jpeg still picture compression standard", "venue": "IEEE transactions on consumer electronics,", "year": 1992 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Guilin Liu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Video-to-video synthesis", "venue": "arXiv preprint arXiv:1808.06601,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural 
networks", "venue": "arXiv preprint arXiv:1711.07971,", "year": 2017 }, { "authors": [ "Dirk Weissenborn", "Oscar Täckström", "Jakob Uszkoreit" ], "title": "Scaling autoregressive video models", "venue": "arXiv preprint arXiv:1906.02634,", "year": 2019 }, { "authors": [ "Vladyslav Yushchenko", "Nikita Araslanov", "Stefan Roth" ], "title": "Markov decision process for video generation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models of multiple types (Goodfellow et al., 2014; van den Oord et al., 2016b; Dinh et al., 2016) have seen incredible progress in the last few years on multiple modalities including natural images (van den Oord et al., 2016c; Zhang et al., 2019; Brock et al., 2018; Kingma & Dhariwal, 2018; Ho et al., 2019a; Karras et al., 2017; 2019; Van Den Oord et al., 2017; Razavi et al., 2019; Vahdat & Kautz, 2020; Ho et al., 2020; Chen et al., 2020), audio waveforms conditioned on language features (van den Oord et al., 2016a; Oord et al., 2017; Bińkowski et al., 2019), natural language in the form of text (Radford et al., 2019; Brown et al., 2020), and music generation (Dhariwal et al., 2020). These results have been made possible thanks to fundamental advances in deep learning architectures (He et al., 2015; van den Oord et al., 2016b;c; Vaswani et al., 2017; Zhang et al., 2019; Menick & Kalchbrenner, 2018) as well as the availability of compute resources (Jouppi et al., 2017; Amodei & Hernandez, 2018) that are more powerful than a few years ago. However, one notable modality that has not seen the same level of progress in generative modeling is high fidelity natural videos. The complexity of natural videos requires modeling correlations across both space and time with much higher input dimensions, thereby presenting a natural next challenge for current deep generative models. The complexity of the problem also demands more compute resources which can be considered as one important reason for the slow progress in generative modeling of videos.\nIt is useful to build generative models of videos, both conditional and unconditional, as it implicitly solves the problem of video prediction and forecasting. Video prediction (Kalchbrenner et al., 2017;\nSønderby et al., 2020) can be seen as learning a generative model of future frames conditioned on the past frames. Architectures developed for video generation can be useful in forecasting applications for autonomous driving, such as predicting the future in more semantic and dense abstractions like segmentation masks (Luc et al., 2017). Finally, building generative models of the world around us is considered as one way to measure understanding of physical common sense (Lake et al., 2015).\nMultiple classes of generative models have been shown to produce strikingly good samples such as autoregressive models (van den Oord et al., 2016b;c; Menick & Kalchbrenner, 2018; Radford et al., 2019; Chen et al., 2020), generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015), variational autoencoders (VAEs) (Kingma & Welling, 2013; Kingma et al., 2016; Vahdat & Kautz, 2020), Flows (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), vector quantized VAE (VQ-VAE) (Van Den Oord et al., 2017; Razavi et al., 2019), and lately diffusion and score matching models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020). These different generative model families have their tradeoffs: sampling speed, sample diversity, sample quality, ease of training, compute requirements, and ease of evaluation.\nTo build a generative model for videos, we first make a choice between likelihood-based and adversarial models. Likelihood-based models are convenient to train since the objective is well understood, easy to optimize across a range of batch sizes, and easy to evaluate. 
Given that videos already present a hard modeling challenge due to the nature of the data, we believe likelihood-based models present fewer difficulties in optimization and evaluation, hence allowing us to focus on the architecture modeling. Among likelihood-based models, autoregressive models that work on discrete data in particular have shown great success and have well-established training recipes and modeling architectures.

Second, we consider the following question: Is it better to perform autoregressive modeling in a downsampled latent space without spatio-temporal redundancies, compared to modeling at the atomic level of all pixels across space and time? Below, we present our reasons for choosing the former. Natural images and videos contain a lot of spatial and temporal redundancy, which is the reason we use image compression tools such as JPEG (Wallace, 1992) and video codecs such as MPEG (Le Gall, 1991) every day. These redundancies can be removed by learning a denoised downsampled encoding of the high-resolution inputs. For example, 4x downsampling across the spatial and temporal dimensions results in a 64x downsampled resolution, so that the computation of powerful deep generative models is spent on these fewer, more useful bits. As shown in VQ-VAE (Van Den Oord et al., 2017), even a lossy decoder can transform the latents to generate sufficiently realistic samples. Furthermore, modeling in the latent space downsampled across space and time instead of the pixel space improves sampling speed and compute requirements due to reduced dimensionality.¹

The above line of reasoning leads us to our proposed model: VideoGen, a simple video generation architecture that is a minimal adaptation of VQ-VAE and GPT architectures for videos. VideoGen employs 3D convolutions and transposed convolutions (Tran et al., 2015) along with axial attention (Clark et al., 2019; Ho et al., 2019b) for the autoencoder in VQ-VAE in order to be able to learn a downsampled set of discrete latents. These latents are then autoregressively generated by a GPT-like (Radford et al., 2019; Child et al., 2019; Chen et al., 2020) architecture. The latents are then decoded to videos of the original resolution using the decoder of the VQ-VAE.

Our results are highlighted below:

1. On the widely benchmarked BAIR Robot Pushing dataset (Ebert et al., 2017), VideoGen can generate realistic samples that are competitive with existing methods such as DVD-GAN (Clark et al., 2019), achieving an FVD of 112 when benchmarked with real samples, and an FVD* (Razavi et al., 2019) of 94 when benchmarked with reconstructions.
2. VideoGen can easily be adapted for action-conditional video generation. We present qualitative results on the BAIR Robot Pushing dataset and the ViZDoom simulator (Kempka et al., 2016).
3. We present ablations showing that employing axial attention blocks in the VQ-VAE and spatio-temporal position encodings in the Transformer are helpful design choices in VideoGen.
4. Our results are achievable with a maximum of 8 Quadro RTX 6000 GPUs (24 GB memory), significantly fewer resources than used in prior methods such as DVD-GAN (Clark et al., 2019) (32 to 512 16GB TPU (Jouppi et al., 2017) cores).

¹ Modeling long sequences is a challenge for transformer-based architectures due to the quadratic memory complexity of the attention matrix (Child et al., 2019)."
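As a quick illustration of the dimensionality argument above, the short sketch below counts the autoregressive positions at the pixel level versus in a latent space downsampled 4x per axis. The clip shape (16 frames of 64 × 64) is illustrative, matching the training setup described later; the exact downsampling factors vary per dataset.

```python
# Counting autoregressive positions: pixel space vs. a latent space downsampled 4x per axis.
# Shapes are illustrative (a 16-frame 64x64 clip), not tied to any specific model.
T, H, W = 16, 64, 64
pixel_positions = T * H * W                 # 65536 positions to generate at the pixel level
t, h, w = T // 4, H // 4, W // 4            # 4x downsampling along time, height, and width
latent_positions = t * h * w                # 1024 discrete latents for the prior to model
print(pixel_positions // latent_positions)  # -> 64: the 64x reduction cited above
```

Since self-attention memory grows quadratically with sequence length, this 64x shorter sequence is what makes a full-attention transformer prior affordable on a handful of GPUs.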
}, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 VQ-VAE", "text": "The Vector Quantized Variational Autoencoder (VQ-VAE) (Van Den Oord et al., 2017) is a model that learns to compress high dimensional data points into a discretized latent space and reconstruct them. The encoder E(x) → h first encodes x into a series of latent vectors h which is then discretized by performing a nearest neighbors lookup in a codebook of embeddings C = {ei}Ki=1 of size K. The decoder D(e)→ x̂ then learns to reconstruct x from the quantized encodings. The VQ-VAE is trained using the following objective:\nL = ‖x−D(e)‖22︸ ︷︷ ︸ Lrecon + ‖sg[E(x)]− e‖22︸ ︷︷ ︸ Lcodebook +β ‖sg[e]− E(x)‖22︸ ︷︷ ︸ Lcommit\nwhere sg refers to a stop-gradient. The objective consists of a reconstruction loss Lrecon, a codebook loss Lcodebook, and a commitment loss Lcommit. The reconstruction loss encourages the VQ-VAE to learn good representations to accurately reconstruct data samples. The codebook loss brings codebook embeddings closer to their corresponding encoder outputs, and the commitment loss is weighted by a hyperparameter β and prevents the encoder outputs from fluctuating between different code vectors. An alternative replacement for the codebook loss described in Van Den Oord et al. (2017) is to use an EMA update which empirically shows faster training and convergence speed. In this paper, we use the EMA update when training the VQ-VAE." }, { "heading": "2.2 GPT", "text": "GPT and Image-GPT (Chen et al., 2020) are a class of autoregressive transformers that have shown tremendous success in modelling discrete data such as natural language and high dimensional images. These models factorize the data distribution p(x) according to p(x) = ∏d i=1 p(xi|x<i) through masked self-attention mechanisms and are optimized through maximum likelihood. As in Vaswani et al. (2017), the architectures follow the standard design of employing multi-head self-attention blocks followed by pointwise MLP feedforward blocks." }, { "heading": "3 VIDEOGEN", "text": "Our primary contribution is VideoGen, a new method to model complex video data in a computationally efficient manner. An overview of our method is shown in Fig 2.\nLearning Latent Codes In order to learn a set of discrete latent codes, we first train a VQ-VAE on the video data. The encoder architecture consists of a series of 3D convolutions that downsample over space-time, followed by attention residual blocks. Each attention residual block is designed as shown in Fig 3, where we use LayerNorm (Ba et al., 2016), and axial attention layers follow Ho et al. (2019b).\nThe architecture for the decoder is the reverse of the encoder, with attention residual blocks followed by a series of 3D transposed convolution that upsample over space-time. The position encodings are learned spatio-temporal embeddings that are shared between all axial attention layers in the encoder and decoder.\nLearning a Prior The second stage of our method is to learn a prior over the latents. The prior is learned by training a transformer model over the VQ-VAE latents. We follow the iGPT architecture with added dropout after the feedforward and attention block layers for regularization.\nAlthough the VQ-VAE is trained unconditionally, we can generate conditional samples by training a conditional prior. We use two types of conditioning:\n• Concatenation: We concatenate a conditional vector before every feedforward block in the transformer. 
This conditioning method is primarily used for frame conditioning, where the conditioned frame is encoded into a conditioning vector by a ResNet (He et al., 2016) backbone and then concatenated.

• Conditional Norms: Similar to conditioning methods used in GANs, we parameterize the gain and bias in the transformer Layer Normalization (Ba et al., 2016) layers as affine functions of the conditional vector. This conditioning method is used for action conditioning." }, { "heading": "4 EXPERIMENTS", "text": "In the following section, we evaluate our method and design experiments to answer the following questions:

• Can we generate high-fidelity samples from complex video datasets with limited compute?
• What architecture design choices in the VQ-VAE and transformer help the most?" }, { "heading": "4.1 TRAINING DETAILS", "text": "All image data is scaled to [−0.5, 0.5] before training. For VQ-VAE training, we use random restarts for embeddings and codebook initialization by copying encoder latents, as described in Dhariwal et al. (2020). In addition, we found VQ-VAE training to be more stable (less codebook collapse) when using normalized MSE for the reconstruction loss, where the MSE loss is divided by the variance of the dataset. For all datasets, we train on 64 × 64 videos of sequence length 16. More training details can be found in Appendix A." }, { "heading": "4.2 MOVING MNIST", "text": "For Moving MNIST, the VQ-VAE downsamples by a factor of 4 over space-time (64x total reduction), and contains two residual layers with no attention. We use a codebook of 512 codes, each a 64-dim embedding. To learn the single-frame conditional prior, we train a conditional transformer with 384 hidden features, 4 heads, 8 layers, and a ResNet-18 single-frame encoder. Fig 4 shows several different generated trajectories conditioned on a single frame.

Table 2: Comparing FVD and FVD* values for BAIR with different transformer architectures. C = single-frame conditional, U = unconditional. The Axial Attention model follows the transformer architectures described in Ho et al. (2019b). GPT and GPT Small follow the same architecture, but GPT Small contains half the number of layers of GPT. FVD* is computed similarly to FVD, but using reconstructed dataset examples instead of the original dataset examples.

Model          bits/dim  FVD    FVD*
Axial (C)      3.94      170.1  133.3
GPT Small (U)  4.53      230.4  187.0
GPT (U)        4.08      191.7  164.2
GPT (C)        2.95      112.5  94.2" }, { "heading": "4.3 BAIR ROBOT PUSHING", "text": "For BAIR, the VQ-VAE downsamples by a factor of 4 over space and 2 over time (32x reduction), with 4 attention residual blocks. We use a codebook of 2048 codes, each a 256-dim embedding. The transformer prior has a hidden size of 384 and 22 layers. For single-frame conditioning, we jointly train a ResNet-34 encoder that encodes the frame into a 512-dim vector.

Qualitatively, Fig 5 shows VQ-VAE reconstructions on BAIR. Fig 6 shows samples primed with a single frame as well as frame- and action-conditioned samples. We can see that our method is able to generate good, realistic looking samples, and generalize to more out-of-distribution examples, as shown by the samples for different priming frames with the same actions.

Quantitatively, Table 1² shows FVD results on BAIR, comparing our method with prior work. Although our method does not achieve state of the art, it is able to produce very realistic samples.
To have a more thorough understanding of the model performance, we compute two extra metrics: (1) the FVD of reconstructed VQ-VAE data examples, and (2) an adjusted FVD* metric similar to Razavi et al. (2019), where FVD is calculated between samples and a VQ-VAE reconstructed validation set (as opposed to the actual validation set). Table 2 shows that the adjusted FVD* of our single-frame conditional model is 94.2, coming very close to the Video Transformer FVD performance. This suggests that the FVD sample performance of our method is primarily bounded by the VQ-VAE reconstruction quality, as opposed to the transformer prior." }, { "heading": "4.4 VIZDOOM", "text": "For ViZDoom, we use the same VQ-VAE and transformer architectures as for the BAIR dataset, with the exception that the transformer is trained without single-frame conditioning. We collect the training data by training a policy in each ViZDoom environment and collecting rollouts of the final trained policies. The dataset consists of 1000 episodes of length 100, split into an 8:1:1 train / validation / test ratio. We experiment on the Health Gathering Supreme and Battle2 ViZDoom environments, training both unconditional and action-conditioned priors. Fig 7 and Fig 8 show samples from priors trained on each dataset, and we can see that the VQ-VAE and transformer are able to capture complex 3D camera movements and environment interactions. In addition, action-conditioned samples are visually consistent with the input action sequence and show a diverse range of backgrounds and scenarios under different random generations for the same set of actions.

² SV2P (Babaeizadeh et al., 2017), SAVP (Lee et al., 2018), DVD-GAN-FP (Clark et al., 2019), Video Transformer (Weissenborn et al., 2019), Latent Video Transformer (LVT) (Rakhimov et al., 2020), and TrIVD-GAN (Luc et al., 2020) are our baselines, and we use FVD to compare the different models, given that adversarial models do not have a likelihood metric, while the likelihood loss in VideoGen is on the discrete latents and not directly comparable to pixel-level likelihood-based models such as Video Transformer." }, { "heading": "4.5 ABLATIONS", "text": "In this section, we perform ablations on various architectural design choices for VideoGen.

Does attention in the VQ-VAE help? We remove the axial attention layers from the VQ-VAE and compare with the original architecture, as shown in Table 3. Empirically, incorporating axial attention into the VQ-VAE architecture improves reconstruction (NMSE) performance, and has much better reconstruction FVD compared to using no attention. Fig 5 shows a qualitative result of the VQ-VAE architecture with the axial attention module.

Are other forms of attention better for the transformer? In addition to training a transformer with standard self-attention, we test our method using an Axial Transformer (Ho et al., 2019b) as a replacement. Table 2 shows an FVD comparison between a single-frame conditional Axial Transformer and the standard GPT model using full self-attention. Using full attention achieves better FVD performance than the Axial Transformer. We hypothesize that this may be due to the full attention transformer being better able to capture single-frame conditional information than the Axial Transformer.

Can a smaller transformer be used? Computational efficiency is a primary advantage of our method, where we can first use the VQ-VAE to downsample over space-time before learning an autoregressive prior.
Lower resolution latents allow us to train larger and more expressive priors to learn complex data distributions. In order to better understand why these advantages are important, we run an ablation on transformer size to demonstrate that training a larger transformer produces significantly better results. Table 2 shows the results of training a smaller unconditional transformer on BAIR, where it achieves both a worse bits per dim and a worse FVD (230) compared to the larger transformer (192).

Table 3: Ablation on attention in the VQ-VAE. FVD is computed with reconstructed examples.

VQ-VAE Architecture  NMSE (↓)  FVD (↓)
No Attention         0.014     156
With Attention       0.010     125

Table 4: Ablation on positional encodings.

Position Encoding   bits/dim (↓)  FVD (↓)
Sin-Cosine          3.94          232
None                5.96          1697
Temporal Only       5.84          1360
Spatial Only        4.82          300
Spatial + Temporal  4.53          230

Do learned spatiotemporal positional encodings help? Finally, we study the effects of using learned spatiotemporal positional encodings when training the GPT model. Table 4 compares different positional encodings. We see that adding in both the space and time broadcasting aspects is crucial to better learn the data and achieve a lower FVD. Using sine-cosine encodings performs roughly the same as spatiotemporal encodings, with a slightly worse FVD." }, { "heading": "5 RELATED WORK", "text": "Video Prediction The problem of video prediction (Srivastava et al., 2015) is closely related to video generation, in that the latter is one way to solve the former. Plenty of methods have been proposed for video prediction on the BAIR Robot dataset (Finn et al., 2016; Ebert et al., 2017; Babaeizadeh et al., 2017; Denton et al., 2017; Denton & Fergus, 2018; Lee et al., 2018), where the future frames are predicted given the past frame(s) and (or) actions of a robot arm moving across multiple objects, thereby benchmarking the ability of video models to capture object-robot interaction, object permanence, robot arm motion, etc. Translating videos to videos is another paradigm for thinking about video prediction, with a prominent example being vid2vid (Wang et al., 2018). The vid2vid framework uses automatically generated supervision from more abstract information such as semantic segmentation masks (Luc et al., 2017), keypoints, poses, edge detectors, etc. to further condition the GAN-based video translation setup.

Video Generation Most modern generative modeling architectures allow for easy adaptation of unconditional video generation to conditional versions through conditional batch-norm (Brock et al., 2018), concatenation (Salimans et al., 2017; van den Oord et al., 2016c), etc. Video Pixel Networks (Kalchbrenner et al., 2017) propose a convolutional LSTM based encoding of the past frames to be able to generate the next frame pixel by pixel autoregressively with a PixelCNN (van den Oord et al., 2016c) decoder. The architecture serves both as a video generative and a predictive model, optimized through a log-likelihood loss at the pixel level. Subscale Video Transformers (Weissenborn et al., 2019) extend the idea of Subscale Pixel Networks (Menick & Kalchbrenner, 2018) to video generation at the pixel level using a subscale ordering across space and time. However, the sampling time and compute requirements are large for these models. In the past, video-specific architectures have been proposed for GAN-based video generation, with primitive results, by Vondrick et al. (2016). Recently, DVD-GAN, proposed by Clark et al.
(2019), adopts a BigGAN-like architecture for videos with disentangled (axial) non-local (Wang et al., 2017) blocks across space and time. They present a wide range of results: unconditional, past-frame(s)-conditional, and class-conditional video generation. However, the DVD-GAN architecture is hard to replicate without industry-scale compute resources. Furthermore, training DVD-GAN without a clean open-source implementation is difficult due to the instabilities often encountered in GAN training. Other examples of prior work on video generation with GANs include Saito et al. (2017), Tulyakov et al. (2018), Acharya et al. (2018), and Yushchenko et al. (2019). In addition, Saito & Saito (2018) and Kahembwe & Ramamoorthy (2020) propose more scalable and efficient GAN models for training on less compute. Our approach builds on top of VQ-VAE (Van Den Oord et al., 2017) by adapting it for video generation. A clean architecture with VQ-VAE for video generation has not been presented yet, and we hope VideoGen is useful from that standpoint. While VQ-VAE-2 (Razavi et al., 2019) proposes using multi-scale hierarchical latents, that pipeline is inherently more complicated. For simplicity, ease of reproduction, and to present the first VQ-VAE based video generation model with minimal complexity, we stick with the single-scale setup." }, { "heading": "6 CONCLUSION", "text": "We have presented VideoGen, a new video generation architecture adapting the VQ-VAE and Transformer models typically used for image generation to the domain of videos, with minimal modifications. We have shown that VideoGen is able to synthesize videos that are competitive with state-of-the-art GAN-based video generation models while requiring orders of magnitude fewer resources. We have also presented ablations on key design choices used in VideoGen, which we hope are useful for the future design of architectures in video generation. We hope that VideoGen serves as a simple baseline that is easy to reproduce and build upon for future research on this challenging topic." }, { "heading": "A ARCHITECTURE DETAILS AND HYPERPARAMETERS", "text": "A.1 VQ-VAE ENCODER AND DECODER

A.2 PRIOR NETWORKS" } ]
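To connect the VQ-VAE objective of Section 2.1 to code, below is a minimal PyTorch sketch of the vector-quantization step with the EMA codebook update and straight-through gradient described there. It is an illustrative sketch, not the paper's implementation: the module name, the default codebook size/dimension (2048/256, matching the BAIR setup above), the decay, and the use of torch.cdist for the nearest-neighbor lookup are all assumptions.

```python
# Minimal sketch of vector quantization with EMA codebook updates (Section 2.1).
# Illustrative assumptions: module name, defaults for K, D, decay, and beta.
import torch
import torch.nn.functional as F

class VectorQuantizerEMA(torch.nn.Module):
    def __init__(self, K=2048, D=256, decay=0.99, beta=0.25, eps=1e-5):
        super().__init__()
        self.decay, self.beta, self.eps = decay, beta, eps
        embed = torch.randn(K, D)
        self.register_buffer("embed", embed)                 # codebook C = {e_1, ..., e_K}
        self.register_buffer("cluster_size", torch.zeros(K))
        self.register_buffer("embed_avg", embed.clone())

    def forward(self, z):
        # z: (..., D) encoder outputs; nearest-neighbor lookup in the codebook.
        flat = z.reshape(-1, z.shape[-1])
        idx = torch.cdist(flat, self.embed).argmin(dim=-1)
        quantized = self.embed[idx].view_as(z)
        if self.training:
            with torch.no_grad():                            # EMA update replacing L_codebook
                onehot = F.one_hot(idx, self.embed.shape[0]).type(flat.dtype)
                self.cluster_size.mul_(self.decay).add_(onehot.sum(0), alpha=1 - self.decay)
                self.embed_avg.mul_(self.decay).add_(onehot.t() @ flat, alpha=1 - self.decay)
                n = self.cluster_size.sum()
                size = (self.cluster_size + self.eps) / (n + self.embed.shape[0] * self.eps) * n
                self.embed.copy_(self.embed_avg / size.unsqueeze(-1))
        commit = self.beta * F.mse_loss(z, quantized.detach())   # L_commit
        quantized = z + (quantized - z).detach()                 # straight-through estimator
        return quantized, commit, idx.view(z.shape[:-1])
```

The straight-through trick in the last line copies gradients from the decoder input back to the encoder output, which is what lets the encoder, quantizer, and decoder be trained jointly despite the discrete lookup.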
2020
null
SP:2376a19af5be2a66ce8cf04713ab41c972f48382
[ "This paper studies the topic of evaluating the performance of optimizers for neural networks. The paper makes the argument that existing evaluation procedures either over emphasize the finding of optimal hyperparameters or under-evaluate the performance of an algorithm by randomly sampling hyperparameters. This paper's primary objective is to propose an evaluation procedure that better aligns with a practitioner's goal than existing evaluation procedures. The proposed procedure evaluates optimization algorithms by using the hyperband hyperparameter optimization algorithm to tune hyperparameters and then score the algorithm using a weighted combination of validation performance scores over regularly sampled training intervals. The aggregate performances of algorithms are then ranked using performance profiles. " ]
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which makes it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameters) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
[ { "affiliations": [], "name": "COL FOR" }, { "affiliations": [], "name": "BENCHMARKING OPTIMIZERS" } ]
[ { "authors": [ "Joshua Achiam" ], "title": "Spinning Up in Deep Reinforcement Learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Hilal Asi", "John C Duchi" ], "title": "The importance of better models in stochastic optimization", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "André MS Barreto", "Heder S Bernardino", "Helio JC Barbosa" ], "title": "Probabilistic performance profiles for the experimental evaluation of stochastic algorithms", "venue": "In Proceedings of the 12th annual conference on Genetic and evolutionary computation,", "year": 2010 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Dami Choi", "Christopher J Shallue", "Zachary Nado", "Jaehoon Lee", "Chris J Maddison", "George E Dahl" ], "title": "On empirical comparisons of optimizers for deep learning", "venue": null, "year": 1910 }, { "authors": [ "Cody Coleman", "Deepak Narayanan", "Daniel Kang", "Tian Zhao", "Jian Zhang", "Luigi Nardi", "Peter Bailis", "Kunle Olukotun", "Chris Ré", "Matei Zaharia" ], "title": "Dawnbench: An end-to-end deep learning benchmark and competition", "venue": null, "year": 2017 }, { "authors": [ "Elizabeth D Dolan", "Jorge J Moré" ], "title": "Benchmarking optimization software with performance profiles", "venue": "Mathematical programming,", "year": 2002 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale", "venue": "arXiv preprint arXiv:1807.01774,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Frank Hutter", "Holger H Hoos", "Kevin Leyton-Brown" ], "title": "Sequential model-based optimization for general algorithm configuration", "venue": "In International conference on learning and intelligent optimization,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { 
"authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": null, "year": 1908 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": null, "year": 1902 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Ruoxi Sun", "C Daniel Freeman", "Ben Poole", "Jascha SohlDickstein" ], "title": "Using a thousand optimization tasks to learn hyperparameter search strategies", "venue": null, "year": 2002 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Ning Qian" ], "title": "On the momentum term in gradient descent learning algorithms", "venue": "Neural networks,", "year": 1999 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "arXiv preprint arXiv:1904.09237,", "year": 2019 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Frank Schneider", "Lukas Balles", "Philipp Hennig" ], "title": "Deepobs: A deep learning optimizer benchmark suite", "venue": "arXiv preprint arXiv:1903.05499,", "year": 2019 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Vatsal Shah", "Anastasios Kyrillidis", "Sujay Sanghavi" ], "title": "Minimum weight norm models do not always generalize well for over-parameterized problems", "venue": "arXiv preprint arXiv:1811.07055,", "year": 2018 }, { "authors": [ "Christopher J Shallue", "Jaehoon Lee", "Joseph Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": "arXiv preprint arXiv:1811.03600,", "year": 2018 }, { "authors": [ "Prabhu Teja Sivaprasad", "Florian Mai", "Thijs Vogels", "Martin Jaggi", "Francois Fleuret" ], "title": "Optimizer benchmarking needs to account for hyperparameter tuning", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. 
COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training bert in 76", "venue": null, "year": 1904 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Michael Zhang", "James Lucas", "Jimmy Ba", "Geoffrey E Hinton" ], "title": "Lookahead optimizer: k steps forward, 1 step back", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyu Zhu", "Mohamed Akrout", "Bojian Zheng", "Andrew Pelegris", "Amar Phanishayee", "Bianca Schroeder", "Gennady Pekhimenko" ], "title": "Tbd: Benchmarking and analyzing deep neural network training", "venue": "arXiv preprint arXiv:1803.06905,", "year": 2018 }, { "authors": [ "• GAN" ], "title": "We train SNGAN with the same network architecture and objective function with spectral normalization for CIFAR10 in Miyato et al. (2018), and the batch size of the generator and the discriminator", "venue": null, "year": 2018 }, { "authors": [ "OpenAI Gym (Brockman" ], "title": "2016)) as our training environment, and PPO (Schulman et al. (2017)), implemented by OpenAI SpinningUp (Achiam (2018)), as the algorithm that required tuning. We use the same architectures for both action value network Q and the policy network π. We define 40,000 of environment interactions as one epoch, with a batch size of 4,000", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Due to the enormous data size and non-convexity, stochastic optimization algorithms have become widely used in training deep neural networks. In addition to Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), many variations such as Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) have been proposed. Unlike classical, hyperparameter free optimizers such as gradient descent and Newton’s method1, stochastic optimizers often hold multiple hyperparameters including learning rate and momentum coefficients. Those hyperparameters are critical not only to the speed, but also to the final performance, and are often hard to tune.\nIt is thus non-trivial to benchmark and compare optimizers in deep neural network training. And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts. In this paper, we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better. We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks. In the following, we will first briefly review the two existing benchmarking protocols, discuss their pros and cons, and then introduce our contributions.\nBenchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters. Wilson et al. (2017); Shah et al. (2018) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations. They found that SGD can outperform adaptive methods on several datasets under careful tuning. Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers (Schneider et al., 2019; Coleman et al., 2017; Zhu et al., 2018). Also, the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter. It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers, good hyperparameters, and thousands of accelerators.\n1The step sizes of gradient descent and Newton’s method can be automatically adjusted by a line search procedure (Nocedal & Wright, 2006).\nDespite each optimizer’s peak performance being evaluated, benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance. First, the assumption of knowing the best hyperparameter is unrealistic. In practice, it requires a lot of tuning efforts to find the best hyperparameter, and the tuning efficiency varies greatly for different optimizers. It is also tricky to define the “best hyperparameter”, which depends on the hyperparameter searching range and grids. Further, since many of these optimizers are sensitive to hyperparameters, some improvements reported for new optimizers may come from insufficient tuning for previous work.\nBenchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers (Schneider et al., 2019; Asi & Duchi, 2019), but having a formal evaluation protocol on this topic is nontrivial. Only recently, two papers Choi et al. (2019) and Sivaprasad et al. 
(2020), took hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are based on random hyperparameter search. We argue that such comparisons could over-emphasize the role of hyperparameter tuning, which could lead to a pessimistic and impractical performance benchmark for optimizers. This is due to the following reasons. First, in the random search comparison, each bad hyperparameter configuration has to run fully (e.g., 200 epochs). In practice, a user can always stop the program early for bad hyperparameters when having a limited time budget. For instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges in a few iterations and directly stop the current job. Therefore, the random search hypothesis over-emphasizes the role of hyperparameter tuning and does not align with a real user's practical efficiency. Second, the performance of the best hyperparameters is crucial for many applications. For example, in many real-world applications, we need to re-train the model every day or every week with newly added data, so the best hyperparameters selected in the beginning might benefit all these re-training tasks rather than requiring a search from scratch. In addition, due to the expensive random search, random-search-based evaluation often focuses on the low-accuracy region2, while in practice we care about the performance of reaching reasonably good accuracy.
Our contributions Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasized (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols to compare optimizers in a way that better reflects real use cases. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user to train the best model from scratch, we develop an efficient evaluation protocol to compare the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of using the random search algorithm, we adopt the Hyperband (Li et al., 2017) algorithm for hyperparameter tuning, since it can stop early for bad configurations and better reflects the real running time required by a user. Further, we also propose to evaluate the data-addition training efficiency for a user who re-trains the model with some newly added training data, with the knowledge of the best hyperparameters tuned on the previous training set. We also conduct human studies to examine how machine learning researchers tune hyperparameters in optimizers and how that aligns with our proposed protocols.
Based on the proposed evaluation protocols, we study how much progress recently proposed algorithms have made compared with SGD or Adam. Note that most recently proposed optimizers have been shown to outperform SGD and Adam under the best hyperparameters for some particular tasks, but it is not clear whether the improvements remain significant when hyperparameter tuning is taken into account, and across various tasks.
To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al., 2019), on a variety of training tasks including image classification, generative adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning, and graph neural network training. Our main conclusions are: 1) On CIFAR-10 and CIFAR-100, all the optimizers including SGD are competitive. 2) Adaptive methods are generally better on more complex tasks (NLP, GCN, RL). 3) There is no clear winner among adaptive methods. Although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods.
2For instance, Sivaprasad et al. (2020) only reaches < 50% accuracy in their CIFAR-100 comparisons." }, { "heading": "2 RELATED WORK", "text": "Optimizers. Properties of deep learning make it natural to apply stochastic first-order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues, such as zig-zag training trajectories and a single uniform learning rate, have been exposed, and researchers have devoted extensive attention to modifying SGD for improvement. Along this line of work, tremendous progress has been made, including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentum to stabilize and speed up training procedures. In particular, Adam is often regarded as the default algorithm due to its broad applicability. Variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have then been proposed to resolve different drawbacks of Adam. Meanwhile, the demand for large-batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights.
Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) are basic hyperparameter tuning methods in the literature. However, the inefficiency of these methods stimulates the development of more advanced search strategies. Bayesian optimization methods, including Bergstra et al. (2011) and Hutter et al. (2011), accelerate random search by fitting a black-box function from hyperparameters to the expected objective in order to adaptively guide the search direction. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing the evaluation cost of each configuration and early-terminates relatively poor trials. Falkner et al. (2018) propose BOHB to combine the benefits of both Bayesian optimization and Hyperband. All these methods still require substantial computational resources. A recent work (Metz et al., 2020) has tried to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computing cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers." }, { "heading": "3 PROPOSED EVALUATION PROTOCOLS", "text": "In this section, we introduce the proposed evaluation framework for optimizers.
We consider two evaluation protocols, each corresponding to an important training scenario:

• Scenario I (End-to-end training): This is the general training scenario, where a user is given an unfamiliar optimizer and task, and the goal is to achieve the best validation performance after several rounds of trial and error. In this case, the evaluation needs to include the hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE and peak performance. • Scenario II (Data-addition training): This is another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting some fresh data. In this case, a naive solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the data distribution shifts, the result depends on each optimizer's sensitivity to that shift.

We describe the detailed evaluation protocol for each setting in the following subsections." }, { "heading": "3.1 END-TO-END TRAINING EVALUATION PROTOCOL", "text": "Before introducing our evaluation protocol for Scenario I, we first formally define the concept of an optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem minθ L(θ) and can be defined as a tuple o = (U, Ω) ∈ O, where O contains all types of optimizers. U is a specific update rule and Ω = (ω1, . . . , ωN) ∈ RN represents a vector of N hyperparameters. The search space of these hyperparameters is denoted by F. Given an initial parameter value θ0, together with the trajectory of the optimization procedure Ht = {θs, L(θs), ∇L(θs)}s≤t, the optimizer updates θ by
θt+1 = U(Ht, Ω).
We aim to evaluate the end-to-end time for a user to get the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search for finding the best hyperparameter setting. Still, we argue that the random search procedure will over-emphasize the importance of hyperparameters when tuning is considered — it assumes a user never stops the training even if they observe divergence or bad results in the initial training phase, which is unrealistic.
Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss w.r.t. hyperparameter. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget, since most regions of the hyperparameter space of B outperform those of A. For instance, suppose we randomly sample the same hyperparameter setting for A and B. The final config ω∗r(B) found under this strategy can have a lower expected loss than that of ω∗r(A), as shown in Figure 1a. However, there exists a more practical search strategy which can invalidate this statement under the assumption of a limited search budget: a user can early-terminate a configuration trial when it is trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy early-stops many configurations and only allows a limited number of trials to explore to the deeper stage. Therefore, the bad hyperparameters will not affect the overall efficiency of optimizer A too much. In contrast, for optimizer B, the performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similar and long termination times for each trial.
Therefore, it may be easier for a practical search strategy p to find the best configuration ω∗p(A) of optimizer A than ω∗p(B), given the same constrained budget.
This example suggests that random search may over-emphasize the parameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme, to get the best model. Hyperband formulates hyperparameter optimization as a unique bandit problem. It accelerates random search through adaptive resource allocation and early-stopping, as demonstrated in Figure 1b. Compared with its more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires fewer computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A.
Despite the existence of different hyperparameter tuning algorithms, human tuning by experts is still regarded as the most effective. To verify and support that Hyperband is an effective method and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR10, given 10 learning rate configurations of SGD in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10], participants are requested to search for the best one at their discretion. Namely, they can stop or pause a trial at any time and continue to evaluate a new configuration until they feel the best performance has been reached. 10 participants are sampled randomly from Ph.D. students with computer science backgrounds.
We collect their tuning trajectories and average them as human performance, which is considered "optimal" in this human study. In Figure 2, we plot hyperparameter tuning curves for humans, Hyperband, random search, random search with the early stopping (ES) strategy of Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches humans' behavior better, while random search tends to get trapped in suboptimal configurations, although random search with early stopping can mitigate this issue to some extent. This finding shows the advantage of Hyperband over random search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appendix B.
With Hyperband incorporated in end-to-end training, we assume that each configuration is run sequentially and record the best performance obtained at time step t as Pt. Specifically, Pt represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {Pt}Tt=1 forms a trajectory for plotting learning curves on the test set, as in Figure 3. Although it is intuitive to observe the performance of different optimizers according to such figures, summarizing a learning curve into a quantifiable, scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
λ-tunability = ∑_{t=1}^{T} λt · Pt, where ∑_t λt = 1 and λt > 0 for all t. (1)
One intuitive way is to set λt = 1{t=T} to determine which optimizer can reach the best model performance after the whole training procedure. However, merely considering the peak performance is not good guidance for the choice of optimizers. In practice, we tend to take into account the complete trajectory and place more emphasis on the early stage.
Thus, we employ the Cumulative Performance-Early weighting scheme, where λt ∝ (T − t), to compute λ-tunability instead of the extreme assignment λt = 1{t=T}. The value obtained is termed CPE for simplicity.
We present our evaluation protocol in Algorithm 1. As we can see, end-to-end training with hyperparameter optimization is conducted for various optimizers on the given task. The trajectory {Pt}Tt=1 is recorded to compute the peak performance as well as the CPE value. Note that the procedure is repeated M times to obtain a reliable result. We use M = 3 in all experiments. More details on the time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1 End-to-End Efficiency Evaluation Protocol Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do 2: for i = 1 to M do 3: Conduct hyperparameter search in F with the optimizer o using Hyperband on a 4: Record the performance trajectory {Pt}Tt=1 explored by Hyperband 5: Calculate the peak performance and CPE by Eq. 1 6: end for 7: Average peak and CPE values over M repetitions for the optimizer o 8: end for 9: Evaluate optimizers according to their peak and CPE values" }, { "heading": "3.2 DATA-ADDITION TRAINING EVALUATION PROTOCOL", "text": "In Scenario II, we have a service (e.g., a search or recommendation engine) and we want to re-train the model every day or every week with some newly added training data. One may argue that an online learning algorithm should be used in this case, but in practice online learning is unstable and industries still prefer this periodic retraining scheme, which is more stable.
In this scenario, once the best hyperparameters have been chosen in the beginning, we can reuse them for every training, so no hyperparameter tuning is required and the performance (including both efficiency and test accuracy) under the best hyperparameters becomes important. However, an implicit assumption made in this process is that "the best hyperparameters will still work when the training task slightly changes". This can be viewed as transferability of hyperparameters for a particular optimizer, and our second evaluation protocol aims to evaluate this practical scenario.
We simulate data-addition training with all classification tasks, and the evaluation protocol works as follows: 1) Extract a subset Dδ containing partial training data from the original full dataset D with a small ratio δ; 2) Conduct a hyperparameter search on Dδ to find the best setting under this scenario; 3) Use these hyperparameters to train the model on the complete dataset; 4) Observe the potential change in the ranking of various optimizers before and after data addition. For step 4), when comparing different optimizers, we will plot the training curve in the full-data training stage in Section 4, and also summarize the training curve using the CPE value (a minimal sketch of the CPE computation is given below).
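As a concrete illustration of Eq. (1) with the CPE weighting λt ∝ (T − t), the following is a minimal Python sketch of the CPE computation; it is our own illustrative code (function and variable names are not taken from the paper's implementation).

```python
import numpy as np

def cpe(trajectory):
    """Cumulative Performance-Early (CPE) value of a trajectory {P_t}, t = 1..T.

    Implements Eq. (1) with lambda_t proportional to (T - t), normalized so
    that the weights sum to one; earlier time steps receive larger weights.
    Assumes T >= 2 (note lambda_T = 0 under this weighting; one could use
    T - t + 1 for strictly positive weights).
    """
    P = np.asarray(trajectory, dtype=float)
    T = len(P)
    lam = (T - np.arange(1, T + 1)).astype(float)
    lam /= lam.sum()  # enforce sum_t lambda_t = 1
    return float((lam * P).sum())

# Example: best-so-far test accuracies recorded during a Hyperband run.
print(cpe([0.10, 0.42, 0.55, 0.62, 0.65]))
```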
The detailed evaluation protocol is described in Algorithm 2.
Algorithm 2 Data-Addition Training Evaluation Protocol Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A with a full dataset D, a split ratio δ
1: for o ∈ O do 2: for i = 1 to M do 3: Conduct hyperparameter search with the optimizer o using Hyperband on a with a partial dataset Dδ, and record the best hyperparameter setting Ωpartial found under this scenario 4: Apply the optimizer with Ωpartial on Dδ and D, then save the training curves 5: end for 6: Average training curves of o over M repetitions to compute CPE 7: end for 8: Compare performance of different optimizers under data-addition training" }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Optimizers to be evaluated. As shown in Table 1, we consider 7 optimizers, including non-adaptive methods using only the first-order momentum, and adaptive methods considering both first-order and second-order momentum. We also provide lists of tunable hyperparameters for the different optimizers in Table 1. Moreover, we consider the following two combinations of tunable hyperparameters to better investigate the performance of different optimizers: a) tuning only the initial learning rate with the others set to default values, and b) tuning a full list of hyperparameters. A detailed description of the optimizers as well as the default values and search ranges of these hyperparameters can be found in Appendix E. Note that we adopt a unified search space for a fair comparison following Metz et al. (2020), to eliminate biases of specific ranges for different optimizers. The tuning budget of Hyperband is determined by three items: the maximum resource (in this paper we use epochs) per configuration R, the reduction factor η, and the number of configurations nc. According to Li et al. (2017), a single Hyperband execution contains ns = ⌊logη(R)⌋ + 1 rounds of SuccessiveHalving, each referred to as a bracket. These brackets take strategies from least to most aggressive early-stopping, and each one is designed to use approximately B = R · ns resources, leading to a finite total budget. The number of randomly sampled configurations in one Hyperband run is also fixed and grows with R. Then, given R and η, nc determines how many times Hyperband is repeated. We set η = 3, as this default value performs consistently well, and R to the number of epochs each task usually takes for a complete run. nc is set to what is required for a single Hyperband execution for all tasks, except for BERT fine-tuning, where a larger number of configurations is necessary due to a relatively small R. In Appendix E, we give the assigned values of R, η, and nc for each task.
Tasks for benchmarking. For a comprehensive and reliable assessment of optimizers, we consider a wide range of tasks in different domains. When evaluating end-to-end training efficiency, we implement our protocol on tasks covering several popular and promising applications in Table 2. Apart from common tasks in computer vision and natural language processing, we introduce two extra tasks in graph neural network training and reinforcement learning. For simplicity, we will use the dataset to represent each task in our subsequent tables of experimental results. (For the reinforcement learning task, we just use the environment name.) The detailed settings and parameters for each task can be found in Appendix D." }, { "heading": "4.1 END-TO-END EFFICIENCY (SCENARIO I)", "text": "To evaluate end-to-end training efficiency, we adopt the protocol in Algorithm 1.
Specifically, we record the average training trajectory with Hyperband {Pt}Tt=1 for each optimizer on the benchmarking tasks, where Pt is the evaluation metric for each task (e.g., accuracy, return). We visualize these trajectories in Figure 3 for CIFAR10 and CIFAR100, and calculate CPE and peak performance in Tables 3 and 9, respectively. More results for other tasks and the peak performance can be found in Appendix F. Besides, in Eq. 2 we compute the performance ratio ro,a for each optimizer and each task, and then utilize the distribution function of a performance metric, called the performance profile ρo(τ), to summarize the performance of different optimizers over all the tasks. For tasks where a lower CPE is better, we just use ro,a = CPEo,a / min{CPEo′,a : o′ ∈ O} instead to guarantee ro,a ≥ 1. The function ρo(τ) for all optimizers is presented in Figure 4. Based on the definition of the performance profile (Dolan & Moré, 2002), the optimizers with large probability ρo(τ) are to be preferred. In particular, the value of ρo(1) is the probability that one optimizer will win over the rest and can be a reference for selecting the proper optimizer for an unknown task. We also provide a probabilistic performance profile to summarize different optimizers in Figure 7 in Appendix F.
ro,a = max{CPEo′,a : o′ ∈ O} / CPEo,a, ρo(τ) = (1/|A|) · size{ a ∈ A : ro,a ≤ τ }. (2)
Our findings are summarized below:
• It should be emphasized from Tables 3 and 9 that under our protocol based on Hyperband, SGD performs similarly to Adam in terms of efficiency as well as peak performance, and can even surpass it in some cases, like training on CIFAR100. Under Hyperband, the best configuration of SGD is less tedious to find than with random search, because Hyperband can early-stop bad runs, so that they have less effect on the search efficiency and final performance. • For image classification tasks, all the methods are competitive, while adaptive methods tend to perform better on more complicated tasks (NLP, GCN, RL). • There is no significant distinction among adaptive variants. The performance of adaptive optimizers tends to fall within 1% of the best result. • According to the performance profile in Figure 4, RAdam achieves probability 1 with the smallest τ, and Adam is the second method achieving that. This indicates that RAdam and Adam achieve relatively stable and consistent performance among these tasks." }, { "heading": "4.2 DATA-ADDITION TRAINING (SCENARIO II)", "text": "We then conduct evaluation on data-addition training based on the protocol in Algorithm 2. We choose four classification problems on CIFAR10, CIFAR100, MRPC and PPI, since the setting does not apply to RL. We search for the best hyperparameter configuration, denoted by Ωpartial, on the sub training set with the ratio δ = 0.3. Here we tune all hyperparameters. Then we directly apply Ωpartial on the full dataset for a complete training process. The training curves are shown in Figure 5, and we also summarize the training curves with CPE by Eq. 1 in Table 4. We have the following findings:
• There is no clear winner in data-addition training. RAdam outperforms the other optimizers on 2/4 tasks and is thus slightly preferred, but the other optimizers except Lookahead are also competitive (within a 1% range) on at least 2/4 tasks. • To investigate whether the optimizers' ranking will change when adding 70% data, we compare the training curve on the original 30% data versus the training curve on the full 100% data in Figure 5.
We observe that the ranking of optimizers slightly changes after data addition." }, { "heading": "5 CONCLUSIONS AND DISCUSSIONS", "text": "In conclusion, we found no strong evidence that newly proposed optimizers consistently outperform Adam, while each of them may be good for some particular tasks. When choosing an optimizer for a specific task, people can refer to the results in Tables 3 and 9. If the task is contained in Table 2, they can directly choose the optimizer with the best CPE or best peak performance, depending on their goal for the task (easy tuning or high final performance). On the other hand, even if the desired task is not covered, people can still gain some insights from the results of the most similar task in Table 2, or refer to the performance profile in Figure 4 to pick adaptive methods like Adam. Beyond choosing an optimizer, our protocol can also contribute to the design of new optimizers: using it to evaluate a new optimizer can show whether the new method offers a clear improvement over existing optimizers, and it can serve as a routine for judging an optimizer's performance thoroughly.
In addition to the two proposed evaluation criteria, there could be other factors that affect the practical performance of an optimizer. First, memory consumption is becoming important for training large DNN models. For instance, although Lookahead performs well on certain tasks, it requires more memory than other optimizers, restricting its practical use in some memory-constrained applications. Another important criterion is the scalability of optimizers. When training with a massively distributed system, optimizing performance in the large-batch regime (e.g., a 32K batch size for ImageNet) is important. The LARS and LAMB algorithms included in our study are developed for large-batch training. We believe this is another important metric for comparing optimizers that is worth studying further." }, { "heading": "A HYPERBAND", "text": "We present the whole algorithm for Hyperband in Algorithm 3; refer to Li et al. (2017) for more details.
Algorithm 3 Hyperband Input: R, η Initialization: smax = ⌊logη R⌋, B = (smax + 1)R
1: for s ∈ {smax, smax − 1, . . . , 0} do 2: n = ⌈(B/R) · η^s/(s + 1)⌉, r = R · η^(−s) 3: // begin SuccessiveHalving with (n, r) inner loop 4: T = random_get_configuration(n) 5: for i ∈ {0, . . . , s} do 6: ni = ⌊n · η^(−i)⌋, ri = r · η^i 7: L = {run_then_return_val_loss(t, ri) : t ∈ T} 8: T = top_k(T, L, ⌊ni/η⌋) 9: end for 10: end for return Hyperparameter configuration with the smallest loss seen so far" }, { "heading": "B DETAILS OF HUMAN STUDY", "text": "In this human study, 10 participants are sampled from Ph.D. students with computer science backgrounds (machine learning backgrounds specifically). They are recruited as follows: We first asked the administrators to distribute the program of this human study to Ph.D. students in machine learning labs in our institutions. Given this population, we assumed that they had prior knowledge of some basic machine learning experiments, such as image classification on MNIST and CIFAR10. They were requested to conduct hyperparameter tuning of the learning rate based on their knowledge. They were informed that the target experiment was image classification on CIFAR10 with SGD, and the search range of the learning rate was in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10]. Each configuration was run for 200 epochs at most.
Moreover, we also told them that they could pause any configuration if they wanted to evaluate others, and even stop the whole tuning process, but only when they thought the accuracy could not be further improved and the number of total tuning epochs exceeded 600 at the same time. We collected results from 17 people and determined validity by checking whether the length of the trajectory was greater than 600, removing 7 invalid trajectories. Finally, 10 trajectories remained, and we averaged them as the human performance in Figure 2." }, { "heading": "C OPTIMIZERS", "text": "Notations. Given a vector of parameters θ ∈ Rd, we denote a sub-vector of its i-th layer's parameters by θ(i). {αt}Tt=1 is a sequence of learning rates during the optimization procedure of a horizon T. {φt, ψt}Tt=1 represents a sequence of functions to calculate the first-order and second-order momentum of the gradient gt, which are mt and vt respectively at time step t. Different optimization algorithms are usually specified by the choice of φ(·) and ψ(·). {rt}Tt=1 is an additional sequence of adaptive terms to modify the magnitude of the learning rate in some methods. For algorithms only using the first-order momentum, µ is the momentum coefficient, while β1 and β2 are coefficients to compute the running averages m and v. ε is a small scalar (e.g., 1 × 10−8) used to prevent division by 0. Generic optimization framework. Based on the notations above, we further develop a thorough generic optimization framework including an extra adaptive term in Algorithm 4. The debiasing term used in the original version of Adam is ignored for simplicity. Note that for {αt}Tt=1, different learning rate scheduling strategies can be adopted, and the choice of scheduler is also regarded as a tunable hyperparameter. Without loss of generality, in this paper we only consider a constant value and a linear decay (Shallue et al., 2018) in the following equation, introducing γ as a hyperparameter:
αt = α0 (constant); αt = α0 − (1 − γ)α0 · t/T (linear decay).
With this generic framework, we can summarize several popular optimization methods by explicitly specifying mt, vt and rt in Table 5. It should be clarified that Lookahead is an exception to the generic framework. In fact, it is more like a high-level mechanism, which can be incorporated with any other optimizer. However, as stated in Zhang et al. (2019), this optimizer is robust to the inner optimization algorithm, k, and αs in Algorithm 5, so we still include Lookahead here with Adam as the base for a more convincing and comprehensive evaluation. We consider Lookahead a special adaptive method, and tune the same hyperparameters for it as for the other adaptive optimizers.
Algorithm 4 Generic framework of optimization methods Input: parameter value θ1, learning rate with scheduling {αt}, sequence of functions {φt, ψt, χt}Tt=1 to compute mt, vt, and rt respectively.
1: for t = 1 to T do 2: gt = ∇ft(θt) 3: mt = φt(g1, · · · , gt) 4: vt = ψt(g1, · · · , gt) 5: rt = χt(θt, mt, vt) 6: θt+1 = θt − αt rt mt/√vt 7: end for
Algorithm 5 Lookahead Optimizer Input: Initial parameters θ0, objective function f, synchronization period k, slow weights step size αs, optimizer A
1: for t = 1, 2, . . . do 2: Synchronize parameters θ̂t,0 ← θt−1 3: for i = 1, 2, . . .
, k do 4: Sample minibatch of data d ∈ D 5: θ̂t,i ← θ̂t,i−1 + A(L, θ̂t,i−1, d) 6: end for 7: Perform outer update θt ← θt−1 + αs(θ̂t,k − θt−1) 8: end for
return Parameters θ" }, { "heading": "D TASK DESCRIPTION", "text": "We give a concrete description of the tasks selected for our optimizer evaluation protocol:
• Image classification. For this task, we adopt a ResNet-50 (He et al., 2016) model on CIFAR10 and CIFAR100 with a batch size of 128 and a maximum of 200 epochs per trial. • VAE. We use a vanilla variational autoencoder from Kingma & Welling (2013) with five convolutional and five deconvolutional layers and a latent space of dimension 128 on CelebA. There are no dropout layers. It is trained with a batch size of 144. • GAN. We train SNGAN with the same network architecture and spectrally normalized objective function as in Miyato et al. (2018) for CIFAR10, and the batch sizes of the generator and the discriminator are 128 and 64, respectively. • Natural language processing. In this domain, we fine-tune RoBERTa-base on MRPC, one of the test suites in the GLUE benchmark. For each optimizer, we set the maximal exploration budget to 800 epochs. The batch size is 16 sentences. • Graph learning. Among various graph learning problems, we choose node classification, a semi-supervised classification task. In GCN training, there are multiple ways to deal with the neighborhood explosion of stochastic optimizers. We choose Cluster-GCN (Chiang et al., 2019) as the backbone to handle neighborhood expansion and PPI as the dataset. • Reinforcement learning. We select Walker2d-v3 from OpenAI Gym (Brockman et al. (2016)) as our training environment, and PPO (Schulman et al. (2017)), implemented by OpenAI SpinningUp (Achiam (2018)), as the algorithm that requires tuning. We use the same architectures for both the action value network Q and the policy network π. We define 40,000 environment interactions as one epoch, with a batch size of 4,000. The return we use is the highest average test return of an epoch during training." }, { "heading": "E IMPLEMENTATION DETAILS", "text": "Implementation details of our experiments are provided in this section. Specifically, we give the unified search space for all hyperparameters and their default values in Table 6. Note that we tune the learning rate decay factor for image classification tasks when tuning every hyperparameter. For the task on MRPC, γ is tuned for all experiments. In other cases, we only tune the original hyperparameters without a learning rate scheduler.
In addition, Hyperband parameter values for each task are listed in Table 7. These parameters are assigned based on the properties of the different tasks.
The time cost of our evaluation protocol depends on how much budget is available. Specifically, in our paper, the unit of time budget is one epoch, so the total time will be Bepoch ∗ Tepoch, where Bepoch is the total available budget and Tepoch is the running time for one epoch. There is no additional computational cost, i.e., running our protocol once takes the same time as running one hyperparameter search with Hyperband. In our experiment on CIFAR10, we roughly evaluated 200 hyperparameter configurations in one Hyperband run, while the same time allows only about 50 configurations with random search.
Moreover, we can further accelerate our evaluation protocol by resampling, as shown in Algorithm 6. The basic idea is that we keep a library of different hyperparameter settings. At the beginning, the library is empty (a minimal sketch of this caching idea is given below).
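The following minimal Python sketch (our own illustrative code, with hypothetical names such as `train_one_epoch`) shows the caching idea behind this library: per-configuration trajectories are stored epoch by epoch, so a re-sampled configuration only trains beyond its cached epochs.

```python
# config_id -> list of per-epoch metrics computed so far
library = {}

def evaluate(config_id, epochs, train_one_epoch):
    """Return the metric of `config_id` at `epochs`, training only if needed.

    `train_one_epoch(config_id, epoch)` is an assumed callback that advances
    the given configuration by one epoch and returns its evaluation metric.
    (Model-state resumption across calls is glossed over in this sketch.)
    """
    trajectory = library.setdefault(config_id, [])
    while len(trajectory) < epochs:  # cache miss: extend the trajectory
        trajectory.append(train_one_epoch(config_id, epoch=len(trajectory)))
    return trajectory[epochs - 1]    # cache hit: retrieve directly
```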
In each repetition, we sample the number of configurations required by running Hyperband once. During the simulation of Hyperband, we just retrieve the value from the library if the desired epoch of the current configuration is contained in the library. Otherwise, we run this configuration based on Hyperband, and store that piece of the trajectory in the library.
Algorithm 6 Alternative End-to-End Efficiency Evaluation Protocol Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, search space F, library size S
1: for o ∈ O do 2: Sample S configurations from F, initialize the library with an empty list for each setting 3: for i = 1 to M do 4: Simulate Hyperband with o using configurations re-sampled from the library on a 5: if the desired accuracy is pre-computed in the library then 6: Retrieve the value directly 7: else 8: Train normally, and store the values in the library 9: end if 10: Record the performance trajectory {Pt}Tt=1 explored by Hyperband 11: Calculate the peak performance and CPE by Eq. 1 12: end for 13: Average peak and CPE values over M repetitions for the optimizer o 14: end for 15: Evaluate optimizers according to their peak and CPE values" }, { "heading": "F ADDITIONAL RESULTS", "text": "More detailed experimental results are reported in this section.
F.1 IMPACT OF η
Since there is an extra hyperparameter, the reduction factor η in Hyperband, we conduct an experiment with different values (η = 2, 3, 4, 5) to observe the potential impact of this additional hyperparameter on our evaluation. Specifically, we use Hyperband to tune the learning rate for three optimizers, SGD, Adam, and Lookahead, on CIFAR10, and the results are presented in Table 8. As we can see, although the change of η may lead to different CPE values, the relative ranking among the three optimizers remains unchanged. Besides, they all achieve comparable peak performance at the end of training. Considering the efficiency of Hyperband, we choose η = 3 based on the convention in Li et al. (2017) in all our experiments.
F.2 END-TO-END TRAINING
Table 9 shows the peak performance of the optimizers on each task. For GAN, we only conduct evaluation on optimizers tuning the learning rate due to the time limit, and present its CPE and peak performance in Table 10. There is also an end-to-end training curve for GAN on CIFAR10 in Figure 6. Figures for end-to-end training curves on the rest of the tasks are shown in Figures 9 and 10.
Optimizer GAN-CIFAR10↓ Tune learning rate only: SGD 113.25 (50.08) Adam 77.04 (25.14) RAdam 73.85 (19.61) Yogi 76.80 (25.36) LARS 157.71 (73.82) LAMB 68.47 (25.55) Lookahead 65.61 (20.40)
Besides the general performance profile, we present a probabilistic version described in Barreto et al. (2010) in Figure 7 to account for randomness. The probabilistic performance profile can be formulated as
ρ̄o(τ) = (1/|A|) ∑_a P(r̄o,a ≤ τ), with r̄o,a ∼ N(µo,a/ba, σo,a²/ba²), (3)
where µo,a and σo,a are the mean and standard deviation of the CPE of the optimizer o on the task a respectively, and ba is the best expectation of CPE on a among all optimizers. It can be seen in Figure 7 that the probabilistic performance profile shows a similar trend to Figure 4.
We also attach two end-to-end training trajectories on CIFAR10 with error bars in Figure 8.
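For reference, a minimal Python sketch of the deterministic performance profile from Eq. (2) of Section 4.1 (the counterpart of the probabilistic version above) is given below; the CPE values in the example are made up for illustration, and higher CPE is assumed to be better.

```python
import numpy as np

def performance_profile(cpe, taus):
    """Deterministic performance profile rho_o(tau) from Eq. (2).

    cpe: array of shape (num_optimizers, num_tasks); higher CPE is better.
    taus: iterable of thresholds tau >= 1.
    Returns an array of shape (num_optimizers, len(taus)).
    """
    cpe = np.asarray(cpe, dtype=float)
    # r_{o,a} = max_{o'} CPE_{o',a} / CPE_{o,a} >= 1; the task's best optimizer has r = 1
    r = cpe.max(axis=0, keepdims=True) / cpe
    return np.stack([(r <= tau).mean(axis=1) for tau in taus], axis=1)

# Example: three optimizers on four tasks (made-up CPE values)
profile = performance_profile([[0.90, 0.80, 0.70, 0.60],
                               [0.80, 0.90, 0.60, 0.70],
                               [0.70, 0.70, 0.80, 0.50]],
                              taus=[1.0, 1.1, 1.3])
print(profile)
```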
Since it is hard to distinguish the optimizers in Figure 8 once the standard deviation is added, we instead report the standard deviation of CPE and peak performance in Tables 3 and 9.
F.3 DATA ADDITION TRAINING
We provide training curves for data-addition training on the full MRPC and PPI datasets in Figure 11." } ]
2020
HOW MUCH PROGRESS HAVE WE MADE IN NEURAL NETWORK TRAINING? A NEW EVALUATION PROTOCOL
SP:8ff9e46f3d6f0c6d74158383600839bdd97478af
[ "The paper develops a method to learn a binary classifier based only on pairwise comparison data. For example, the classifier learns to classify pictures of people as \"adult\" versus \"child\" based on pairwise comparisons of the form \"person C is older than person X\". The authors derive their method based on an empirical risk minimization argument. The authors test their methods on four standard data sets (three MNIST variants and one more). They compare to some baselines including binary biased, noisy unbiased, and RankPruning. They try 4 different variations of their method. The Pcomp-Teacher model performs especially well." ]
Ordinary (pointwise) binary classification aims to learn a binary classifier from pointwise labeled data. However, such pointwise labels may not be directly accessible due to privacy, confidentiality, or security considerations. In this case, can we still learn an accurate binary classifier? This paper proposes a novel setting, namely pairwise comparison (Pcomp) classification, where we are given only pairs of unlabeled data that we know one is more likely to be positive than the other, instead of pointwise labeled data. Compared with pointwise labels, pairwise comparisons are easier to collect, and Pcomp classification is useful for subjective classification tasks. To solve this problem, we present a mathematical formulation for the generation process of pairwise comparison data, based on which we exploit an unbiased risk estimator (URE) to train a binary classifier by empirical risk minimization and establish an estimation error bound. We first prove that a URE can be derived and improve it using correction functions. Then, we start from the noisy-label learning perspective to introduce a progressive URE and improve it by imposing consistency regularization. Finally, experiments validate the effectiveness of our proposed solutions for Pcomp classification.
[]
[ { "authors": [ "Han Bao", "Gang Niu", "Masashi Sugiyama" ], "title": "Classification from pairwise similarity and unlabeled data", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "JMLR, 3(11):463–482,", "year": 2002 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical Japanese literature", "venue": "arXiv preprint arXiv:1812.01718,", "year": 2018 }, { "authors": [ "Zheng-Hang Cui", "Nontawat Charoenphakdee", "Issei Sato", "Masashi Sugiyama" ], "title": "Classification from triplet comparison data", "venue": "Neural Computation,", "year": 2020 }, { "authors": [ "Marthinus C. du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Analysis of learning from positive and unlabeled data", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Marthinus C. du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Convex formulation for learning from positive and unlabeled data", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Lei Feng", "Takuo Kaneko", "Bo Han", "Gang Niu", "Bo An", "Masashi Sugiyama" ], "title": "Learning with multiple complementary labels", "venue": "In ICML, pp. in press,", "year": 2020 }, { "authors": [ "Lei Feng", "Jia-Qi Lv", "Bo Han", "Miao Xu", "Gang Niu", "Xin Geng", "Bo An", "Masashi Sugiyama" ], "title": "Provably consistent partial-label learning", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "arXiv preprint arXiv:1712.06541,", "year": 2017 }, { "authors": [ "Chen Gong", "Hong Shi", "Tong-Liang Liu", "Chuang Zhang", "Jian Yang", "Da-Cheng Tao" ], "title": "Loss decomposition and centroid estimation for positive and unlabeled learning", "venue": null, "year": 2019 }, { "authors": [ "Bo Han", "Quan-Ming Yao", "Xing-Rui Yu", "Gang Niu", "Miao Xu", "Wei-Hua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Kai-Ming He", "Xiang-Yu Zhang", "Shao-Qing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Takashi Ishida", "Gang Niu", "Weihua Hu", "Masashi Sugiyama" ], "title": "Learning from complementary labels", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Takashi Ishida", "Gang Niu", "Masashi Sugiyama" ], "title": "Binary classification for positive-confidence data", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Takashi Ishida", "Gang Niu", "Aditya Krishna Menon", "Masashi Sugiyama" ], "title": "Complementary-label learning for arbitrary losses and models", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Daniel M Kane", "Shachar Lovett", "Shay Moran", "Jiapeng Zhang" ], "title": "Active classification with comparison queries", "venue": "In FOCS, pp. 
355–366", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Ryuichi Kiryo", "Gang Niu", "Marthinus C. du Plessis", "Masashi Sugiyama" ], "title": "Positive-unlabeled learning with non-negative risk estimator", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Nan Lu", "Gang Niu", "Aditya K. Menon", "Masashi Sugiyama" ], "title": "On the minimal supervision for training any binary classifier from only unlabeled data", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Nan Lu", "Tian-Yi Zhang", "Gang Niu", "Masashi Sugiyama" ], "title": "Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Jia-Qi Lv", "Miao Xu", "Lei Feng", "Gang Niu", "Xin Geng", "Masashi Sugiyama" ], "title": "Progressive identification of true labels for partial-label learning", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Shahar Mendelson" ], "title": "Lower bounds for the empirical minimization algorithm. TIT", "venue": null, "year": 2008 }, { "authors": [ "Aditya Menon", "Brendan Van Rooyen", "Cheng Soon Ong", "Bob Williamson" ], "title": "Learning from corrupted binary labels via class-probability estimation", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "Mehryar Mohri", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Foundations of Machine Learning", "venue": null, "year": 2012 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In NeurIPS, pp", "year": 2013 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Gang Niu", "Wittawat Jitkrittum", "Bo Dai", "Hirotaka Hachiya", "Masashi Sugiyama" ], "title": "Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning", "venue": "In ICML, pp", "year": 2013 }, { "authors": [ "Curtis G Northcutt", "Tailin Wu", "Isaac L Chuang" ], "title": "Learning with confident examples: Rank pruning for robust classification with noisy labels", "venue": null, "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Vikas C Raykar", "Shipeng 
Yu", "Linda H Zhao", "Gerardo Hermosillo Valadez", "Charles Florin", "Luca Bogoni", "Linda Moy" ], "title": "Learning from crowds", "venue": null, "year": 2010 }, { "authors": [ "Tomoya Sakai", "Gang Niu", "Masashi Sugiyama" ], "title": "Semi-supervised auc optimization based on positive-unlabeled learning", "venue": "MLJ,", "year": 2018 }, { "authors": [ "Kazuhiko Shinoda", "Hirotaka Kaji", "Masashi Sugiyama" ], "title": "Binary classification from positive data with skewed confidence", "venue": "In IJCAI,", "year": 2020 }, { "authors": [ "Miao Sun", "Tony X Han", "Ming-Chang Liu", "Ahmad Khodayari-Rostamabad" ], "title": "Multiple instance learning convolutional neural networks for object recognition", "venue": "In ICPR,", "year": 2016 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Hong-Xin Wei", "Lei Feng", "Xiang-Yu Chen", "Bo An" ], "title": "Combating noisy labels by agreement: A joint training method with co-regularization", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Jacob Whitehill", "Ting-fan Wu", "Jacob Bergsma", "Javier R Movellan", "Paul L Ruvolo" ], "title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise", "venue": null, "year": 2009 }, { "authors": [ "Xiao-Bo Xia", "Tong-Liang Liu", "Nan-Nan Wang", "Bo Han", "Chen Gong", "Gang Niu", "Masashi Sugiyama" ], "title": "Are anchor points really indispensable in label-noise learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Li-Yuan Xu", "Junya Honda", "Gang Niu", "Masashi Sugiyama" ], "title": "Uncoupled regression from pairwise comparison data", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Yichong Xu", "Hongyang Zhang", "Kyle Miller", "Aarti Singh", "Artur Dubrawski" ], "title": "Noise-tolerant interactive learning using pairwise comparisons", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Yichong Xu", "Sivaraman Balakrishnan", "Aarti Singh", "Artur Dubrawski" ], "title": "Regression with comparisons: Escaping the curse of dimensionality with ordinal information", "venue": null, "year": 2020 }, { "authors": [ "Xi-Yu Yu", "Tong-Liang Liu", "Ming-Ming Gong", "Da-Cheng Tao" ], "title": "Learning with biased complementary labels", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Min-Ling Zhang", "Fei Yu", "Cai-Zhi Tang" ], "title": "Disambiguation-free partial label", "venue": "learning. TKDE,", "year": 2017 }, { "authors": [ "Ya-Lin Zhang", "Zhi-Hua Zhou" ], "title": "Multi-instance learning with key instance shift", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "Zhi-Hua Zhou" ], "title": "A brief introduction to weakly supervised learning", "venue": "National Science Review,", "year": 2018 }, { "authors": [ "Mohri" ], "title": "PC(f)∣∣∣ , where the second inequality holds due to Theorem 3. As suggested by Lemma 2, we need to further upper bound the right hand size of Eq", "venue": null, "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Traditional supervised learning techniques have achieved great advances, while they are demanding for precisely labeled data. In many real-world scenarios, it may be too difficult to collect such data. To alleviate this issue, a large number of weakly supervised learning problems (Zhou, 2018) have been extensively studied, including semi-supervised learning (Zhu & Goldberg, 2009; Niu et al., 2013; Sakai et al., 2018), multi-instance learning (Zhou et al., 2009; Sun et al., 2016; Zhang & Zhou, 2017), noisy-label learning (Han et al., 2018; Xia et al., 2019; Wei et al., 2020), partial-label learning (Zhang et al., 2017; Feng et al., 2020b; Lv et al., 2020), complementary-label learning (Ishida et al., 2017; Yu et al., 2018; Ishida et al., 2019; Feng et al., 2020a), positive-unlabeled classification (Gong et al., 2019), positive-confidence classification (Ishida et al., 2018), similarunlabeled classification (Bao et al., 2018), unlabeled-unlabeled classification (Lu et al., 2019; 2020), and triplet classification (Cui et al., 2020).\nThis paper considers another novel weakly supervised learning setting called pairwise comparison (Pcomp) classification, where we aim to perform pointwise binary classification with only pairwise comparison data, instead of pointwise labeled data. A pairwise comparison (x,x′) represents that the instance x has a larger confidence of belonging to the positive class than the instance x′. Such weak supervision (pairwise confidence comparison) could be much easier for people to collect than full supervision (pointwise label) in practice, especially for applications on sensitive or private matters. For example, it may be difficult to collect sensitive or private data with pointwise labels, as asking for the true labels could be prohibited or illegal. In this case, it could be easier for people to collect other weak supervision like the comparison information between two examples.\nIt is also advantageous to consider pairwise confidence comparisons in pointwise binary classification with class overlapping, where the labeling task becomes difficult, and even experienced labelers may provide wrong pointwise labels. Let us denote the labeling standard of a labeler as p̃(y|x) and assume that an instance x1 is more positive than another instance x2. Facing the difficult labeling task, different labelers may hold different labeling standards, p̃(y = +1|x1) > p̃(y = +1|x2) > 1/2, p̃(y = +1|x1) > 1/2 > p̃(y = +1|x2), and 1/2 > p̃(y = +1|x1) > p̃(y = +1|x2), thereby\nproviding different pointwise labels: (+1,+1), (+1,−1), (−1,−1). We can find that different labelers may provide inconsistent pointwise labels, while pairwise confidence comparisons are unanimous and accurate. One may argue that we could aggregate multiple labels of the same instance using crowdsourcing learning methods (Whitehill et al., 2009; Raykar et al., 2010). However, as not every instance will be labeled by multiple labelers, it is not always applicable to crowdsourcing learning methods. 
Therefore, our proposed Pcomp classification is useful in this case.\nOur contributions in this paper can be summarized as follows:\n• We propose Pcomp classification, a novel weakly supervised learning setting, and present a mathematical formulation for the generation process of pairwise comparison data.\n• We prove that an unbiased risk estimator (URE) can be derived, propose an empirical risk minimization (ERM) based method, and present an improvement using correction functions (Lu et al., 2020) for alleviating overftting when complex models are used.\n• We start from the noisy-label learning perspective to introduce the RankPruning method (Northcutt et al., 2017) that holds a progressive URE for solving our proposed Pcomp classification problem and improve it by imposing consistency regularization.\n• We experimentally demonstrate the effectiveness of our proposed solutions for Pcomp classification." }, { "heading": "2 PRELIMINARIES", "text": "Binary classification with pairwise comparisons and extra pointwise labels has been studied (Xu et al., 2017; Kane et al., 2017). Our paper focuses on a more challenging problem where only pairwise comparison examples are provided. Unlike previous studies (Xu et al., 2017; Kane et al., 2017) that leverage some pointwise labels to differentiate the labels of pairwise comparisons, our methods are purely based on ERM with only pairwise comparisons. In the next, we briefly introduce some notations and review the related problem formulations of binary classification, positive-unlabeled classification, and unlabeled-unlabeled classification.\nBinary Classification. Since our paper focuses on how to train a binary classifier from pairwise comparison data, we first review the problem formulation of binary classification. Let the feature space be X and the label space be Y = {+1,−1}. Suppose the collected dataset is denoted by D = {(xi, yi)}ni=1 where each example (xi, yi) is independently sampled from the joint distribution with density p(x, y), which includes an instance xi ∈ X and a label yi ∈ Y . The goal of binary classification is to train an optimal classifier f : X 7→ R by minimizing the following expected classification risk:\nR(f) = Ep(x,y) [ `(f(x), y) ] = π+Ep+(x) [ `(f(x),+1) ] + π−Ep−(x) [ `(f(x),−1) ] , (1)\nwhere ` : R × Y 7→ R+ denotes a binary loss function, π+ := p(y = +1) (or π− := p(y = −1)) denotes the positive (or negative) class prior probability, and p+(x) := p(x|y = +1) (or p−(x) := p(x|y = −1)) denotes the class-conditional probability density of the positive (or negative) data. ERM approximates the expectations over p+(x) and p−(x) by the empirical averages of positive and negative data and the empirical risk is minimized with respect to the classifier f .\nPositive-Unlabeled (PU) Classification. In some real-world scenarios, it may be difficult to collect negative data, and only positive (P) and unlabeled (U) data are available. PU classification aims to train an effective binary classifier in this weakly supervised setting. Previous studies (du Plessis et al., 2014; 2015; Kiryo et al., 2017) showed that the classification risk R(f) in Eq. (1) can be rewritten only in terms of positive and unlabeled data as\nR(f) = RPU(f) = π+Ep+(x) [ `(f(x),+1)− `(f(x),−1) ] + Ep(x) [ `(f(x),−1) ] , (2)\nwhere p(x) = π+p+(x) + π−p−(x) denotes the probability density of unlabeled data. This risk expression immediately allows us to employ ERM in terms of positive and unlabeled data.\nUnlabeled-Unlabeled (UU) Classification. 
The recent studies (Lu et al., 2019; 2020) showed that it is possible to train a binary classifier only from two unlabeled datasets with different class priors.\nLu et al. (2019) showed that the classification risk can be rewritten as R(f) = RUU(f) = Eptr(x) [ (1− θ′)π+\nθ − θ′ `(f(x),+1)− θ ′(1− π+) θ − θ′\n`(f(x),−1) ]\n+ Eptr′ (x′) [θ(1− π+) θ − θ′ `(f(x′),−1)− (1− θ)π+ θ − θ′ `(f(x′),+1) ] , (3)\nwhere θ and θ′ are different class priors of two unlabeled datasets, and ptr(x) and ptr′(x′) are the densities of two datasets of unlabeled data, respectively. This risk expression immediately allows us to employ ERM only from two sets of unlabeled data. For RUU(f) in Eq. (3), if we set θ = 1, θ′ = π+, and replace ptr(x) and ptr′(x′) by p+(x) and p(x) respectively, then we can recover RPU(f) in Eq. (2). Therefore, UU classification could be taken as a generalized framework of PU classification in terms of URE. Besides, Eq. (3) also recovers a complicated URE of similarunlabeled classification (Bao et al., 2018) by setting θ = π+ and θ′ = π2+/(2π 2 + − 2π+ + 1).\nTo solve our proposed Pcomp classification problem, we will present a mathematical formulation for the generation process of pairwise comparison data, based on which we will explore two UREs to train a binary classifier by ERM and establish the corresponding estimation error bounds." }, { "heading": "3 DATA GENERATION PROCESS", "text": "In order to derive UREs for performing ERM, we first formulate the underlying generation process of pairwise comparison data1, which consists of pairs of unlabeled data that we know which one is more likely to be positive. Suppose the provided dataset is denoted by D̃ = {(xi,x′i)}ni=1 where (xi,x′i) (with their unknown true labels (yi, y′i)) is expected to satisfy p(yi = +1|xi) > p(y′i = +1|x′i). It is clear that we could easily collect pairwise comparison data if the positive confidence (i.e., p(y = +1|x)) of each instance could be obtained. However, such information is much harder to obtain than class labels in real-world scenarios. Therefore, unlike some studies (Ishida et al., 2018; Shinoda et al., 2020) that assume the positive confidence of each instance is provided by the labeler, we only assume that the labeler has access to the labels of training data. Specifically, we adopt the assumption (Cui et al., 2020) that weakly supervised examples are first sampled from the true data distribution, but the labels are only accessible to the labeler. Then, the labeler would provide us weakly supervised information (i.e., pairwise comparison information) according to the labels of sampled data pairs. That is, for any pair of unlabeled data (x,x′), the labeler would tell us whether (x,x′) could be collected as a pairwise comparison for Pcomp classification, based on the labels (y, y′) rather than the positive confidences (p(y = +1|x), p(y = +1|x′)). Now, the question becomes: how does the labeler consider (x,x′) as a pairwise comparison for Pcomp classification, in terms of the labels (y, y′)? As shown in our previous example of binary classification with class overlapping, we could infer that the labels (y, y′) of our required pairwise comparison data (x,x′) for Pcomp classification can only be one of the three cases {(+1,−1), (+1,+1), (−1,−1)}, because the condition p(y = +1|x) ≥ p(y′ = +1|x′) is definitely violated if (y, y′) = (−1,+1). 
Based on the above described generation process of pairwise comparison data, we have the following theorem.\nTheorem 1. According to the generation process of pairwise comparison data described above, let\n\tilde{p}(x, x') = \frac{q(x, x')}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, (4)\nwhere q(x, x') = \pi_+^2 p_+(x) p_+(x') + \pi_-^2 p_-(x) p_-(x') + \pi_+\pi_- p_+(x) p_-(x'). Then we have \tilde{D} = \{(x_i, x'_i)\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} \tilde{p}(x, x').\n(1) In contrast to Xu et al. (2019) and Xu et al. (2020), which utilized pairwise comparison data to solve the regression problem, we focus on binary classification.\nThe proof is provided in Appendix A. Theorem 1 provides an explicit expression for the probability density of pairwise comparison data.\nNext, we would like to extract pointwise information from the pairwise information, since our goal is to perform pointwise binary classification. Let \tilde{\pi} = \pi_+^2 + \pi_-^2 + \pi_+\pi_- = \pi_+ + \pi_-^2 = \pi_+^2 + \pi_-, and denote the pointwise data collected from \tilde{D} = \{(x_i, x'_i)\}_{i=1}^{n} by breaking the pairwise comparison relation as \tilde{D}_+ = \{x_i\}_{i=1}^{n} and \tilde{D}_- = \{x'_i\}_{i=1}^{n}. Then we can obtain the following theorem.\nTheorem 2. Pointwise examples in \tilde{D}_+ and \tilde{D}_- are independently drawn from \tilde{p}_+(x) and \tilde{p}_-(x'), where\n\tilde{p}_+(x) = \frac{\pi_+}{\pi_-^2 + \pi_+} p_+(x) + \frac{\pi_-^2}{\pi_-^2 + \pi_+} p_-(x), \quad \tilde{p}_-(x') = \frac{\pi_+^2}{\pi_+^2 + \pi_-} p_+(x') + \frac{\pi_-}{\pi_+^2 + \pi_-} p_-(x').\nThe proof is provided in Appendix B. Theorem 2 shows the relationships between the pointwise densities and the class-conditional densities. Besides, it indicates that from pairwise comparison data we can essentially obtain examples that are independently drawn from \tilde{p}_+(x) and \tilde{p}_-(x')." }, { "heading": "4 THE PROPOSED METHODS", "text": "In this section, we explore two UREs to train a binary classifier by ERM from only pairwise comparison data generated by the above process." }, { "heading": "4.1 CORRECTED PCOMP CLASSIFICATION", "text": "As shown in Eq. (1), the classification risk R(f) can be expressed separately through expectations over p_+(x) and p_-(x). Although we do not have access to the two class-conditional densities p_+(x) and p_-(x), we can represent them by our introduced pointwise densities \tilde{p}_+(x) and \tilde{p}_-(x).\nLemma 1. We can express p_+(x) and p_-(x) in terms of \tilde{p}_+(x) and \tilde{p}_-(x) as\np_+(x) = \frac{1}{\pi_+}\big(\tilde{p}_+(x) - \pi_-\tilde{p}_-(x)\big), \quad p_-(x) = \frac{1}{\pi_-}\big(\tilde{p}_-(x) - \pi_+\tilde{p}_+(x)\big).\nThe proof is provided in Appendix C. As a result of Lemma 1, we can express the classification risk R(f) using only pairwise comparison data sampled from \tilde{p}_+(x) and \tilde{p}_-(x).\nTheorem 3. The classification risk R(f) can be equivalently expressed as\nR_{PC}(f) = \mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1) - \pi_+\ell(f(x),-1)] + \mathbb{E}_{\tilde{p}_-(x')}[\ell(f(x'),-1) - \pi_-\ell(f(x'),+1)]. (5)\nThe proof is provided in Appendix D. In this way, we can train a binary classifier by minimizing the following empirical approximation of R_{PC}(f):\n\hat{R}_{PC}(f) = \frac{1}{n}\sum_{i=1}^{n}\big(\ell(f(x_i),+1) - \pi_+\ell(f(x_i),-1) + \ell(f(x'_i),-1) - \pi_-\ell(f(x'_i),+1)\big). (6)
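A minimal PyTorch sketch of the empirical risk in Eq. (6), assuming the logistic loss that is used in the experiments later; the tensor and function names are our own.

```python
import torch
import torch.nn.functional as F

def logistic_loss(z, t):
    # l(z, t) = ln(1 + exp(-t * z)) for t in {+1, -1}
    return F.softplus(-t * z)

def pcomp_unbiased_risk(f_x, f_xp, pi_pos):
    # f_x: model outputs on the first (more likely positive) elements x_i
    # f_xp: model outputs on the second elements x'_i; pi_pos = pi_+
    pi_neg = 1.0 - pi_pos
    return (logistic_loss(f_x, +1) - pi_pos * logistic_loss(f_x, -1)
            + logistic_loss(f_xp, -1) - pi_neg * logistic_loss(f_xp, +1)).mean()
```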
Estimation Error Bound. Here, we establish an estimation error bound for the proposed URE. Let \mathcal{F} = \{f : \mathcal{X} \to \mathbb{R}\} be the model class, \hat{f}_{PC} = \arg\min_{f \in \mathcal{F}} \hat{R}_{PC}(f) be the empirical risk minimizer, and f^\star = \arg\min_{f \in \mathcal{F}} R(f) be the true risk minimizer. Let \tilde{\mathfrak{R}}^+_n(\mathcal{F}) and \tilde{\mathfrak{R}}^-_n(\mathcal{F}) be the Rademacher complexities (Bartlett & Mendelson, 2002) of \mathcal{F} with sample size n over \tilde{p}_+(x) and \tilde{p}_-(x), respectively.\nTheorem 4. Suppose the loss function \ell is \rho-Lipschitz with respect to the first argument (0 < \rho < \infty), and all functions in the model class \mathcal{F} are bounded, i.e., there exists a positive constant C_b such that \|f\|_\infty \le C_b for any f \in \mathcal{F}. Let C_\ell := \sup_{z \le C_b, t = \pm 1} \ell(z, t). Then for any \delta > 0, with probability at least 1 - \delta, we have\nR(\hat{f}_{PC}) - R(f^\star) \le (1 + \pi_+)\,4\rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}) + (1 + \pi_-)\,4\rho\,\tilde{\mathfrak{R}}^-_n(\mathcal{F}) + 6C_\ell\sqrt{\frac{\log(8/\delta)}{2n}}.\nThe proof is provided in Appendix E. Theorem 4 shows that our proposed method is consistent, i.e., R(\hat{f}_{PC}) \to R(f^\star) as n \to \infty, since \tilde{\mathfrak{R}}^+_n(\mathcal{F}), \tilde{\mathfrak{R}}^-_n(\mathcal{F}) \to 0 for all parametric models with a bounded norm, such as deep neural networks trained with weight decay (Golowich et al., 2017; Lu et al., 2019). Besides, \tilde{\mathfrak{R}}^+_n(\mathcal{F}) and \tilde{\mathfrak{R}}^-_n(\mathcal{F}) can normally be bounded by C_{\mathcal{F}}/\sqrt{n} for a positive constant C_{\mathcal{F}}. Hence, we can further see that the convergence rate is O_p(1/\sqrt{n}), where O_p denotes the order in probability. This order is the optimal parametric rate for ERM without additional assumptions (Mendelson, 2008).\nRelation to UU Classification. It is worth noting that the URE of UU classification, R_{UU}(f), is quite general for binary classification with weak supervision. Hence, we also show the relationship between our proposed estimator R_{PC}(f) and R_{UU}(f). We demonstrate by the following corollary that, under some conditions, R_{UU}(f) is equivalent to R_{PC}(f).\nCorollary 1. By setting p_{tr} = \tilde{p}_+(x), p'_{tr} = \tilde{p}_-(x), \theta = \pi_+/(1 - \pi_+ + \pi_+^2), and \theta' = \pi_+^2/(1 - \pi_+ + \pi_+^2), Eq. (3) is equivalent to Eq. (5), which means that R_{UU}(f) is equivalent to R_{PC}(f).\nWe omit the proof of Corollary 1, since it is straightforward to derive Eq. (5) from Eq. (3) by inserting the required notations.\nEmpirical Risk Correction. As shown in Lu et al. (2020), directly minimizing \hat{R}_{PC}(f) suffers from overfitting when complex models are used, due to the negative-risk issue. More specifically, since negative terms are included in Eq. (6), the empirical risk can become negative even though the true risk can never be negative. To ease this problem, Lu et al. (2020) wrapped the terms in \hat{R}_{UU}(f) that cause a negative empirical risk with consistent correction functions, such as the rectified linear unit (ReLU) function g(z) = \max(0, z) and the absolute value function g(z) = |z|. This solution can also be applied to \hat{R}_{PC}. In this way, we obtain the following corrected empirical risk estimator:\n\hat{R}_{cPC}(f) = g\Big(\frac{1}{n}\sum_{i=1}^{n}\big(\ell(f(x_i),+1) - \pi_-\ell(f(x'_i),+1)\big)\Big) + g\Big(\frac{1}{n}\sum_{i=1}^{n}\big(\ell(f(x'_i),-1) - \pi_+\ell(f(x_i),-1)\big)\Big). (7)
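Under the same assumptions as the previous sketch, the corrected estimator in Eq. (7) only wraps the two partial risks with a correction function g:

```python
import torch
import torch.nn.functional as F

def pcomp_corrected_risk(f_x, f_xp, pi_pos, g=torch.relu):
    # g is a consistent correction function: torch.relu or torch.abs
    logistic_loss = lambda z, t: F.softplus(-t * z)
    pi_neg = 1.0 - pi_pos
    pos_part = (logistic_loss(f_x, +1) - pi_neg * logistic_loss(f_xp, +1)).mean()
    neg_part = (logistic_loss(f_xp, -1) - pi_pos * logistic_loss(f_x, -1)).mean()
    return g(pos_part) + g(neg_part)
```

Passing g=torch.relu corresponds to Pcomp-ReLU and g=torch.abs to Pcomp-ABS in the experiments of Section 5.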
" }, { "heading": "4.2 PROGRESSIVE PCOMP CLASSIFICATION", "text": "Here, we start from the noisy-label learning perspective to solve the Pcomp classification problem. Intuitively, we could simply perform binary classification by regarding the data from \tilde{p}_+(x) as (noisy) positive data and the data from \tilde{p}_-(x) as (noisy) negative data. However, this naive solution is inevitably affected by label noise. In this scenario, we denote the noise rates by \rho_- = p(\tilde{y} = +1 \mid y = -1) and \rho_+ = p(\tilde{y} = -1 \mid y = +1), where \tilde{y} is the observed (noisy) label and y is the true label, and the inverse noise rates by \phi_+ = p(y = -1 \mid \tilde{y} = +1) and \phi_- = p(y = +1 \mid \tilde{y} = -1). According to the defined generation process of pairwise comparison data, we have the following theorem.\nTheorem 5. The following equalities hold:\n\phi_+ = \frac{\pi_-^2}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, \quad \phi_- = \frac{\pi_+^2}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, \quad \rho_+ = \frac{\pi_+}{2(\pi_+^2 + \pi_-^2 + \pi_+\pi_-)}, \quad \rho_- = \frac{\pi_-}{2(\pi_+^2 + \pi_-^2 + \pi_+\pi_-)}.\nThe proof is provided in Appendix F.\nTheorem 5 shows that the noise rates can be obtained if we regard the Pcomp classification problem as a noisy-label learning problem. With known noise rates, it was shown (Natarajan et al., 2013; Northcutt et al., 2017) that a URE can be derived. Here, we adopt the RankPruning method (Northcutt et al., 2017), because it holds a progressive URE obtained by selecting confident examples with the learning model and achieves state-of-the-art performance. Specifically, let \tilde{P} = \{x_i\}_{i=1}^{n} denote the dataset of all observed positive data, where each x_i is independently sampled from \tilde{p}_+(x), and let \tilde{N} = \{x'_i\}_{i=1}^{n} denote the dataset of all observed negative data, where each x'_i is independently sampled from \tilde{p}_-(x'). Confident examples are then selected from \tilde{P} and \tilde{N} by ranking the outputs of the model f. We denote the selected positive data from \tilde{P} by \tilde{P}_{sel} and the selected negative data from \tilde{N} by \tilde{N}_{sel}:\n\tilde{P}_{sel} = \arg\max_{P : |P| = (1-\phi_+)|\tilde{P}|} \sum_{x \in P \cap \tilde{P}} f(x), \quad \tilde{N}_{sel} = \arg\min_{N : |N| = (1-\phi_-)|\tilde{N}|} \sum_{x \in N \cap \tilde{N}} f(x).\nSuppose the model f satisfies the separability condition: for any true positive instance x_p and any true negative instance x_n, we have f(x_p) > f(x_n); in other words, the model output of every true positive instance is always larger than that of every true negative instance. Under this condition we can obtain a URE, which we name the progressive URE, as the model f is progressively optimized.\nTheorem 6 (Theorem 5 in Northcutt et al. (2017)). Assume that the model f satisfies the above separability condition. Then the classification risk R(f) can be equivalently expressed as\nR_{pPC}(f) = \mathbb{E}_{\tilde{p}_+(x)}\Big[\frac{\ell(f(x),+1)}{1-\rho_+}\,\mathbb{I}[x \in \tilde{P}_{sel}]\Big] + \mathbb{E}_{\tilde{p}_-(x')}\Big[\frac{\ell(f(x'),-1)}{1-\rho_-}\,\mathbb{I}[x' \in \tilde{N}_{sel}]\Big],\nwhere \mathbb{I}[\cdot] is the indicator function.\nIn this way, we have the following empirical approximation of R_{pPC}:\n\hat{R}_{pPC}(f) = \frac{1}{n}\sum_{i=1}^{n}\Big(\frac{\ell(f(x_i),+1)}{1-\rho_+}\,\mathbb{I}[x_i \in \tilde{P}_{sel}] + \frac{\ell(f(x'_i),-1)}{1-\rho_-}\,\mathbb{I}[x'_i \in \tilde{N}_{sel}]\Big). (8)
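A sketch of the selection rule and the progressive risk in Eq. (8), where phi_pos/phi_neg and rho_pos/rho_neg are the quantities from Theorem 5; implementing the selection via top-k masking is our own reading of the selection equations, not a prescribed implementation.

```python
import torch
import torch.nn.functional as F

def rankpruning_risk(f_x, f_xp, phi_pos, phi_neg, rho_pos, rho_neg):
    # Keep the top (1 - phi_+) fraction of observed positives by model score
    # and the bottom (1 - phi_-) fraction of observed negatives, then average
    # the importance-corrected losses over all n examples as in Eq. (8).
    logistic_loss = lambda z, t: F.softplus(-t * z)
    n = f_x.shape[0]
    pos_mask = torch.zeros(n)
    neg_mask = torch.zeros(n)
    pos_mask[torch.topk(f_x, int((1 - phi_pos) * n)).indices] = 1.0
    neg_mask[torch.topk(-f_xp, int((1 - phi_neg) * n)).indices] = 1.0
    return (logistic_loss(f_x, +1) * pos_mask / (1 - rho_pos)
            + logistic_loss(f_xp, -1) * neg_mask / (1 - rho_neg)).mean()
```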
Estimation Error Bound. It is worth noting that Northcutt et al. (2017) did not prove the learning consistency of the RankPruning method. Here, we establish an estimation error bound for this method, which guarantees its learning consistency. Let \hat{f}_{pPC} = \arg\min_{f \in \mathcal{F}} \hat{R}_{pPC}(f) be the empirical risk minimizer of the RankPruning method; then we have the following theorem.\nTheorem 7. Suppose the loss function \ell is \rho-Lipschitz with respect to the first argument (0 < \rho < \infty), and all functions in the model class \mathcal{F} are bounded, i.e., there exists a positive constant C_b such that \|f\|_\infty \le C_b for any f \in \mathcal{F}. Let C_\ell := \sup_{z \le C_b, t = \pm 1} \ell(z, t). Then for any \delta > 0, with probability at least 1 - \delta, we have\nR(\hat{f}_{pPC}) - R(f^\star) \le \frac{2}{1-\rho_+}\Big(2\rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}) + C_\ell\sqrt{\frac{\log(4/\delta)}{2n}}\Big) + \frac{2}{1-\rho_-}\Big(2\rho\,\tilde{\mathfrak{R}}^-_n(\mathcal{F}) + C_\ell\sqrt{\frac{\log(4/\delta)}{2n}}\Big).\nThe proof is provided in Appendix G. Theorem 7 shows that the above method is consistent, and that this estimation error bound also attains the optimal convergence rate without any additional assumption (Mendelson, 2008), as analyzed in Theorem 4.\nRegularization. The URE of the RankPruning method above relies on the assumption that the learning model satisfies the separability condition, so its performance heavily depends on the accuracy of the learning model. However, as the learning model is progressively updated, some of the selected confident examples may still contain label noise during the training process. As a result, the RankPruning method can be affected by incorrectly selected data. A straightforward improvement is therefore to improve the output quality of the learning model. Motivated by Mean Teacher in semi-supervised learning (Tarvainen & Valpola, 2017), we resort to a teacher model that is an exponential moving average of model snapshots, i.e., \Theta'_t = \alpha\Theta'_{t-1} + (1-\alpha)\Theta_t, where \Theta' denotes the parameters of the teacher model, \Theta denotes the parameters of the learning model, the subscript t denotes the training step, and \alpha is a smoothing-coefficient hyperparameter. Such a teacher model can guide the learning model to produce high-quality outputs. To learn from the teacher model, we leverage the consistency regularization \Omega(f) = \mathbb{E}_x[\|f_\Theta(x) - f_{\Theta'}(x)\|^2] (Laine & Aila, 2016; Tarvainen & Valpola, 2017) to make the learning model consistent with the teacher model, thereby improving the RankPruning method.
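A minimal sketch of the teacher update and the consistency term, assuming two torch.nn.Module instances with identical architectures; the smoothing-coefficient value is illustrative.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Theta'_t = alpha * Theta'_{t-1} + (1 - alpha) * Theta_t
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def consistency_penalty(student_out, teacher_out):
    # Omega(f) = E_x || f_Theta(x) - f_Theta'(x) ||^2
    return ((student_out - teacher_out.detach()) ** 2).mean()
```

In training, this penalty is added to the Pcomp risk as a regularizer; the relative weighting of the two terms is a hyperparameter left unspecified here.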
" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we conduct experiments to evaluate the practical performance of our proposed methods on various datasets.\nDatasets. We use four popular benchmark datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and CIFAR-10 (Krizhevsky et al., 2009). On the first three datasets we train a multilayer perceptron (MLP) with three hidden layers of width 300, ReLU activation functions (Nair & Hinton, 2010), and batch normalization (Ioffe & Szegedy, 2015); on CIFAR-10 we train ResNet-34 (He et al., 2016). We also use USPS and three datasets from the UCI machine learning repository (Blake & Merz, 1998): Pendigits, Optdigits, and CNAE-9. We train a linear model on these datasets, since they are not large-scale. Detailed descriptions of all datasets and the corresponding models are provided in Appendix H. Since these datasets were designed for multi-class classification, we manually transformed them into binary classification datasets (see Appendix H for details). As shown in Theorem 2, pairwise comparison examples can be equivalently transformed into pointwise examples, which are more convenient to generate; we therefore generate pointwise examples in the experiments. Specifically, as Theorem 5 discloses the noise rates of our data generation process, we simply generate pointwise corrupted examples according to these noise rates.\nMethods. For our proposed Pcomp classification problem, we propose the following methods: Pcomp-Unbiased, which minimizes \hat{R}_{PC}(f) in Eq. (6); Pcomp-ReLU, which minimizes \hat{R}_{cPC}(f) in Eq. (7) with the ReLU correction function; Pcomp-ABS, which minimizes \hat{R}_{cPC}(f) in Eq. (7) with the absolute value correction function; and Pcomp-Teacher, which improves the RankPruning method by imposing consistency regularization to make the learning model consistent with a teacher model. Besides, we compare with the following baselines: Binary-Biased, which conducts binary classification by regarding the data from \tilde{p}_+(x) as positive data and the data from \tilde{p}_-(x) as negative data; this is a straightforward way to handle the Pcomp classification problem, and in our setting Binary-Biased reduces to the BER minimization method (Menon et al., 2015). Noisy-Unbiased is a noisy-label learning method that minimizes the empirical approximation of the URE proposed by Natarajan et al. (2013). RankPruning is a noisy-label learning method (Northcutt et al., 2017) that minimizes \hat{R}_{pPC}(f) in Eq. (8). For fair comparisons, all learning methods use the logistic loss as the binary loss function \ell, i.e., \ell(z) = \ln(1 + \exp(-z)). We implement our methods in PyTorch (Paszke et al., 2019) and use the Adam optimizer (Kingma & Ba, 2015) with a mini-batch size of 256 and 100 training epochs. All experiments are conducted on GeForce GTX 1080 Ti GPUs.\nExperimental Setup. We test the performance of all learning methods under different class-prior settings, with \pi_+ selected from \{0.2, 0.5, 0.8\}. It is worth noting that \pi_+ can be estimated from our described data generation process. Specifically, we can exactly estimate \tilde{\pi} by counting the fraction of collected pairwise comparison data among all sampled pairs of data. Since \tilde{\pi} = \pi_+^2 + \pi_- = \pi_+^2 + 1 - \pi_+, we have \pi_+ = 1/2 - \sqrt{\tilde{\pi} - 3/4} (if \pi_+ < \pi_-) or \pi_+ = 1/2 + \sqrt{\tilde{\pi} - 3/4} (if \pi_+ \ge \pi_-). Therefore, if we know whether \pi_+ is larger than \pi_-, we can exactly estimate the true class prior \pi_+. For simplicity, we assume that the class prior \pi_+ is known for all methods. We repeat the sampling-and-training process 5 times for all learning methods on all datasets and report the mean accuracy with standard deviation (mean±std).
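The prior recovery just described amounts to inverting \tilde{\pi} = \pi_+^2 + 1 - \pi_+, which requires knowing which class is the majority. A small sketch, with names of our own choosing:

```python
import math

def estimate_positive_prior(pair_fraction, pos_is_majority):
    # pair_fraction: fraction of sampled pairs kept as comparisons,
    # i.e., an estimate of pi~; pi~ - 3/4 >= 0 since pi~ is minimized at pi_+ = 1/2
    root = math.sqrt(max(pair_fraction - 0.75, 0.0))
    return 0.5 + root if pos_is_majority else 0.5 - root
```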
Experimental Results with Complex Models. Table 1 records the classification performance of each method on the four benchmark datasets with different class priors. From Table 1, we have the following observations: 1) Binary-Biased always achieves the worst performance, which indicates that simply conducting binary classification cannot solve our Pcomp classification problem well; 2) Pcomp-Unbiased is inferior to Pcomp-ABS and Pcomp-ReLU. This accords with our earlier discussion: directly minimizing \hat{R}_{PC}(f) suffers from overfitting when complex models are used, because \hat{R}_{PC}(f) contains negative terms and the empirical risk can become negative during training. In contrast, Pcomp-ReLU and Pcomp-ABS employ consistent correction functions on \hat{R}_{PC}(f) so that the empirical risk is never negative; hence, with complex models such as deep neural networks, Pcomp-ReLU and Pcomp-ABS are expected to outperform Pcomp-Unbiased; 3) Pcomp-Teacher achieves the best performance in most cases, which verifies the effectiveness of the imposed consistency regularization, making the learning model consistent with a teacher model, for improving the quality of the confident examples selected by the RankPruning method; 4) The standard deviations of Binary-Biased, Pcomp-Unbiased, and Noisy-Unbiased are sometimes higher than those of the other methods, because these three methods suffer from overfitting when complex models are used and their performance can be quite unstable across trials. In addition, Noisy-Unbiased attains an accuracy of 80.00±0.00% on CIFAR-10 with class prior 0.2; this extreme case happens because Noisy-Unbiased simply classifies all examples into the negative class, due to a serious overfitting issue on a complex, class-imbalanced dataset with the complex model ResNet-34.\nExperimental Results with Simple Models. Table 2 reports the classification performance of each method on the four UCI datasets with different class priors. From Table 2, we have the following observations: 1) Binary-Biased achieves the worst performance in nearly all cases; 2) Pcomp-Unbiased is slightly better than Pcomp-ReLU and Pcomp-ABS, because Pcomp-Unbiased does not suffer from overfitting when the linear model is used, so the consistent correction functions are no longer necessary. Besides, Pcomp-Unbiased becomes comparable to Pcomp-Teacher and achieves the best performance in half of the cases; 3) Pcomp-Teacher is still better than RankPruning, while it is sometimes inferior to Pcomp-Unbiased; this is because the linear model is not as powerful as neural networks, so the selected confident examples may not be as reliable." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed a novel weakly supervised learning setting called pairwise comparison (Pcomp) classification, where we aim to train a binary classifier from only pairwise comparison data, i.e., pairs of examples for which we know which one is more likely to be positive, instead of pointwise labeled data. Pcomp classification is useful for private classification tasks where we are not allowed to directly access labels, and for subjective classification tasks where labelers have different labeling standards. To solve the Pcomp classification problem, we presented a mathematical formulation for the generation process of pairwise comparison data, based on which we explored two unbiased risk estimators (UREs) to train a binary classifier by empirical risk minimization and established the corresponding estimation error bounds. We first proved that a URE can be derived and improved it using correction functions. Then, we started from the noisy-label learning perspective to introduce a progressive URE and improved it by imposing consistency regularization. Finally, experiments demonstrated the effectiveness of our proposed methods.\nIn future work, we will apply Pcomp classification to challenging real-world problems such as binary classification with class overlapping. In addition, Pcomp classification could be extended to the multi-class setting via the one-versus-all strategy: with multiple classes, we would be given pairs of unlabeled data for which we know which one is more likely to belong to a specific class; we could then use the proposed methods to train a binary classifier for each class and determine the predicted class by comparing the outputs of these binary classifiers." }, { "heading": "A PROOF OF THEOREM 1", "text": "It is clear that each pair of examples (x, x') is independently drawn from the following data distribution:\n\tilde{p}(x, x') = p((x, x') \mid (y, y') \in \tilde{\mathcal{Y}}) = \frac{p((x, x'), (y, y') \in \tilde{\mathcal{Y}})}{p((y, y') \in \tilde{\mathcal{Y}})},\nwhere \tilde{\mathcal{Y}} = \{(+1,-1), (+1,+1), (-1,-1)\} denotes the three valid label cases, p((y, y') \in \tilde{\mathcal{Y}}) = \pi_+^2 + \pi_-^2 + \pi_+\pi_-, and\np(x, x', (y, y') \in \tilde{\mathcal{Y}}) = \sum_{(y,y') \in \tilde{\mathcal{Y}}} p(x, x' \mid (y, y'))\,p(y, y') = \pi_+^2 p_+(x) p_+(x') + \pi_-^2 p_-(x) p_-(x') + \pi_+\pi_- p_+(x) p_-(x').\nFinally, letting \tilde{p}(x, x') = p((x, x') \mid (y, y') \in \tilde{\mathcal{Y}}) completes the proof." }, { "heading": "B PROOF OF THEOREM 2", "text": "In order to decompose the pairwise comparison data distribution into pointwise distributions, we marginalize \tilde{p}(x, x') with respect to x' or x.
Then we can obtain\n\int \tilde{p}(x, x')\,dx' = \frac{1}{\tilde{\pi}}\big(\pi_+^2 p_+(x) + \pi_-^2 p_-(x) + \pi_+\pi_- p_+(x)\big) = \frac{\pi_+}{\pi_-^2 + \pi_+} p_+(x) + \frac{\pi_-^2}{\pi_-^2 + \pi_+} p_-(x) = \tilde{p}_+(x),\nand\n\int \tilde{p}(x, x')\,dx = \frac{1}{\tilde{\pi}}\big(\pi_+^2 p_+(x') + \pi_-^2 p_-(x') + \pi_+\pi_- p_-(x')\big) = \frac{\pi_+^2}{\pi_+^2 + \pi_-} p_+(x') + \frac{\pi_-}{\pi_+^2 + \pi_-} p_-(x') = \tilde{p}_-(x'),\nwhich concludes the proof of Theorem 2." }, { "heading": "C PROOF OF LEMMA 1", "text": "Based on Theorem 2, we obtain the following linear system:\n\begin{pmatrix} \tilde{p}_+(x) \\ \tilde{p}_-(x) \end{pmatrix} = \frac{1}{\tilde{\pi}} \begin{pmatrix} \pi_+ & \pi_-^2 \\ \pi_+^2 & \pi_- \end{pmatrix} \begin{pmatrix} p_+(x) \\ p_-(x) \end{pmatrix}.\nSolving this system gives\np_+(x) = \frac{1}{\pi_+ - \pi_-\pi_+^2}\big(\tilde{\pi}\,\tilde{p}_+(x) - \pi_-\tilde{\pi}\,\tilde{p}_-(x)\big) = \frac{1}{\pi_+}\big(\tilde{p}_+(x) - \pi_-\tilde{p}_-(x)\big),\np_-(x) = \frac{1}{\pi_- - \pi_+\pi_-^2}\big(\tilde{\pi}\,\tilde{p}_-(x) - \pi_+\tilde{\pi}\,\tilde{p}_+(x)\big) = \frac{1}{\pi_-}\big(\tilde{p}_-(x) - \pi_+\tilde{p}_+(x)\big),\nwhich concludes the proof of Lemma 1." }, { "heading": "D PROOF OF THEOREM 3", "text": "It is straightforward to derive\nR(f) = \mathbb{E}_{p(x,y)}[\ell(f(x), y)] = \pi_+\mathbb{E}_{p_+(x)}[\ell(f(x),+1)] + \pi_-\mathbb{E}_{p_-(x)}[\ell(f(x),-1)]\n= \frac{\pi_+\tilde{\pi}}{\pi_+ - \pi_-\pi_+^2}\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1)] - \frac{\pi_+\pi_-\tilde{\pi}}{\pi_+ - \pi_-\pi_+^2}\mathbb{E}_{\tilde{p}_-(x')}[\ell(f(x'),+1)] + \frac{\pi_-\tilde{\pi}}{\pi_- - \pi_+\pi_-^2}\mathbb{E}_{\tilde{p}_-(x')}[\ell(f(x'),-1)] - \frac{\pi_+\pi_-\tilde{\pi}}{\pi_- - \pi_+\pi_-^2}\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),-1)] (by Lemma 1)\n= \mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1) - \pi_+\ell(f(x),-1)] + \mathbb{E}_{\tilde{p}_-(x')}[\ell(f(x'),-1) - \pi_-\ell(f(x'),+1)] = R_{PC}(f),\nwhich concludes the proof of Theorem 3." }, { "heading": "E PROOF OF THEOREM 4", "text": "First of all, we introduce the following notation:\nR^+_{PC}(f) = \mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1) - \pi_+\ell(f(x),-1)], \quad \hat{R}^+_{PC}(f) = \frac{1}{n}\sum_{i=1}^{n}\big(\ell(f(x_i),+1) - \pi_+\ell(f(x_i),-1)\big),\nR^-_{PC}(f) = \mathbb{E}_{\tilde{p}_-(x')}[\ell(f(x'),-1) - \pi_-\ell(f(x'),+1)], \quad \hat{R}^-_{PC}(f) = \frac{1}{n}\sum_{i=1}^{n}\big(\ell(f(x'_i),-1) - \pi_-\ell(f(x'_i),+1)\big).\nIn this way, we can simply write R_{PC}(f) = R^+_{PC}(f) + R^-_{PC}(f) and \hat{R}_{PC}(f) = \hat{R}^+_{PC}(f) + \hat{R}^-_{PC}(f). Then we have the following lemma.\nLemma 2. The following inequality holds:\nR(\hat{f}_{PC}) - R(f^\star) \le 2\sup_{f \in \mathcal{F}}|R^+_{PC}(f) - \hat{R}^+_{PC}(f)| + 2\sup_{f \in \mathcal{F}}|R^-_{PC}(f) - \hat{R}^-_{PC}(f)|. (9)\nProof. We can express R(\hat{f}_{PC}) - R(f^\star) as\nR(\hat{f}_{PC}) - R(f^\star) = R(\hat{f}_{PC}) - \hat{R}_{PC}(\hat{f}_{PC}) + \hat{R}_{PC}(\hat{f}_{PC}) - \hat{R}_{PC}(f^\star) + \hat{R}_{PC}(f^\star) - R(f^\star)\n= R_{PC}(\hat{f}_{PC}) - \hat{R}_{PC}(\hat{f}_{PC}) + \hat{R}_{PC}(\hat{f}_{PC}) - \hat{R}_{PC}(f^\star) + \hat{R}_{PC}(f^\star) - R_{PC}(f^\star)\n\le \sup_{f \in \mathcal{F}}|R_{PC}(f) - \hat{R}_{PC}(f)| + 0 + \sup_{f \in \mathcal{F}}|R_{PC}(f) - \hat{R}_{PC}(f)| = 2\sup_{f \in \mathcal{F}}|R_{PC}(f) - \hat{R}_{PC}(f)|\n\le 2\sup_{f \in \mathcal{F}}|R^+_{PC}(f) - \hat{R}^+_{PC}(f)| + 2\sup_{f \in \mathcal{F}}|R^-_{PC}(f) - \hat{R}^-_{PC}(f)|,\nwhere the second equality holds due to Theorem 3 (R(f) = R_{PC}(f)), and the middle term is non-positive because \hat{f}_{PC} minimizes \hat{R}_{PC}.\nAs suggested by Lemma 2, we need to further upper bound the right-hand side of Eq. (9). Before doing that, we introduce the uniform deviation bound, which is useful for deriving estimation error bounds; its proof can be found in textbooks such as Mohri et al. (2012) (Theorem 3.1).\nLemma 3. Let Z be a random variable drawn from a probability distribution with density \mu, let \mathcal{H} = \{h : \mathcal{Z} \to [0, M]\} (M > 0) be a class of measurable functions, and let \{z_i\}_{i=1}^{n} be i.i.d. examples drawn from the distribution with density \mu. Then, for any \delta > 0, with probability at least 1 - \delta,\n\sup_{h \in \mathcal{H}}\Big|\mathbb{E}_{Z \sim \mu}[h(Z)] - \frac{1}{n}\sum_{i=1}^{n} h(z_i)\Big| \le 2\mathfrak{R}_n(\mathcal{H}) + M\sqrt{\frac{\log(2/\delta)}{2n}},\nwhere \mathfrak{R}_n(\mathcal{H}) denotes the (expected) Rademacher complexity (Bartlett & Mendelson, 2002) of \mathcal{H} with sample size n over \mu.\nLemma 4. Suppose the loss function \ell is \rho-Lipschitz with respect to the first argument (0 < \rho < \infty), and all functions in the model class \mathcal{F} are bounded, i.e., there exists a constant C_b such that \|f\|_\infty \le C_b for any f \in \mathcal{F}. Let C_\ell := \sup_{t = \pm 1}\ell(C_b, t). For any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}|R^+_{PC}(f) - \hat{R}^+_{PC}(f)| \le (1 + \pi_+)\,2\rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}) + (1 + \pi_+)\,C_\ell\sqrt{\frac{\log(4/\delta)}{2n}}.\nProof.
By the definition of R^+_{PC}(f) and \hat{R}^+_{PC}(f), we can obtain\n\sup_{f \in \mathcal{F}}|R^+_{PC}(f) - \hat{R}^+_{PC}(f)| \le \sup_{f \in \mathcal{F}}\Big|\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1)] - \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),+1)\Big| + \pi_+\sup_{f \in \mathcal{F}}\Big|\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),-1)] - \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),-1)\Big|. (10)\nBy applying Lemma 3, for any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}\Big|\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),+1)] - \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),+1)\Big| \le 2\tilde{\mathfrak{R}}^+_n(\ell \circ \mathcal{F}) + C_\ell\sqrt{\frac{\log(2/\delta)}{2n}}, (11)\nand, for any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}\Big|\mathbb{E}_{\tilde{p}_+(x)}[\ell(f(x),-1)] - \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),-1)\Big| \le 2\tilde{\mathfrak{R}}^+_n(\ell \circ \mathcal{F}) + C_\ell\sqrt{\frac{\log(2/\delta)}{2n}}, (12)\nwhere \ell \circ \mathcal{F} denotes \{\ell \circ f \mid f \in \mathcal{F}\}. By Talagrand's lemma (Lemma 4.2 in Mohri et al. (2012)),\n\tilde{\mathfrak{R}}^+_n(\ell \circ \mathcal{F}) \le \rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}). (13)\nFinally, by combining Eqs. (10), (11), (12), and (13), we have, for any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}|R^+_{PC}(f) - \hat{R}^+_{PC}(f)| \le (1 + \pi_+)\,2\rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}) + (1 + \pi_+)\,C_\ell\sqrt{\frac{\log(4/\delta)}{2n}}, (14)\nwhich concludes the proof of Lemma 4.\nLemma 5. Suppose the loss function \ell is \rho-Lipschitz with respect to the first argument (0 < \rho < \infty), and all functions in the model class \mathcal{F} are bounded, i.e., there exists a constant C_b such that \|f\|_\infty \le C_b for any f \in \mathcal{F}. Let C_\ell := \sup_{t = \pm 1}\ell(C_b, t). For any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}|R^-_{PC}(f) - \hat{R}^-_{PC}(f)| \le (1 + \pi_-)\,2\rho\,\tilde{\mathfrak{R}}^-_n(\mathcal{F}) + (1 + \pi_-)\,C_\ell\sqrt{\frac{\log(4/\delta)}{2n}}.\nProof. Lemma 5 can be proved similarly to Lemma 4.\nBy combining Lemma 2, Lemma 4, and Lemma 5, Theorem 4 is proved." }, { "heading": "F PROOF OF THEOREM 5", "text": "Suppose there are n pairs of data points, i.e., 2n data points in total. For our Pcomp classification problem, we simply regard x sampled from \tilde{p}_+(x) as (noisy) positive data and x' sampled from \tilde{p}_-(x') as (noisy) negative data. Given n pairs of examples \{(x_i, x'_i)\}_{i=1}^{n}, among the n observed positive examples there are actually n\,p(y = +1 \mid \tilde{y} = +1) true positive examples, and among the n observed negative examples there are actually n\,p(y = -1 \mid \tilde{y} = -1) true negative examples. From our defined data generation process in Theorem 1, it is straightforward to obtain\np(y = +1 \mid \tilde{y} = +1) = \frac{\pi_+^2 + \pi_+\pi_-}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, \quad p(y = -1 \mid \tilde{y} = -1) = \frac{\pi_-^2 + \pi_+\pi_-}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}.\nSince \phi_+ = p(y = -1 \mid \tilde{y} = +1) = 1 - p(y = +1 \mid \tilde{y} = +1) and \phi_- = p(y = +1 \mid \tilde{y} = -1) = 1 - p(y = -1 \mid \tilde{y} = -1), we obtain\n\phi_+ = \frac{\pi_-^2}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, \quad \phi_- = \frac{\pi_+^2}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}.\nIn this way, we can further obtain the following noise rates:\n\rho_+ = p(\tilde{y} = -1 \mid y = +1) = \frac{p(y = +1 \mid \tilde{y} = -1)\,p(\tilde{y} = -1)}{p(y = +1)} = \frac{\pi_+}{2(\pi_+^2 + \pi_-^2 + \pi_+\pi_-)},\n\rho_- = p(\tilde{y} = +1 \mid y = -1) = \frac{p(y = -1 \mid \tilde{y} = +1)\,p(\tilde{y} = +1)}{p(y = -1)} = \frac{\pi_-}{2(\pi_+^2 + \pi_-^2 + \pi_+\pi_-)},\nwhere p(\tilde{y} = +1) = p(\tilde{y} = -1) = 1/2, because we have the same number of observed positive and negative examples." }, { "heading": "G PROOF OF THEOREM 7", "text": "First of all, we introduce the following notation:\nR^+_{pPC}(f) = \mathbb{E}_{\tilde{p}_+(x)}\big[\ell(f(x),+1)\,\mathbb{I}[x \in \tilde{P}_{sel}]\big], \quad \hat{R}^+_{pPC}(f) = \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),+1)\,\mathbb{I}[x_i \in \tilde{P}_{sel}],\nR^-_{pPC}(f) = \mathbb{E}_{\tilde{p}_-(x')}\big[\ell(f(x'),-1)\,\mathbb{I}[x' \in \tilde{N}_{sel}]\big], \quad \hat{R}^-_{pPC}(f) = \frac{1}{n}\sum_{i=1}^{n}\ell(f(x'_i),-1)\,\mathbb{I}[x'_i \in \tilde{N}_{sel}].\nIn this way, we can simply write\nR_{pPC}(f) = \frac{1}{1-\rho_+}R^+_{pPC}(f) + \frac{1}{1-\rho_-}R^-_{pPC}(f), \quad \hat{R}_{pPC}(f) = \frac{1}{1-\rho_+}\hat{R}^+_{pPC}(f) + \frac{1}{1-\rho_-}\hat{R}^-_{pPC}(f).\nThen we have the following lemma.\nLemma 6. The following inequality holds:\nR(\hat{f}_{pPC}) - R(f^\star) \le \frac{2}{1-\rho_+}\sup_{f \in \mathcal{F}}|R^+_{pPC}(f) - \hat{R}^+_{pPC}(f)| + \frac{2}{1-\rho_-}\sup_{f \in \mathcal{F}}|R^-_{pPC}(f) - \hat{R}^-_{pPC}(f)|. (15)\nProof.
We omit the proof of Lemma 6 since it is quite similar to that of Lemma 2.\nAs suggested by Lemma 6, we need to further upper bound the right-hand side of Eq. (15). According to Lemma 3, we have the following two lemmas.\nLemma 7. Suppose the loss function \ell is \rho-Lipschitz with respect to the first argument (0 < \rho < \infty), and all functions in the model class \mathcal{F} are bounded, i.e., there exists a constant C_b such that \|f\|_\infty \le C_b for any f \in \mathcal{F}. Let C_\ell := \sup_{z \le C_b, t = \pm 1}\ell(z, t). For any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}|R^+_{pPC}(f) - \hat{R}^+_{pPC}(f)| \le 2\rho\,\tilde{\mathfrak{R}}^+_n(\mathcal{F}) + C_\ell\sqrt{\frac{\log(2/\delta)}{2n}}.\nLemma 8. Under the same conditions as Lemma 7, for any \delta > 0, with probability at least 1 - \delta,\n\sup_{f \in \mathcal{F}}|R^-_{pPC}(f) - \hat{R}^-_{pPC}(f)| \le 2\rho\,\tilde{\mathfrak{R}}^-_n(\mathcal{F}) + C_\ell\sqrt{\frac{\log(2/\delta)}{2n}}.\nWe omit the proofs of Lemma 7 and Lemma 8 since they are similar to that of Lemma 4.\nBy combining Lemma 6, Lemma 7, and Lemma 8, Theorem 7 is proved." }, { "heading": "H SUPPLEMENTARY INFORMATION OF EXPERIMENTS", "text": "Table 3 reports the specifications of the benchmark datasets and the corresponding models.\nMNIST(2) (LeCun et al., 1998). This is a grayscale image dataset of handwritten digits from 0 to 9, where each image has size 28 × 28. It contains 60,000 training images and 10,000 test images. Because the original dataset has 10 classes, we regard the even digits as the positive class and the odd digits as the negative class.\nFashion-MNIST(3) (Xiao et al., 2017). Similarly to MNIST, this is a grayscale image dataset, composed of fashion items ('T-shirt', 'trouser', 'pullover', 'dress', 'sandal', 'coat', 'shirt', 'sneaker', 'bag', and 'ankle boot'). It contains 60,000 training examples and 10,000 test examples. It is converted into a binary classification dataset as follows:\n• The positive class is formed by 'T-shirt', 'pullover', 'coat', 'shirt', and 'bag'.\n• The negative class is formed by 'trouser', 'dress', 'sandal', 'sneaker', and 'ankle boot'.\nKuzushiji-MNIST(4) (Clanuwat et al., 2018). This is another grayscale image dataset similar to MNIST: a 10-class dataset of cursive Japanese (\"Kuzushiji\") characters. It consists of 60,000 training images and 10,000 test images. It is converted into a binary classification dataset as follows:\n• The positive class is formed by 'o', 'su', 'na', 'ma', and 're'.\n• The negative class is formed by 'ki', 'tsu', 'ha', 'ya', and 'wo'.\nCIFAR-10(5) (Krizhevsky et al., 2009). This is a color image dataset of 10 different objects ('airplane', 'bird', 'automobile', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and 'truck'), where each image has size 32 × 32 × 3. There are 5,000 training images and 1,000 test images per class. This dataset is converted into a binary classification dataset as follows:\n• The positive class is formed by 'bird', 'deer', 'dog', 'frog', 'cat', and 'horse'.\n• The negative class is formed by 'airplane', 'automobile', 'ship', and 'truck'.\n(2) http://yann.lecun.com/exdb/mnist/ (3) https://github.com/zalandoresearch/fashion-mnist (4) https://github.com/rois-codh/kmnist (5) https://www.cs.toronto.edu/~kriz/cifar.html\nUSPS, Pendigits, Optdigits. These datasets are composed of handwritten digits from 0 to 9. Because each of the original datasets has 10 classes, we regard the even digits as the positive class and the odd digits as the negative class.\nCNAE-9.
This dataset contains 1,080 documents of free-text business descriptions of Brazilian companies, categorized into a subset of 9 categories cataloged in a table called the National Classification of Economic Activities.\n• The positive class is formed by classes '2', '4', '6', and '8'.\n• The negative class is formed by classes '1', '3', '5', '7', and '9'.\nFor MNIST, Kuzushiji-MNIST, and Fashion-MNIST, we set the learning rate to 1e−3 and the weight decay to 1e−5. For CIFAR-10, we set the learning rate to 1e−3 and the weight decay to 1e−3. We also list the number of pointwise corrupted examples used for model training on each dataset: 30,000 for MNIST, Kuzushiji-MNIST, Fashion-MNIST, and CIFAR-10; 4,000 for USPS; 5,000 for Pendigits; 2,000 for Optdigits; and 400 for CNAE-9." } ]
2020
null
SP:0685dd85f87da44ee57de28dd64c6c06181cdc65
[ "The paper proposes an approach to employ successor representation combined with marginalized importance sampling. The basic idea exploited in the paper consists of expressing the occupancies in terms of the successor representation and to model it via a linear combination of some features. This allows handling, although approximately, continuous state-action spaces. After having derived the objective function, an experimental evaluation on both Mujoco and Atari domains is presented, including an ablation study." ]
Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
[]
[ { "authors": [ "Leemon Baird" ], "title": "Residual algorithms: Reinforcement learning with function approximation", "venue": "In Machine Learning Proceedings", "year": 1995 }, { "authors": [ "André Barreto", "Will Dabney", "Rémi Munos", "Jonathan J Hunt", "Tom Schaul", "Hado P van Hasselt", "David Silver" ], "title": "Successor features for transfer in reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Andre Barreto", "Diana Borsa", "John Quan", "Tom Schaul", "David Silver", "Matteo Hessel", "Daniel Mankowitz", "Augustin Zidek", "Remi Munos" ], "title": "Transfer in deep reinforcement learning using successor features and generalised policy improvement", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Richard Bellman" ], "title": "Dynamic Programming", "venue": null, "year": 1957 }, { "authors": [ "Dimitri P Bertsekas", "John N. Tsitsiklis" ], "title": "Neuro-Dynamic Programming", "venue": "Athena scientific Belmont, MA,", "year": 1996 }, { "authors": [ "Pablo Samuel Castro", "Subhodeep Moitra", "Carles Gelada", "Saurabh Kumar", "Marc G Bellemare" ], "title": "Dopamine: A research framework for deep reinforcement learning", "venue": "arXiv preprint arXiv:1812.06110,", "year": 2018 }, { "authors": [ "Peter Dayan" ], "title": "Improving generalization for temporal difference learning: The successor representation", "venue": "Neural Computation,", "year": 1993 }, { "authors": [ "Miroslav Dudı́k", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and learning", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Mehrdad Farajtabar", "Yinlam Chow", "Mohammad Ghavamzadeh" ], "title": "More robust doubly robust off-policy evaluation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Edoardo Conti", "Mohammad Ghavamzadeh", "Joelle Pineau" ], "title": "Benchmarking batch deep reinforcement learning algorithms", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jason Gauci", "Edoardo Conti", "Yitao Liang", "Kittipat Virochsiri", "Yuchen He", "Zachary Kaden", "Vivek Narayanan", "Xiaohui Ye", "Zhengxing Chen", "Scott Fujimoto" ], "title": "Horizon: Facebook’s open source applied reinforcement learning platform", "venue": "arXiv preprint arXiv:1811.00260,", "year": 2018 }, { "authors": [ "Carles Gelada", "Marc G Bellemare" ], "title": "Off-policy deep reinforcement learning by bootstrapping the covariate shift", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Samuel J Gershman" ], "title": "The successor representation: its computational logic and 
neural substrates", "venue": "Journal of Neuroscience,", "year": 2018 }, { "authors": [ "Samuel J Gershman", "Christopher D Moore", "Michael T Todd", "Kenneth A Norman", "Per B Sederberg" ], "title": "The successor representation and temporal context", "venue": "Neural Computation,", "year": 2012 }, { "authors": [ "Christopher Grimm", "Irina Higgins", "Andre Barreto", "Denis Teplyashin", "Markus Wulfmeier", "Tim Hertweck", "Raia Hadsell", "Satinder Singh" ], "title": "Disentangled cumulants help successor representations transfer to new tasks", "venue": null, "year": 1911 }, { "authors": [ "Assaf Hallak", "Shie Mannor" ], "title": "Consistent on-line off-policy evaluation", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "arXiv preprint arXiv:1710.02298,", "year": 2017 }, { "authors": [ "Ehsan Imani", "Eric Graves", "Martha White" ], "title": "An off-policy policy gradient theorem using emphatic weightings", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Janz", "Jiri Hron", "Przemysław Mazur", "Katja Hofmann", "José Miguel Hernández-Lobato", "Sebastian Tschiatschek" ], "title": "Successor uncertainties: exploration and uncertainty in temporal difference learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy value evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum", "Jonathan Tompson" ], "title": "Imitation learning via off-policy distribution matching", "venue": "arXiv preprint arXiv:1912.05032,", "year": 2019 }, { "authors": [ "Tejas D Kulkarni", "Ardavan Saeedi", "Simanta Gautam", "Samuel J Gershman" ], "title": "Deep successor reinforcement learning", "venue": "arXiv preprint arXiv:1606.02396,", "year": 2016 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Timothée Lesort", "Natalia" ], "title": "Dı́az-Rodrı́guez, Jean-Franois Goudou, and David Filliat. 
State representation learning for control: An overview", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Lihong Li", "Remi Munos", "Csaba Szepesvari" ], "title": "Toward minimax off-policy value estimation", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yao Liu", "Pierre-Luc Bacon", "Emma Brunskill" ], "title": "Understanding the curse of horizon in off-policy evaluation via conditional importance sampling", "venue": "arXiv preprint arXiv:1910.06508,", "year": 2019 }, { "authors": [ "Yao Liu", "Adith Swaminathan", "Alekh Agarwal", "Emma Brunskill" ], "title": "Off-policy policy gradient with state distribution correction", "venue": "arXiv preprint arXiv:1904.08473,", "year": 2019 }, { "authors": [ "Marlos C Machado", "Clemens Rosenbaum", "Xiaoxiao Guo", "Miao Liu", "Gerald Tesauro", "Murray Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv preprint arXiv:1710.11089,", "year": 2017 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Michael Bowling" ], "title": "Count-based exploration with the successor representation", "venue": "arXiv preprint arXiv:1807.11622,", "year": 2018 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Ashique Rupam Mahmood", "Huizhen Yu", "Richard S Sutton" ], "title": "Multi-step off-policy learning without importance sampling ratios", "venue": "arXiv preprint arXiv:1702.03006,", "year": 2017 }, { "authors": [ "Travis Mandel", "Yun-En Liu", "Sergey Levine", "Emma Brunskill", "Zoran Popovic" ], "title": "Offline policy evaluation across representations with applications to educational games", "venue": "In International Conference on Autonomous Agents and Multiagent Systems,", "year": 2014 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Ida Momennejad", "Evan M Russek", "Jin H Cheong", "Matthew M Botvinick", "Nathaniel Douglass Daw", "Samuel J Gershman" ], "title": "The successor representation in human reinforcement learning", "venue": "Nature Human Behaviour,", "year": 2017 }, { "authors": [ "Ali Mousavi", "Lihong Li", "Qiang Liu", "Denny Zhou" ], "title": "Black-box off-policy estimation for infinitehorizon reinforcement learning", "venue": "arXiv preprint arXiv:2003.11126,", "year": 2020 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in 
Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Bo Dai", "Ilya Kostrikov", "Yinlam Chow", "Lihong Li", "Dale Schuurmans" ], "title": "Algaedice: Policy gradient from arbitrary experience", "venue": "arXiv preprint arXiv:1912.02074,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Doina Precup", "Richard S Sutton", "Sanjoy Dasgupta" ], "title": "Off-policy temporal-difference learning with function approximation", "venue": "In International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "R Tyrrell Rockafellar" ], "title": "Convex analysis", "venue": "Princeton university press,", "year": 1970 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Samarth Sinha", "Jiaming Song", "Animesh Garg", "Stefano Ermon" ], "title": "Experience replay with likelihood-free importance weights", "venue": "arXiv preprint arXiv:2006.13169,", "year": 2020 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction, volume 1", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Richard S Sutton", "Brian Tanner" ], "title": "Temporal-difference networks. 
In Advances in neural information processing", "venue": null, "year": 2005 }, { "authors": [ "Richard S Sutton", "Hamid Reza Maei", "Doina Precup", "Shalabh Bhatnagar", "David Silver", "Csaba Szepesvári", "Eric Wiewiora" ], "title": "Fast gradient-descent methods for temporal-difference learning with linear function approximation", "venue": "In International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Richard S Sutton", "Joseph Modayil", "Michael Delp", "Thomas Degris", "Patrick M Pilarski", "Adam White", "Doina Precup" ], "title": "Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "Richard S Sutton", "A Rupam Mahmood", "Martha White" ], "title": "An emphatic approach to the problem of off-policy temporal-difference learning", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Adith Swaminathan", "Akshay Krishnamurthy", "Alekh Agarwal", "Miro Dudik", "John Langford", "Damien Jose", "Imed Zitouni" ], "title": "Off-policy evaluation for slate recommendation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Ahmed Touati", "Amy Zhang", "Joelle Pineau", "Pascal Vincent" ], "title": "Stable policy optimization via offpolicy divergence regularization", "venue": "arXiv preprint arXiv:2003.04108,", "year": 2020 }, { "authors": [ "Masatoshi Uehara", "Nan Jiang" ], "title": "Minimax weight and q-function learning for off-policy evaluation", "venue": "arXiv preprint arXiv:1910.12809,", "year": 2019 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Tao Wang", "Michael Bowling", "Dale Schuurmans" ], "title": "Dual representations for dynamic programming and reinforcement learning", "venue": "IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning,", "year": 2007 }, { "authors": [ "Tao Wang", "Michael Bowling", "Dale Schuurmans", "Daniel J Lizotte" ], "title": "Stable dual dynamic programming", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Yu-Xiang Wang", "Alekh Agarwal", "Miroslav Dudı́k" ], "title": "Optimal and adaptive off-policy evaluation in contextual bandits", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Eric Wiewiora", "Garrison W Cottrell", "Charles Elkan" ], "title": "Principled methods for advising reinforcement learning agents", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Tengyang Xie", "Yifei Ma", "Yu-Xiang Wang" ], "title": "Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ 
"Jingwei Zhang", "Jost Tobias Springenberg", "Joschka Boedecker", "Wolfram Burgard" ], "title": "Deep reinforcement learning with successor features for navigation across similar environments", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Ruiyi Zhang", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Gendice: Generalized offline estimation of stationary values", "venue": "arXiv preprint arXiv:2002.09072,", "year": 2020 }, { "authors": [ "Shangtong Zhang", "Wendelin Boehmer", "Shimon Whiteson" ], "title": "Generalized off-policy actor-critic", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shangtong Zhang", "Wendelin Boehmer", "Shimon Whiteson" ], "title": "Deep residual reinforcement learning", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Shangtong Zhang", "Bo Liu", "Shimon Whiteson" ], "title": "Gradientdice: Rethinking generalized offline estimation of stationary values", "venue": "arXiv preprint arXiv:2001.11113,", "year": 2020 }, { "authors": [ "Yufan Zhao", "Michael R Kosorok", "Donglin Zeng" ], "title": "Reinforcement learning design for cancer clinical trials", "venue": "Statistics in medicine,", "year": 2009 }, { "authors": [ "Yuke Zhu", "Daniel Gordon", "Eric Kolve", "Dieter Fox", "Li Fei-Fei", "Abhinav Gupta", "Roozbeh Mottaghi", "Ali Farhadi" ], "title": "Visual semantic planning using deep successor representations", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Off-policy evaluation (OPE) is a reinforcement learning (RL) task where the aim is to measure the performance of a target policy from data collected by a separate behavior policy (Sutton & Barto, 1998). As it can often be difficult or costly to obtain new data, OPE offers an avenue for re-using previously stored data, making it an important challenge for applying RL to real-world domains (Zhao et al., 2009; Mandel et al., 2014; Swaminathan et al., 2017; Gauci et al., 2018).\nMarginalized importance sampling (MIS) (Liu et al., 2018; Xie et al., 2019; Nachum et al., 2019a) is a family of OPE methods which re-weight sampled rewards by directly learning the density ratio between the state-action occupancy of the target policy and the sampling distribution. This approach can have significantly lower variance than traditional importance sampling methods (Precup et al., 2001), which consider a product of ratios over trajectories, and is amenable to deterministic policies and behavior agnostic settings where the sampling distribution is unknown. However, the body of MIS work is largely theoretical, and as a result, empirical evaluations of MIS have mostly been carried out on simple low-dimensional tasks, such as mountain car (state dim. of 2) or cartpole (state dim. of 4). In comparison, deep RL algorithms have shown successful behaviors in high-dimensional domains such as Humanoid locomotion (state dim. of 376) and Atari (image-based).\nIn this paper, we present a straightforward approach for MIS that can be computed from the successor representation (SR) of the target policy. Our algorithm, the Successor Representation DIstribution Correction Estimation (SR-DICE), is the first method that allows MIS to scale to highdimensional systems, far outperforming previous approaches. In comparison to previous algorithms which rely on minimax optimization or kernel methods (Liu et al., 2018; Nachum et al., 2019a; Uehara & Jiang, 2019; Mousavi et al., 2020), SR-DICE requires only a simple convex loss applied to the linear function determining the reward, after computing the SR. Similar to the deep RL methods which can learn in high-dimensional domains, the SR can be computed easily using behavior-agnostic temporal-difference (TD) methods. This makes our algorithm highly amenable to deep learning architectures and applicable to complex tasks.\nOur derivation of SR-DICE also reveals an interesting connection between MIS methods and value function learning. The key motivation for MIS methods is, unlike traditional importance sampling methods, they can avoid variance with an exponential dependence on horizon, by re-weighting individual transitions rather than accumulating ratios along entire trajectories. We remark that while the MIS ratios only consider individual transitions, the optimization procedure is still subject to the\ndynamics of the underlying MDP. Subsequently, we use this insight to show a connection between a well-known MIS method, DualDICE (Nachum et al., 2019a), and Bellman residual minimization (Bellman, 1957; Baird, 1995), which can help explain some of the optimization properties and performance of DualDICE, as well as other related MIS methods.\nWe benchmark the performance of SR-DICE on several high-dimensional domains in MuJoCo (Todorov et al., 2012) and Atari (Bellemare et al., 2013), against several recent MIS methods. 
Our results demonstrate two key findings regarding high-dimensional tasks.\nSR-DICE significantly outperforms the benchmark algorithms. We attribute this performance gap to SR-DICE's deep RL components, outperforming the MIS baselines in the same way that deep RL outperforms traditional methods on high-dimensional domains. Unfortunately, part of this performance gap is due to the fact that the baseline MIS methods scale poorly to challenging tasks. In Atari we find that the baseline MIS methods exhibit unstable estimates, often reaching errors spanning many orders of magnitude.\nMIS underperforms deep RL. Although SR-DICE achieves high performance, we find its errors are bounded by the quality of the SR. Consequently, we find that SR-DICE and the standard SR achieve a similar performance across all tasks. Worse, we find that using a deep TD method comparable to DQN (Mnih et al., 2015) for policy evaluation outperforms both methods. Although the performance gap is minimal, for OPE there is no convincing argument for SR-DICE, or any current MIS method, as they introduce unnecessary complexity. However, this does not mean MIS is useless. We remark that the density ratios themselves are an independent objective, which have been used for applications such as policy regularization (Nachum et al., 2019b; Touati et al., 2020), imitation learning (Kostrikov et al., 2019), off-policy policy gradients (Imani et al., 2018; Liu et al., 2019b; Zhang et al., 2019), and non-uniform sampling (Sinha et al., 2020). SR-DICE serves as a stable, scalable approach for computing these ratios. We provide extensive experimental details in the supplementary material, and our code is made available." }, { "heading": "2 BACKGROUND", "text": "Reinforcement Learning. RL is a framework for maximizing the accumulated reward of an agent interacting with its environment (Sutton & Barto, 1998). This problem is typically framed as a Markov Decision Process (MDP) (\mathcal{S}, \mathcal{A}, R, p, d_0, \gamma), with state space \mathcal{S}, action space \mathcal{A}, reward function R, dynamics model p, initial state distribution d_0, and discount factor \gamma. An agent selects actions according to a policy \pi : \mathcal{S} \times \mathcal{A} \to [0, 1]. In this paper we address the off-policy evaluation (OPE) problem, where the aim is to measure the normalized expected per-step reward of the policy, R(\pi) = (1-\gamma)\mathbb{E}_\pi[\sum_{t=0}^{\infty}\gamma^t r(s_t, a_t)]. An important notion in OPE is the value function Q^\pi(s, a) = \mathbb{E}_\pi[\sum_{t=0}^{\infty}\gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a], which measures the expected sum of discounted rewards when following \pi, starting from (s, a).\nWe define d^\pi(s, a) as the discounted state-action occupancy, the probability of seeing (s, a) under policy \pi with discount \gamma: d^\pi(s, a) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t \int_{s_0} d_0(s_0)\,p_\pi(s_0 \to s, t)\,\pi(a \mid s)\,ds_0, where p_\pi(s_0 \to s, t) is the probability of arriving at state s after t time steps when starting from an initial state s_0. This distribution is important, as R(\pi) equals the expected reward r(s, a) under d^\pi:\nR(\pi) = \mathbb{E}_{(s,a)\sim d^\pi, r}[r(s, a)]. (1)\nSuccessor Representation. The successor representation (SR) (Dayan, 1993) of a policy is a measure of occupancy of future states. It can be viewed as a general value function that learns a vector of the expected discounted visitations for each state. The SR \Psi^\pi of a given policy \pi is defined as \Psi^\pi(s' \mid s) = \mathbb{E}_\pi[\sum_{t=0}^{\infty}\gamma^t \mathbb{I}(s_t = s') \mid s_0 = s]. Importantly, the value function can be recovered from the SR by summing over the expected reward of each state: V^\pi(s) = \sum_{s'}\Psi^\pi(s' \mid s)\,\mathbb{E}_{a'\sim\pi}[r(s', a')]. For infinite state and action spaces, the SR can instead be generalized to the expected occupancy over features, known as the deep SR (Kulkarni et al., 2016) or successor features (Barreto et al., 2017). For a given encoding function \phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^n, the deep SR \psi^\pi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^n is defined as the expected discounted sum of features from the encoding function \phi when starting from a given state-action pair and following \pi:\n\psi^\pi(s, a) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty}\gamma^t \phi(s_t, a_t) \,\Big|\, s_0 = s, a_0 = a\Big]. (2)\nIf the encoding \phi(s, a) is learned such that the original reward function is a linear function of the encoding, r(s, a) = w^\top\phi(s, a), then, similar to the original formulation of the SR, the value function can be recovered from a linear function of the deep SR: Q^\pi(s, a) = w^\top\psi^\pi(s, a). The deep SR network \psi^\pi is trained to minimize the MSE between \psi^\pi(s, a) and \phi(s, a) + \gamma\psi'(s', a') on transitions (s, a, s') sampled from the data set. A frozen target network \psi' is used to provide stability (Mnih et al., 2015; Kulkarni et al., 2016), and is updated to the current network, \psi' \leftarrow \psi^\pi, after a fixed number of time steps. The encoding function \phi is typically trained by an encoder-decoder network (Kulkarni et al., 2016; Machado et al., 2017; 2018a).
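A sketch of one deep SR update on a batch of transitions, mirroring the MSE target \phi(s, a) + \gamma\psi'(s', a') described above; the network, policy, and optimizer objects are placeholders rather than a prescribed interface.

```python
import torch
import torch.nn.functional as F

def deep_sr_update(psi, psi_target, phi, batch, policy, optimizer, gamma=0.99):
    # batch: tensors (s, a, s_next); a_next is drawn from the target policy at s_next
    s, a, s_next = batch
    with torch.no_grad():
        a_next = policy(s_next)
        target = phi(s, a) + gamma * psi_target(s_next, a_next)
    loss = F.mse_loss(psi(s, a), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```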
For infinite state and action spaces, the SR can instead be generalized to the expected occupancy over features, known as the deep SR (Kulkarni et al., 2016) or successor features (Barreto et al., 2017). For a given encoding function φ : S × A → Rn, the deep SR ψπ : S × A → Rn is defined as the expected discounted sum over features from the encoding function φ when starting from a given state-action pair and following π:
ψπ(s, a) = Eπ[∑∞t=0 γt φ(st, at) | s0 = s, a0 = a]. (2)
If the encoding φ(s, a) is learned such that the original reward function is a linear function of the encoding, r(s, a) = w>φ(s, a), then similar to the original formulation of SR, the value function can be recovered from a linear function of the SR: Qπ(s, a) = w>ψπ(s, a). The deep SR network ψπ is trained to minimize the MSE between ψπ(s, a) and φ(s, a) + γψ′(s′, a′) on transitions (s, a, s′) sampled from the data set. A frozen target network ψ′ is used to provide stability (Mnih et al., 2015; Kulkarni et al., 2016), and is updated to the current network ψ′ ← ψπ after a fixed number of time steps. The encoding function φ is typically trained by an encoder-decoder network (Kulkarni et al., 2016; Machado et al., 2017; 2018a).
Marginalized Importance Sampling. Marginalized importance sampling (MIS) is a family of importance sampling approaches for off-policy evaluation in which the performance R(π) is evaluated by re-weighting rewards sampled from a data set D = {(s, a, r, s′)} ∼ p(s′|s, a)dD(s, a), where dD is an arbitrary distribution, typically, but not necessarily, induced by some behavior policy. It follows that R(π) can be computed with importance sampling weights dπ(s, a)/dD(s, a) on the rewards:
R(π) = E(s,a)∼dD,r[(dπ(s, a)/dD(s, a)) r(s, a)]. (3)
The goal of marginalized importance sampling methods is to learn the weights w(s, a) ≈ dπ(s, a)/dD(s, a), using data contained in D. The main benefit of MIS is that unlike traditional importance sampling methods, the ratios are applied to individual transitions rather than complete trajectories, which can reduce the variance of long or infinite horizon problems. In other cases, the ratios themselves can be used for a variety of applications which require estimating the occupancy of state-action pairs.
DualDICE. Dual stationary DIstribution Correction Estimation (DualDICE) (Nachum et al., 2019a) is a well-known MIS method which uses a minimax optimization to learn the density ratios. The underlying objective which DualDICE aims to minimize is the following:
minf J(f) := 1/2 E(s,a)∼dD[(f(s, a) − γEs′,π[f(s′, a′)])2] − (1 − γ)Es0,a0∼π[f(s0, a0)]. (4)
It can be shown that Equation (4) is uniquely optimized by the MIS density ratio. However, since f(s, a) − γEπ[f(s′, a′)] is dependent on transitions (s, a, s′), there are two practical issues with this underlying objective. First, the objective contains a square within an expectation, giving rise to the double-sampling problem (Baird, 1995), where the gradient will be biased when using only a single sample of (s, a, s′). Second, computing f(s, a) − γEs′,π[f(s′, a′)] for arbitrary state-action pairs, particularly those not contained in the data set, is non-trivial, as it relies on an expectation over succeeding states, which is generally inaccessible without a model of the environment. 
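To make these two issues concrete, consider the following hypothetical PyTorch sketch of a single-sample estimate of Equation (4); the function and variable names are ours, and the comments mark where the bias enters:

```python
import torch

def naive_dualdice_loss(f, s, a, s_next, a_next, s0, a0, gamma=0.99):
    # Single-sample estimate of Equation (4). The inner expectation over s'
    # is replaced by one sampled transition before squaring, so the gradient
    # of the first term is biased (the double-sampling problem), and the
    # term cannot be evaluated for pairs with no sampled successor.
    bellman_like = f(s, a) - gamma * f(s_next, a_next)
    return 0.5 * (bellman_like ** 2).mean() - (1 - gamma) * f(s0, a0).mean()

# Toy usage with an arbitrary differentiable f on 4-dim states, 2-dim actions.
f = lambda s, a: torch.cat([s, a], dim=1).sum(dim=1)
s, s_next, s0 = (torch.randn(32, 4) for _ in range(3))
a, a_next, a0 = (torch.randn(32, 2) for _ in range(3))
loss = naive_dualdice_loss(f, s, a, s_next, a_next, s0, a0)
```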
To address both concerns, DualDICE uses Fenchel duality (Rockafellar, 1970) to create the following minimax optimization problem:
minf maxw J(f, w) := E(s,a)∼dD,a′∼π,s′[w(s, a)(f(s, a) − γf(s′, a′)) − 0.5w(s, a)2] − (1 − γ)Es0,a0[f(s0, a0)]. (5)
Similar to the original formulation, Equation (4), it can be shown that Equation (5) is minimized when w(s, a) is the desired density ratio." }, { "heading": "3 A REWARD FUNCTION PERSPECTIVE ON DISTRIBUTION CORRECTIONS", "text": "In this section, we present our behavior-agnostic approach to estimating MIS ratios, called the Successor Representation DIstribution Correction Estimation (SR-DICE). Our main insight is that MIS can be viewed as an optimization over the reward function, where the loss is uniquely optimized when the reward is the desired density ratio. We then apply our reward function perspective to a well-known MIS method, DualDICE (Nachum et al., 2019a), which enables us to observe difficulties in the optimization process and better understand related methods. All proofs for this section are left to Appendix A." }, { "heading": "3.1 THE SUCCESSOR REPRESENTATION DICE", "text": "We will now derive our MIS approach. Our derivation shows that by treating MIS as reward function optimization, the desired density ratios can be obtained in a straightforward manner from the SR of the target policy. This pushes the challenging aspect of learning onto the computation of the SR, rather than optimizing the density ratio estimate. Furthermore, when tackling high-dimensional tasks, we can leverage deep RL approaches (Mnih et al., 2015; Kulkarni et al., 2016) to make learning the SR stable, giving rise to a practical MIS method.
Our aim is to determine the MIS ratios dπ(s, a)/dD(s, a), using only data sampled from the data set D and the policy π. This presents a challenge as we have direct access to neither dπ nor dD. As a starting point, we begin by following the derivation of DualDICE (Nachum et al., 2019a). We first consider the convex function (1/2)mx2 − nx, which is uniquely minimized by x∗ = n/m. Now by replacing x with r̂(s, a), m with dD(s, a), and n with dπ(s, a), we have reformulated the convex function as the following objective:
minr̂(s,a)∀(s,a) J(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − E(s,a)∼dπ[r̂(s, a)]. (6)
While this objective is still impractical as it relies on expectations over both dD and dπ, from Nachum et al. (2019a) we can state the following about Equation (6).
Observation 1 The objective J(r̂) is minimized when r̂(s, a) = dπ(s, a)/dD(s, a), ∀(s, a).
Now we will diverge from the derivation of DualDICE. Note our choice of notation, r̂(s, a), in Equation (6). Describing the objective in terms of a fictitious reward r̂ will allow us to draw on familiar relationships between rewards and value functions and build stronger intuition. Consider the equivalence between the value function over initial state-action pairs and the expectation of rewards over the state-action visitation of the policy: (1 − γ)Es0,a0[Qπ(s0, a0)] = Edπ[r(s, a)]. It follows that the expectation over dπ in Equation (6) can be replaced with a value function Q̂π over r̂:
minr̂(s,a)∀(s,a) J(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − (1 − γ)Es0,a0[Q̂π(s0, a0)]. (7)
Using (1 − γ)Es0,a0[Q̂π(s0, a0)] = Edπ[r̂(s, a)] provides a method for accessing the otherwise intractable dπ. 
This form of the objective is convenient because we can estimate the expectation over dD by sampling from the data set, and Q̂π can be computed using any policy evaluation method.
While we can estimate both terms in Equation (7) with relative ease, the optimization problem is not directly differentiable and would require re-learning the value function Q̂π with every adjustment to the learned reward r̂. Fortunately, there exists a straightforward paradigm which enables direct reward function optimization, known as the successor representation (SR).
Consider the relationship between the SR Ψπ of the target policy π and its value function in the tabular setting: Es0,a0[Qπ(s0, a0)] = Es0[V π(s0)] = Es0[∑s Ψπ(s|s0)Eπ[r(s, a)]]. It follows that we can create an optimization problem over the reward function r̂ from Equation (7):
minr̂(s,a)∀(s,a) JΨ(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − (1 − γ)Es0[∑s Ψπ(s|s0)Ea∼π[r̂(s, a)]]. (8)
This objective can be generalized to continuous states by considering the deep SR ψπ over features φ(s, a) and optimizing the weights of a linear function w. In this instance, the estimated density ratio r̂(s, a) is determined by w>φ(s, a) and we can optimize w by minimizing the following:
minw J(w) := 1/2 EdD[(w>φ(s, a))2] − (1 − γ)Es0,a0∼π[w>ψπ(s0, a0)]. (9)
Since this optimization problem is convex, it has a closed form solution. Define D0 as the set of start states contained in D. The unique optimizer of Equation (9) is as follows:
w∗ = (1/|D| ∑(s,a)∈D φ(s, a)φ(s, a)>)−1 (1 − γ)/|D0| ∑s0∈D0 ∑a0 π(a0|s0)ψπ(s0, a0), (10)
where φ(s, a)φ(s, a)> is the outer product whose (i, j)th entry is φi(s, a)φj(s, a), with φi the ith entry of the vector φ. However, we may generally prefer iterative, gradient-based solutions for scalability. We call the combination of learning the deep SR followed by optimizing Equation (9) the Successor Representation stationary DIstribution Correction Estimation (SR-DICE). SR-DICE is split into three learning phases: (1) learning the encoding φ, (2) learning the deep SR ψπ, and (3) optimizing Equation (9). For the first two phases we follow standard practices from prior work (Kulkarni et al., 2016; Machado et al., 2018a), training the encoding φ via an encoder-decoder network to reconstruct the transition and training the deep SR ψπ using TD learning-style methods. We summarize SR-DICE in Algorithm 1. Additional implementation-level details can be found in Appendix D.
Algorithm 1 SR-DICE
1: At each time step sample mini-batch of N transitions (s, a, r, s′) and start states s0 from D.
2: for t = 1 to T1 do
3: minφ,D 1/2 (D(φ(s, a)) − (s, a))2. # Encoding φ loss
4: for t = 1 to T2 do
5: minψπ 1/2 (φ(s, a) + γψ′(s′, a′) − ψπ(s, a))2. # Deep successor representation ψπ loss
6: for t = 1 to T3 do
7: minw 1/2 (w>φ(s, a))2 − (1 − γ)w>ψπ(s0, a0). # Density ratio w loss (Equation (9))
8: Output: R(π) estimate |D|−1 ∑(s,a,r)∈D w>φ(s, a)r(s, a).
Although it is difficult to make any guarantees on the accuracy of an approximate ψπ trained with deep RL techniques, if we assume ψπ is exact, then we can show that SR-DICE learns the least squares estimator to the desired density ratio.
Theorem 1 Assuming (1 − γ)Es0,a0[ψπ(s0, a0)] = E(s,a)∼dπ[φ(s, a)], then the optimizer w∗ of the objective J(w) is the least squares estimator of ∫S×A (w>φ(s, a) − dπ(s, a)/dD(s, a))2 d(s, a).
Hence, the main sources of error in SR-DICE are learning the encoding φ and the deep SR ψπ. 
Notably, both of these steps are independent of the main optimization problem of learning w, as we have shifted the challenging aspects of density ratio estimation onto learning the deep SR. This leaves deep RL to do the heavy lifting. The remaining optimization problem, Equation (9), only involves directly updating the weights of a linear function, and unlike many other MIS methods, requires no tricky minimax optimization.
SR-DICE can also be applied to any pre-existing SR, or included into standard deep RL algorithms (Mnih et al., 2015; Lillicrap et al., 2015; Hessel et al., 2017; Fujimoto et al., 2018) by treating the encoding φ as an auxiliary reward. This provides an alternate form of policy evaluation through MIS, or a method to access density ratios between the target policy and the sampling distribution, with possible applications to exploration, policy regularization, or unbiased off-policy gradients (Liu et al., 2019b; Nachum et al., 2019b; Touati et al., 2020)." }, { "heading": "3.2 REWARD FUNCTIONS & MIS: A CASE STUDY ON DUALDICE", "text": "One of the main attractions of MIS methods is that they use importance sampling ratios which re-weight individual transitions rather than entire trajectories. While independent of the length of trajectories collected by the behavior policy, we remark that the optimization problem is not independent of the implicit horizon defined by the discount factor γ, and MIS methods are still subject to the dynamics of the underlying MDP. In SR-DICE we explicitly handle the dynamics of the MDP by learning the SR with TD learning methods. In this case study, we examine a well-known MIS method, DualDICE (Nachum et al., 2019a), and discuss how it propagates updates through the MDP by considering its relationship to residual algorithms which minimize the mean squared Bellman error (Baird, 1995). By viewing other MIS methods through the lens of reward function optimization, we can understand their connection to value-based methods, shedding light on their optimization properties and challenges.
Recall the underlying objective of DualDICE:
minf J(f) := 1/2 E(s,a)∼dD[(f(s, a) − γEs′,π[f(s′, a′)])2] − (1 − γ)Es0,a0∼π[f(s0, a0)]. (11)
By viewing the problem as reward function optimization, we can transform DualDICE into a more familiar format that considers rewards and value functions. To begin, we state the following theorem.
Theorem 2 Given an MDP (S,A, ·, p, d0, γ), policy π, and function f : S × A → R, define the reward function r̂ : S × A → R as r̂(s, a) = f(s, a) − γEs′,a′∼π[f(s′, a′)]. Then it follows that the value function Q̂π defined by the policy π, MDP, and reward function r̂, is the function f.
The proof follows naturally from the Bellman equation (Bellman, 1957). Informally, Theorem 2 states that any function f can be treated as an exact value function Q̂π, for a carefully chosen reward function r̂(s, a) = f(s, a) − γEs′,π[f(s′, a′)]. Theorem 2 provides two perspectives on DualDICE. By replacing terms in Equation (11) with rewards and value functions, it can be viewed as the same objective as Equation (7) from SR-DICE:
minr̂(s,a)∀(s,a) J(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − (1 − γ)Es0,a0[Q̂π(s0, a0)]. (12)
The first insight from this relationship is that like SR-DICE, DualDICE can be viewed as reward function optimization and still requires some element of value learning. However, for DualDICE the forms of the reward and value functions are unique. 
From Theorem 2, we remark that f(s0, a0) is always exactly Q̂π(s0, a0) without additional computation. This occurs because f(s0, a0) is not a function of the reward; rather, the rewards are defined as a function of f. When the reward function is adjusted, f(s0, a0) may remain unchanged and other rewards are adjusted to compensate.
To emphasize how DualDICE is subject to the properties of value learning, consider a second perspective on DualDICE taken from Theorem 2, where we replace f with Q̂π:
minQ̂π J(Q̂π) := 1/2 E(s,a)∼dD[(Q̂π(s, a) − γEs′,π[Q̂π(s′, a′)])2] − (1 − γ)Es0,a0[Q̂π(s0, a0)]. (13)
The first term is equivalent to Bellman residual minimization (Bellman, 1957; Baird, 1995), where the reward is 0 for all state-action pairs. The second term attempts to maximize only the initial value function Q̂π(s0, a0). From a practical perspective this relationship is concerning, as the first term relies on successfully propagating updates throughout the MDP to balance out with changes to the initial values, which may occur quickly. Consequently, in cases where DualDICE performs poorly, we may see the initial values approach infinity.
To understand how this objective performs empirically, we measured the output of DualDICE on a basic OPE task with an identical behavior and target policy. In this case the true MIS ratio is 1.0 for all state-action pairs. Consequently, both the fictitious reward EdD[f(s, a) − γEs′,π[f(s′, a′)]] and the normalized initial value function (1 − γ)Es0,a0[f(s0, a0)] should approach 1.0. In Figure 1, we graph both EdD[w(s, a)], where w(s, a) ≈ f(s, a) − γEs′,π[f(s′, a′)] is the ratio used by DualDICE (Equation (5)), and (1 − γ)Es0,a0[f(s0, a0)] output by DualDICE.
While on the easier task, Pendulum, the performance looks reasonable, on HalfCheetah we can see that (1 − γ)Es0,a0[f(s0, a0)] greatly overestimates and EdD[w(s, a)] is highly unstable. This result is intuitive given the form of Equation (13), where the first term, which w(s, a) approximates, is pushed slowly towards 0 and the second term is pushed towards ∞. On the lower dimensional problem, Pendulum, the objective is optimized more easily and both terms approach 1.0. On the harder problem, HalfCheetah, we can see how balancing residual learning, which is notoriously slow (Baird, 1995), with a maximization term on initial states creates a difficult optimization procedure.
These results highlight the importance, and challenge, of propagating updates through the MDP. MIS methods are not fundamentally different from value-based methods, and viewing them as such may allow us to develop richer foundations for MIS." }, { "heading": "4 RELATED WORK", "text": "Off-Policy Evaluation. Off-policy evaluation is a well-studied problem with several families of approaches. One family of approaches is based on importance sampling, which re-weights trajectories by the ratio of likelihoods under the target and behavior policy (Precup et al., 2001). Importance sampling methods are unbiased but suffer from variance which can grow exponentially with the length of trajectories (Li et al., 2015; Jiang & Li, 2016). Consequently, research has focused on variance reduction (Thomas & Brunskill, 2016; Munos et al., 2016; Farajtabar et al., 2018) or contextual bandits (Dudík et al., 2011; Wang et al., 2017). 
Marginalized importance sampling methods (Liu et al., 2018) aim to avoid this exponential variance by considering the ratio in stationary distributions, giving an estimator with variance which is polynomial with respect to horizon (Xie et al., 2019; Liu et al., 2019a). Follow-up work has introduced a variety of approaches and improvements, allowing them to be behavior-agnostic (Nachum et al., 2019a; Uehara & Jiang, 2019; Mousavi et al., 2020) and operate in the undiscounted setting (Zhang et al., 2020a;c). In a similar vein, some OPE methods rely on emphasizing, or re-weighting, updates based on their stationary distribution (Sutton et al., 2016; Mahmood et al., 2017; Hallak & Mannor, 2017; Gelada & Bellemare, 2019), or learning the stationary distribution directly (Wang et al., 2007; 2008).\nSuccessor Representation. Introduced originally by Dayan (1993) as an approach for improving generalization in temporal-difference methods, successor representations (SR) were revived by recent work on deep successor RL (Kulkarni et al., 2016) and successor features (Barreto et al., 2017) which demonstrated that the SR could be generalized to a function approximation setting. The SR has found applications for task transfer (Barreto et al., 2018; Grimm et al., 2019), navigation (Zhang et al., 2017; Zhu et al., 2017), and exploration (Machado et al., 2018a; Janz et al., 2019). It has also been used in a neuroscience context to model generalization and human reinforcement learning (Gershman et al., 2012; Momennejad et al., 2017; Gershman, 2018). The SR and our work also relate to state representation learning (Lesort et al., 2018) and general value functions (Sutton & Tanner, 2005; Sutton et al., 2011)." }, { "heading": "5 EXPERIMENTS", "text": "To evaluate our method, we perform several off-policy evaluation (OPE) experiments on a variety of domains. The aim is to evaluate the normalized average discounted reward E(s,a)∼dπ,r[r(s, a)] of a target policy π. We benchmark our algorithm against two MIS methods, DualDICE (Nachum et al., 2019a) and GradientDICE (Zhang et al., 2020c), two deep RL approaches and the true return of the behavior policy. The first deep RL method is a DQN-style approach (Mnih et al., 2015) where actions are selected by π (denoted Deep TD) and the second is the deep SR where the weight w is trained to minimize the MSE between w>φ(s, a) and r(s, a) (denoted Direct-SR) (Kulkarni et al., 2016). Environment-specific experimental details are presented below and complete algorithmic and hyper-parameter details are included in the supplementary material.\nContinuous-Action Experiments. We evaluate the methods on a variety of MuJoCo environments (Brockman et al., 2016; Todorov et al., 2012). We examine two experimental settings. In both settings the target policy π and behavior policy πb are stochastic versions of a deterministic policy πd obtained from training the TD3 algorithm (Fujimoto et al., 2018). We evaluate a target policy π = πd +N (0, σ2), where σ = 0.1. • For the “easy” setting, we gather a data set of 500k transitions using a behavior policy πb = πd +N (0, σ2b ), where σb = 0.133. This setting roughly matches the experimental setting defined by Zhang et al. (2020a). 
• For the “hard” setting, we gather a significantly smaller data set of 50k transitions using a behavior policy which acts randomly with p = 0.2 and uses πd + N(0, σ2b), where σb = 0.2, with p = 0.8.
Unless specified otherwise, we use a discount factor of γ = 0.99 and all hyper-parameters are kept constant across environments. All experiments are performed over 10 seeds. We display the results of the “easy” setting in Figure 2 and the “hard” setting in Figure 3.
Atari Experiments. We also test each method on several Atari games (Bellemare et al., 2013), which are challenging due to their high-dimensional image-based state space. Standard preprocessing steps are applied (Castro et al., 2018) and sticky actions are used (Machado et al., 2018b) to increase difficulty and remove determinism. Each method is trained on a data set of one million time steps. The target policy is the deterministic greedy policy trained by Double DQN (Van Hasselt et al., 2016). The behavior policy is the ε-greedy policy with ε = 0.1. We use a discount factor of γ = 0.99. Experiments are performed over 3 seeds. Results are displayed in Figure 4. Additional experiments with different behavior policies can be found in the supplementary material.
Discussion. Across the board we find SR-DICE significantly outperforms the MIS methods. From the MSE graphs, we can see SR-DICE achieves much lower error in every task. Looking at the estimated values of R(π) in the continuous-action environments, Figure 3, we can see that SR-DICE converges rapidly and maintains a stable estimate, while the MIS methods are particularly unstable, especially in the case of DualDICE. These observations are consistent in the Atari domain (Figure 4). Overall, we find the general trend in performance is Deep TD > SR-DICE = Direct-SR > MIS. Notably, Direct-SR and SR-DICE perform similarly in every task, suggesting that the limiting factor in SR-DICE is the quality of the deep successor representation.
Ablation. To study the robustness of SR-DICE relative to the competing methods, we perform an ablation study and investigate the effects of data set size, discount factor, and two different behavior policies. Unless specified otherwise, we use experimental settings matching the “hard” setting. We report the results in Figure 5. In the data set size experiment (a), SR-DICE performs well with as few as 5k transitions (5 trajectories). In some instances, the performance is unexpectedly improved with less data, although incrementally. For small data sets, the SR methods outperform Deep TD. One hypothesis is that the encoding acts as an auxiliary reward and helps stabilize learning in the low data regime. In (b) we report the performance over changes in discount factor. The relative ordering across methods is unchanged. In (c) we use a behavior policy of N(0, σ2b), with σb = 0.5, a much larger standard deviation than either setting for continuous control. The results are similar to the original setting, with an increased bias on the deep RL methods. In (d) we use the underlying deterministic policy as both the behavior and target policy. The baseline MIS methods perform surprisingly poorly, once again demonstrating their weakness on harder domains." }, { "heading": "6 CONCLUSION", "text": "In this paper, we introduce a method which can perform marginalized importance sampling (MIS) using the successor representation (SR) of the target policy. 
This is achieved by deriving an MIS formulation that can be viewed as reward function optimization. By using the SR, we effectively disentangle the dynamics of the environment from learning the reward function. This allows us to (a) use well-known deep RL methods to effectively learn the SR in challenging domains (Mnih et al., 2015; Kulkarni et al., 2016) and (b) provide a straightforward loss function to learn the density ratios without any of the optimization tricks necessary for previous methods (Liu et al., 2018; Uehara & Jiang, 2019; Nachum et al., 2019a; Zhang et al., 2020c). This reward function interpretation also provides insight into prior MIS methods by showing how they are connected to value-based methods. Our resulting algorithm, SR-DICE, outperforms prior MIS methods in terms of both performance and stability and is the first MIS method which demonstrably scales to high-dimensional problems.
As a secondary finding, our benchmarking shows that current MIS methods underperform more traditional value-based methods at OPE on high-dimensional tasks, suggesting that for practical applications, deep RL approaches should still be preferred. Regardless, outside of OPE there exists a wealth of possible applications for MIS ratios, from imitation (Kostrikov et al., 2019) to policy optimization (Imani et al., 2018; Liu et al., 2019b; Zhang et al., 2019) to mitigating distributional shift in offline RL (Fujimoto et al., 2019b; Kumar et al., 2019). For ease of use, our code is provided, and we hope our algorithm and insight will provide valuable contributions to the field." }, { "heading": "A DETAILED PROOFS.", "text": "" }, { "heading": "A.1 OBSERVATION 1", "text": "Observation 1 The objective J(r̂) is minimized when r̂(s, a) = dπ(s, a)/dD(s, a), ∀(s, a).
minr̂(s,a)∀(s,a) J(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − (1 − γ)Es0,a0[Q̂π(s0, a0)] (14)
:= 1/2 E(s,a)∼dD[r̂(s, a)2] − E(s,a)∼dπ[r̂(s, a)]. (15)" }, { "heading": "Proof.", "text": "Take the partial derivative of J(r̂) with respect to r̂(s, a):
∂/∂r̂(s, a) (1/2 E(s,a)∼dD[r̂(s, a)2] − E(s,a)∼dπ[r̂(s, a)]) = dD(s, a)r̂(s, a) − dπ(s, a). (16)
Then setting ∂J(r̂)/∂r̂(s, a) = 0, we have that J(r̂) is minimized when r̂(s, a) = dπ(s, a)/dD(s, a) for all state-action pairs (s, a)." }, { "heading": "A.2 THEOREM 1", "text": "Theorem 1 Assuming (1 − γ)Es0,a0[ψπ(s0, a0)] = E(s,a)∼dπ[φ(s, a)], then the optimizer w∗ of the objective J(w) is the least squares estimator of ∫S×A (w>φ(s, a) − dπ(s, a)/dD(s, a))2 d(s, a).
minw J(w) := 1/2 EdD[(w>φ(s, a))2] − (1 − γ)Es0,a0∼π[w>ψπ(s0, a0)]. (17)" }, { "heading": "Proof.", "text": "From our assumption we have:
minw J(w) := 1/2 EdD[(w>φ(s, a))2] − E(s,a)∼dπ[w>φ(s, a)]. (18)
Let M = |S| × |A| and N be the feature dimension. Let φ(s, a) be an N × 1 feature vector and Φ the M × N matrix where each row corresponds to a φ(s, a)> vector. Let w be an N × 1 vector of parameters. 
Let dπ and dD be M × 1 vectors of the values of dπ(s, a) and dD(s, a) for all (s, a).
First note the least squares estimator of ∫S×A (w>φ(s, a) − dπ(s, a)/dD(s, a))2 d(s, a) = ‖Φw − dπ/dD‖2 is ŵ = (Φ>Φ)−1Φ>(dπ/dD), where the division is element-wise.
Now consider our optimization problem:
J(w) = 0.5d>D(Φw)2 − d>π Φw = 0.5d>D(Φw)>Φw − d>π Φw = 0.5d>Dw>Φ>Φw − d>π Φw. (19)
Now take the gradient of J(w) with respect to w and set it equal to 0:
∇w[0.5d>Dw>Φ>Φw − d>π Φw] = 0
d>Dw>Φ>Φ = d>π Φ
w>Φ>Φ = (dπ/dD)>Φ
Φ>Φw = Φ>(dπ/dD)
w = (Φ>Φ)−1Φ>(dπ/dD). (20)
It follows that the optimizer of J(w) is the least squares estimator." }, { "heading": "A.3 THEOREM 2", "text": "Theorem 2 Given an MDP (S,A, ·, p, d0, γ), policy π, and function f : S × A → R, define the reward function r̂ : S × A → R as r̂(s, a) = f(s, a) − γEs′,a′∼π[f(s′, a′)]. Then it follows that the value function Q̂π defined by the policy π, MDP, and reward function r̂, is the function f." }, { "heading": "Proof.", "text": "Define r̂(s, a) = f(s, a) − γEπ[f(s′, a′)]. Then note for all state-action pairs (s, a) we have f(s, a) = r̂(s, a) + γEπ[f(s′, a′)] = f(s, a) − γEπ[f(s′, a′)] + γEπ[f(s′, a′)] = f(s, a), satisfying the Bellman equation. It follows that f = Q̂π by the uniqueness of the value function (Bertsekas & Tsitsiklis, 1996).
This result can also be obtained by considering state-action reward shaping (Wiewiora et al., 2003), and treating f as the potential function." }, { "heading": "B TABULAR SR-DICE", "text": "The experimental performance of SR-DICE, in particular its reliance on the SR, can be partially explained by examining its tabular counterpart. In fact, we can show that the value estimate derived from SR-DICE is exactly equal to a value estimate derived directly from the SR.
Recall the form of SR-DICE’s objective in a tabular setting, as a function of the SR Ψπ:
minr̂(s,a)∀(s,a) JΨ(r̂) := 1/2 E(s,a)∼dD[r̂(s, a)2] − (1 − γ)Es0[∑s Ψπ(s|s0)Ea∼π[r̂(s, a)]], (21)
which has the following gradient:
∇r̂(s,a)JΨ(r̂) := dD(s, a)r̂(s, a) − (1 − γ)∑s0 p(s0)Ψπ(s|s0)π(a|s). (22)
We can compute this gradient from samples. Define D0 as the set of start states s0 ∈ D. It follows:
∇r̂(s,a)JΨ(r̂) := (1/|D|)∑(s′,a′)∈D 1(s′ = s, a′ = a) r̂(s, a) − (1 − γ)(1/|D0|)∑s0∈D0 Ψπ(s|s0)π(a|s). (23)
Setting the above gradient to 0 and solving for r̂(s, a), we have the optimizer of JΨ(r̂):
r̂(s, a) = (1 − γ)(1/|D0|)∑s0∈D0 Ψπ(s|s0)π(a|s) · |D| / ∑(s′,a′)∈D 1(s′ = s, a′ = a). (24)
Now consider the MIS equation for estimating the objective R(π) = (1 − γ)Es0[V π(s0)], where r̂ is an estimate of dπ(s, a)/dD(s, a):
(1/|D|)∑(s,a)∈D (dπ(s, a)/dD(s, a)) r(s, a). (25)
For convenience, assume every state-action pair (s, a) is contained at least once in D. Although the result holds regardless, this assumption allows us to avoid some cumbersome details. Replace dπ(s, a)/dD(s, a) with r̂ in Equation (25) and expand and simplify:
(1/|D|)∑(s,a)∈D (1 − γ)(1/|D0|)∑s0∈D0 Ψπ(s|s0)π(a|s) · |D| / (∑(s′,a′)∈D 1(s′ = s, a′ = a)) · r(s, a) (26)
= (1 − γ)(1/|D0|)∑s0∈D0 ∑(s,a)∈D Ψπ(s|s0)π(a|s) (1/∑(s′,a′)∈D 1(s′ = s, a′ = a)) r(s, a) (27)
= (1 − γ)(1/|D0|)∑s0∈D0 ∑(s,a)∈S×A Ψπ(s|s0)π(a|s)r(s, a). (28)
Noting that ∑(s,a)∈S×A Ψπ(s|s0)π(a|s)r(s, a) = V π(s0), we can see that SR-DICE returns the same solution as the SR solution for estimating R(π)." 
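This equivalence can also be checked numerically. Below is a minimal numpy sketch (our own illustration), under the simplifying assumptions of a single start state and a data set containing every state-action pair exactly once, so that dD is uniform:

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_a, gamma = 4, 2, 0.99

P = rng.random((n_s, n_a, n_s)); P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_s, n_a))
pi = np.full((n_s, n_a), 1.0 / n_a)              # uniform policy
P_pi = np.einsum('sa,sap->sp', pi, P)            # state transition under pi
Psi = np.linalg.inv(np.eye(n_s) - gamma * P_pi)  # tabular SR

s0 = 0
# D holds each (s, a) exactly once, so every count in Equation (24) is 1
# and r_hat(s, a) = (1 - gamma) * |D| * Psi[s0, s] * pi(a | s).
D = n_s * n_a
r_hat = (1 - gamma) * D * Psi[s0][:, None] * pi

mis_estimate = (r_hat * r).sum() / D                         # Equation (25)
sr_estimate = (1 - gamma) * Psi[s0] @ (pi * r).sum(axis=1)   # (1-gamma) V(s0)
assert np.isclose(mis_estimate, sr_estimate)
```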
}, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "In this section, we include additional experiments and visualizations, covering extra domains, additional ablation studies, run time experiments and additional behavior policies in the Atari domain." }, { "heading": "C.1 EXTRA CONTINUOUS DOMAINS", "text": "Although our focus is on high-dimensional domains, the environments, Pendulum and Reacher, have appeared in several related MIS papers (Nachum et al., 2019a; Zhang et al., 2020a). Therefore, we have included results for these domains in Figure 6. All experimental settings match the experiments in the main body, and are described fully in Appendix F.\n0.04" }, { "heading": "C.2 REPRESENTATION LEARNING & MIS", "text": "SR-DICE relies a disentangled representation learning phase where an encoding φ is learned, followed by the deep successor representation ψπ which are used with a linear vector w to estimate the density ratios. In this section we perform some experiments which attempt to evaluate the importance of representation learning by comparing their influence on the baseline MIS methods.\nAlternate representations. We examine both DualDICE (Nachum et al., 2019a) and GradientDICE (Zhang et al., 2020c) under four settings where we pass the representations φ and ψπ to their networks, where both φ and ψπ are learned in identical fashion to SR-DICE.\n(1) Input encoding φ, f(φ(s, a)), w(φ(s, a)). (2) Input SR ψπ , f(ψπ(s, a)), w(ψπ(s, a)). (3) Input encoding φ, linear networks, f>φ(s, a), w>φ(s, a). (4) Input SR ψπ , linear networks, f>ψπ(s, a), w>ψπ(s, a).\nSee Appendix E for specific details on the baselines. We report the results in Figure 7. For GradientDICE, no benefit is provided by varying the representations, although using the encoding φ matches the performance of vanilla GradientDICE regardless of the choice of network, providing some validation that φ is a reasonable encoding. Interestingly, for DualDICE, we see performance gains from using the SR ψπ as a representation: slightly as input, but significantly when used with linear networks. On the other hand, as GradientDICE performs much worse with the SR, it is clear that the SR cannot be used as a representation without some degree of forethought.\n0.02\n0.04\nIncreased capacity. As SR-DICE uses a linear function on top of a representation trained with the same capacity as the networks in DualDICE and GradientDICE, our next experiment examines if this additional capacity provides benefit to the baseline methods. To do, we expand each network in both baselines by adding an additional hidden layer. The results are reported in Figure 8. We find there is a very slight decrease in performance when using the larger capacity networks. This suggests the performance gap from SR-DICE over the baseline methods has little to do with model size.\n0.02\n0.04" }, { "heading": "C.3 TOY DOMAINS", "text": "We additional test the MIS algorithms on a toy random-walk experiment with varying feature representations, based on a domain from (Sutton et al., 2009).\nDomain. The domain is a simple 5-state MDP (x1, x2, x3, x4, x5) with two actions (a0, a1), where action a0 induces the transition xi → xi−1 and action a1 induces the transition xi → xi+1, with the state x1 looping to itself with action a0 and x5 looping to itself with action a5. Episodes begin in the state x1.\nTarget. We evaluate policy π which selects actions uniformly, i.e. π(a0|xi) = π(a1|xi) = 0.5 for all states xi. 
Our data set D contains all 10 possible state-action pairs and is sampled uniformly. We use a discount factor of γ = 0.99. Methods are evaluated on the average MSE between their estimate of dπ/dD on all state-action pairs and the ground-truth value, which is calculated analytically.
Hyper-parameters. Since we are mainly interested in a function approximation setting, each method uses a small neural network with two hidden layers of 32, followed by tanh activation functions. All networks used stochastic gradient descent with a learning rate α tuned for each method out of {1, 0.5, 0.1, 0.05, 0.01, 0.001}. This resulted in α = 0.05 for DualDICE, α = 0.1 for GradientDICE, and α = 0.05 for SR-DICE. Although there are a small number of possible data points, we use a batch size of 128 to resemble the regular training procedure. As recommended by the authors, we use λ = 1 for GradientDICE (Zhang et al., 2020c), which was not tuned. For SR-DICE, we update the target network at every time step (τ = 1), which was not tuned.
Since there are only 10 possible state-action pairs, we use the closed form solution for the vector w (Equation (10)). Additionally, we skip the state representation phase of SR-DICE, instead learning the SR ψπ over the given representation of each state, such that the encoding φ = x. This allows us to test SR-DICE on a variety of representations rather than using a learned encoding. Consequently, with these choices, SR-DICE has no pre-training phase, and therefore, unlike every other graph in this paper, we report the results as the SR is trained, rather than as the vector w is trained.
Features. To test the robustness of each method we examine three versions of the toy domain, each using a different feature representation over the same 5-state MDP. These feature sets are again taken from (Sutton et al., 2009).
• Tabular features: states are represented by a one-hot encoding, for example x2 = [0, 1, 0, 0, 0].
• Inverted features: states are represented by the inverse of a one-hot encoding, for example x2 = [1/2, 0, 1/2, 1/2, 1/2].
• Dependent features: states are represented by 3 features, which is not sufficient to cover all states exactly. In this case x1 = [1, 0, 0], x2 = [1/√2, 1/√2, 0], x3 = [1/√3, 1/√3, 1/√3], x4 = [0, 1/√2, 1/√2], x5 = [0, 0, 1]. Since our experiments use neural networks rather than linear functions, this representation is mainly meant to test SR-DICE, where we skip the state representation phase for SR-DICE and use the encoding φ = x, limiting the representation of the SR.
Results. We report the results in Figure 9. We remark on several observations. SR-DICE learns significantly faster than the baseline methods, likely due to its use of temporal difference methods in the SR update, rather than using an update similar to residual learning, which is notoriously slow (Baird, 1995; Zhang et al., 2020b). GradientDICE appears to still be improving, although we limit training to 50k time steps, which we feel is sufficient given the domain is deterministic and only has 5 states. Notably, GradientDICE also uses a higher learning rate than SR-DICE and DualDICE. We also find the final performance of SR-DICE is much better than DualDICE and GradientDICE in the domains where the feature representation is not particularly destructive, highlighting the easier optimization of SR-DICE. In the case of the dependent features, we find DualDICE outperforms SR-DICE after sufficient updates. 
However, we remark that this concern could likely be resolved by learning the features, and that SR-DICE still outperforms GradientDICE. Overall, we believe these results demonstrate that SR-DICE’s strong empirical performance is consistent across simpler domains as well as the high-dimensional domains we examine in the main body." }, { "heading": "C.4 RUN TIME EXPERIMENTS", "text": "In this section, we evaluate the run time of each algorithm used in our experiments. Although SR-DICE relies on pre-training the deep successor representation before learning the density ratios, we find each marginalized importance sampling (MIS) method uses a similar amount of compute, due to the reduced cost of training w after the pre-training phase.
We evaluate the run time on the HalfCheetah environment in MuJoCo (Todorov et al., 2012) and OpenAI gym (Brockman et al., 2016). As in the main set of experiments, each method is trained for 250k time steps. Additionally, SR-DICE and Direct-SR train the encoder-decoder for 30k time steps and the deep successor representation for 100k time steps before training w. Run time is averaged over 3 seeds. All time-based experiments are run on a single GeForce GTX 1080 GPU and an Intel Core i7-6700K CPU. Results are reported in Figure 10.
We find the MIS algorithms run in a comparable time, regardless of the pre-training step involved in SR-DICE. This can be explained as training w in SR-DICE involves significantly less compute than DualDICE and GradientDICE, which update multiple networks. On the other hand, the deep reinforcement learning approaches run in about half the time of SR-DICE." }, { "heading": "C.5 ATARI EXPERIMENTS", "text": "To better evaluate the algorithms in the Atari domain, we run two additional experiments where we swap the behavior policy. We observe similar trends as the experiments in the main body of the paper. In both experiments we keep all other settings fixed. Notably, we continue to use the same target policy, corresponding to the greedy policy trained by Double DQN (Van Hasselt et al., 2016), the same discount factor γ = 0.99, and the same data set size of 1 million.
Increased noise. In our first experiment, we attempt to increase the randomness of the behavior policy. As this can cause destructive behavior in the performance of the agent, we adopt an episode-dependent policy which selects between the noisy policy or the deterministic greedy policy at the beginning of each episode. This is motivated by the offline deep reinforcement learning experiments from (Fujimoto et al., 2019a). As a result, we use an ε-greedy policy with p = 0.8 and the deterministic greedy policy (the target policy) with p = 0.2. ε is set to 0.2, rather than 0.1 as in the experiments in the main body of the paper. Results are reported in Figure 11.
We observe very similar trends to the original set of experiments. Again, we note DualDICE and GradientDICE perform very poorly, while SR-DICE, Direct-SR, and Deep TD achieve a reasonable, but biased, performance. In this setting, we still find the behavior policy is the closest estimate of the true value of R(π).
Separate behavior policy. In this experiment, we use a behavior policy which is distinct from the target policy, rather than simply adding noise. This behavior policy is derived from an agent trained with prioritized experience replay and Double DQN (Schaul et al., 2016). Again, we use an ε-greedy policy, with ε = 0.1. We report the results in Figure 12.
Again, we observe similar trends in performance. 
Notably, in the Asterix game, the performance of Direct-SR surpasses the behavior policy, suggesting off-policy evaluation can outperform the naïve estimator in settings where the policy is sufficiently "off-policy" and distinct." }, { "heading": "D SR-DICE PRACTICAL DETAILS", "text": "In this section, we cover some basic implementation-level details of SR-DICE. Note that code is provided for additional clarity.
SR-DICE uses two parametric networks, an encoder-decoder network to learn the encoding φ and a deep successor representation network ψπ. Additionally, SR-DICE uses the weights of a linear function w. SR-DICE begins by pre-training the encoder-decoder network and the deep successor representation before applying updates to w.
Algorithm 2 SR-DICE
1: Input: Data set D, target policy π, number of iterations T1, T2, T3, mini-batch size N, target update rate.
2: for t = 1 to T1 do # Train encoder-decoder
3: Sample mini-batch of N transitions (s, a, r, s′) from D.
4: minφ,Ds′,Da,Dr λs′(Ds′(φ(s, a)) − s′)2 + λa(Da(φ(s, a)) − a)2 + λr(Dr(φ(s, a)) − r)2.
5: end for
6: for t = 1 to T2 do # Train deep successor representation
7: Sample mini-batch of N transitions (s, a, r, s′) from D.
8: Sample a′ ∼ π(s′).
9: minψπ (φ(s, a) + γψ′(s′, a′) − ψπ(s, a))2.
10: If t mod target update rate = 0: ψ′ ← ψπ.
11: end for
12: for t = 1 to T3 do # Learn w
13: Sample mini-batch of N transitions (s, a, r, s′) from D.
14: Sample mini-batch of N start states s0 from D.
15: Sample a0 ∼ π(s0).
16: minw 1/2 (w>φ(s, a))2 − (1 − γ)w>ψπ(s0, a0).
17: end for
Encoder-Decoder. This encoder-decoder network encodes (s, a) to the feature vector φ(s, a), which is then decoded by several decoder heads. For the Atari domain, we choose to condition the feature vector only on states, φ(s), as the reward is generally independent of the action selection. This change applies to both SR-DICE and Direct-SR. Most design decisions are inspired by prior work (Machado et al., 2017; 2018a).
For continuous control, given a mini-batch transition (s, a, r, s′), the encoder-decoder network is trained to map the state-action pair (s, a) to the next state s′, the action a and reward r. The resulting loss function is as follows:
minφ,Ds′,Da,Dr L(φ,D) := λs′(Ds′(φ(s, a)) − s′)2 + λa(Da(φ(s, a)) − a)2 + λr(Dr(φ(s, a)) − r)2. (29)
We use λs′ = 1, λa = 1 and λr = 0.1.
For the Atari games, given a mini-batch transition (s, a, r, s′), the encoder-decoder network is trained to map the state s to the next state s′ and reward r, while penalizing the size of φ(s). The resulting loss function is as follows:
minφ,Ds′,Dr L(φ,D) := λs′(Ds′(φ(s)) − s′)2 + λr(Dr(φ(s)) − r)2 + λφφ(s)2. (30)
We use λs′ = 1, λr = 0.1 and λφ = 0.1.
Deep Successor Representation. The deep successor representation ψπ is trained to estimate the accumulation of φ. The training procedure resembles standard deep reinforcement learning algorithms. Given a mini-batch of transitions (s, a, r, s′) the network is trained to minimize the following loss:
minψπ L(ψπ) := (φ(s, a) + γψ′(s′, a′) − ψπ(s, a))2, (31)
where ψ′ is the target network. A target network is a frozen network used to provide stability (Mnih et al., 2015; Kulkarni et al., 2016) in the learning target. The target network is updated to the current network, ψ′ ← ψπ, after a fixed number of time steps, or updated slowly at each time step, ψ′ ← τψπ + (1 − τ)ψ′ (Lillicrap et al., 2015).
Marginalized Importance Sampling Weights. 
As described in the main body, we learn w by optimizing the following objective:
minw J(w) := 1/2 E(s,a)∼dD[(w>φ(s, a))2] − (1 − γ)Es0,a0∼π[w>ψπ(s0, a0)]. (32)
This is achieved by sampling state-action pairs uniformly from the data set D, alongside a mini-batch of start states s0, which are recorded at the beginning of each episode during data collection.
We summarize the learning procedure of SR-DICE in Algorithm 2." }, { "heading": "E BASELINES", "text": "In this section, we cover some of the practical details of each of the baseline methods." }, { "heading": "E.1 DUALDICE", "text": "Dual stationary DIstribution Correction Estimation (DualDICE) (Nachum et al., 2019a) uses two networks f and w. The general optimization problem is defined as follows:
minf maxw J(f, w) := E(s,a)∼dD,a′∼π,s′[w(s, a)(f(s, a) − γf(s′, a′)) − 0.5w(s, a)2] − (1 − γ)Es0,a0[f(s0, a0)]. (33)
In practice this corresponds to alternating single gradient updates to f and w. The authors suggest possible alternative functions to the convex function 0.5w(s, a)2, such as (2/3)|w(s, a)|3/2; however, in practice we found 0.5w(s, a)2 performed the best." }, { "heading": "E.2 GRADIENTDICE", "text": "Gradient stationary DIstribution Correction Estimation (GradientDICE) (Zhang et al., 2020c) uses two networks f and w, and a scalar u. The general optimization problem is defined as follows:
minw maxf,u J(w, u, f) := (1 − γ)Es0,a0[f(s0, a0)] + γE(s,a)∼dD,a′∼π,s′[w(s, a)f(s′, a′)] − E(s,a)∼dD[w(s, a)f(s, a)] + λ(E(s,a)∼dD[uw(s, a) − u] − 0.5u2). (34)
Similarly to DualDICE, in practice this involves alternating single gradient updates to w, u and f. As suggested by the authors, we use λ = 1." }, { "heading": "E.3 DIRECT-SR", "text": "Direct-SR is a policy evaluation version of the deep successor representation (Kulkarni et al., 2016). The encoder-decoder network and deep successor representation are trained in the exact same manner as SR-DICE (see Section D). Then, rather than train w to learn the marginalized importance sampling ratios, w is trained to recover the original reward function. Given a mini-batch of transitions (s, a, r, s′), the following loss is applied:
minw L(w) := (r − w>φ(s, a))2. (35)" }, { "heading": "E.4 DEEP TD", "text": "Deep TD, short for deep temporal-difference learning, takes the standard deep reinforcement learning methodology, akin to DQN (Mnih et al., 2015), and applies it to off-policy evaluation. Given a mini-batch of transitions (s, a, r, s′) the Q-network is updated by the following loss:
minQπ L(Qπ) := (r + γQ′(s′, a′) − Qπ(s, a))2, (36)
where a′ is sampled from the target policy π(·|s′). Similarly to training the deep successor representation, Q′ is a frozen target network which is updated to the current network after a fixed number of time steps, or incrementally at every time step." }, { "heading": "F EXPERIMENTAL DETAILS", "text": "All networks are trained with PyTorch (version 1.4.0) (Paszke et al., 2019). Any unspecified hyper-parameter uses the PyTorch default setting.
Evaluation. The marginalized importance sampling methods are measured by the average weighted reward from transitions sampled from a replay buffer, (1/N)∑(s,a,r) w(s, a)r(s, a), with N = 10k, while the deep RL methods use ((1 − γ)/M)∑s0 Q(s0, π(s0)), where M is the number of episodes. Each OPE method is trained on data collected by some behavioral policy πb. We estimate the "true" normalized average discounted reward of the target and behavior policies from 100 roll-outs in the environment." 
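For concreteness, the two evaluation estimators amount to the following (a minimal sketch of ours, with invented variable names):

```python
import numpy as np

def mis_estimate(w, rewards):
    # OPE estimate used for the MIS methods: the mean of
    # w(s, a) * r(s, a) over N transitions from the replay buffer.
    return np.mean(w * rewards)

def deep_rl_estimate(q_start, gamma=0.99):
    # OPE estimate used for the deep RL baselines: the normalized value
    # (1 - gamma) * Q(s0, pi(s0)), averaged over M recorded start states.
    return (1 - gamma) * np.mean(q_start)

# Hypothetical usage: w and rewards over N = 10k sampled transitions,
# q_start holding Q(s0, pi(s0)) for each of M start states.
rng = np.random.default_rng(2)
w, rewards = np.ones(10_000), rng.random(10_000)
q_start = rng.random(50)
print(mis_estimate(w, rewards), deep_rl_estimate(q_start))
```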
}, { "heading": "F.1 CONTINUOUS-ACTION ENVIRONMENTS", "text": "Our agents are evaluated via tasks interfaced through OpenAI gym (version 0.17.2) (Brockman et al., 2016), which mainly rely on the MuJoCo simulator (mujoco-py version 1.50.1.68) (Todorov et al., 2012). We provide a description of each environment in Table 1.\nExperiments. Our experiments are framed as off-policy evaluation tasks in which agents aim to evaluate R(π) = E(s,a)∼dπ,r[r(s, a)] for some target policy π. In each of our experiments, π corresponds to a noisy version of a policy trained by a TD3 agent (Fujimoto et al., 2018), a commonly used deep reinforcement learning algorithm. Denote πd, the deterministic policy trained by TD3 using the author’s GitHub https://github.com/sfujim/TD3. The target policy is defined as: π + N (0, σ2), where σ = 0.1. The off-policy evaluation algorithms are trained on a data set generated by a single behavior policy πb. The experiments are done with two settings “easy” and “hard” which vary the behavior policy and the size of the data set. All other settings are kept fixed. For the “easy” setting the behavior policy is defined as:\nπb = πd +N (0, σ2b ), σb = 0.133, (37) and 500k time steps are collected (approximately 500 trajectories for most tasks). The “easy” setting is roughly based on the experimental setting from Zhang et al. (2020a). For the “hard” setting the behavior policy adds an increased noise and selects random actions with p = 0.2:\nπb = { πd +N (0, σ2b ), σb = 0.2 p = 0.8, Uniform random action p = 0.2,\n(38)\nand only 50k time steps are collected (approximately 50 trajectories for most tasks). For Pendulumv0 and Humanoid-v3, the range of actions is [−2, 2] and [−0.4, 0.4] respectively, rather than [−1, 1], so we scale the size of the noise added to actions accordingly. We set the discount factor to γ = 0.99. All continuous-action experiments are over 10 seeds.\nPre-training. Both SR-DICE and Direct-SR rely on pre-training the encoder-decoder and deep successor representation ψ. These networks were trained for 30k and 100k time steps respectively. As noted in Section C.4, even when including this pre-training step, both algorithm have a lower running time than DualDICE and GradientDICE.\nArchitecture. For fair comparison, we use the same architecture for all algorithms except for DualDICE. This a fully connected neural network with 2 hidden layers of 256 and ReLU activation functions. This architecture was based on the network defined in the TD3 GitHub and was not tuned. For DualDICE, we found tanh activation functions improved stability over ReLU.\nFor SR-DICE and SR-Direct we use a separate architecture for the encoder-decoder network. The encoder is a network with a single hidden layer of 256, making each φ(s, a) a feature vector of 256.\nThere are three decoders for reward, action, and next state, respectively. For the action decoder and next state decoder we use a network with one hidden layer of 256. The reward decoder is a linear function of the encoding, without biases. All hidden layers are followed by ReLU activation functions.\nNetwork hyper-parameters. All networks are trained with the Adam optimizer (Kingma & Ba, 2014). We use a learning rate of 3e−4, again based on TD3 for all networks except for GradientDICE, which we found required careful tuning to achieve a reasonable performance. For GradientDICE we found a learning rate of 1e−5 for f and w, and 1e−2 for u achieved the highest performance. 
For DualDICE we chose the best performing learning rate out of {1e−2, 1e−3, 3e−4, 5e−5, 1e−5}. SR-DICE, Direct-SR and Deep TD were not tuned and use default hyper-parameters from deep RL algorithms. For training ψπ and Qπ for the deep reinforcement learning aspects of SR-DICE, Direct-SR and Deep TD, we use a mini-batch size of 256 and update the target networks using τ = 0.005, again based on TD3. For all MIS methods, we use a mini-batch size of 2048 as described by (Nachum et al., 2019a). We found SR-DICE and DualDICE succeeded with lower mini-batch sizes but did not test this in detail. All hyper-parameters are described in Table 2.
Visualizations. We graph the log MSE between the estimate of R(π) and the true R(π), where the log MSE is computed as log 0.5(X − R(π))2. We smooth the learning curves over a uniform window of 10. Agents were evaluated every 1k time steps and performance is measured over 250k time steps total. Markers are displayed every 25k time steps with offset for visual clarity." }, { "heading": "F.2 ATARI", "text": "We interface with Atari through OpenAI gym (version 0.17.2) (Brockman et al., 2016); all agents use the NoFrameskip-v0 environments that include sticky actions with p = 0.25 (Machado et al., 2018b).
Pre-processing. We use standard pre-processing steps based on Machado et al. (2018b) and Castro et al. (2018). We base our description on (Fujimoto et al., 2019a), which our code is closely based on. We define the following:
• Frame: output from the Arcade Learning Environment.
• State: conventional notion of a state in an MDP.
• Input: input to the network.
The standard pre-processing steps are as follows:
• Frame: gray-scaled and reduced to 84 × 84 pixels, tensor with shape (1, 84, 84).
• State: the maximum pixel value over the 2 most recent frames, tensor with shape (1, 84, 84).
• Input: concatenation over the previous 4 states, tensor with shape (4, 84, 84).
The notion of time steps is applied to states, rather than frames, and functionally, the concept of frames can be abstracted away once pre-processing has been applied to the environment.
The agent receives a state every 4th frame and selects one action, which is repeated for the following 4 frames. If the environment terminates within these 4 frames, the state received will be the last 2 frames before termination. For the first 3 time steps of an episode, the input, which considers the previous 4 states, sets the non-existent states to all 0s. An episode terminates after the game itself terminates, corresponding to multiple lives lost (which itself is game-dependent), or after 27k time steps (108k frames or 30 minutes in real time). Rewards are clipped to be within a range of [−1, 1]. Sticky actions are applied to the environment (Machado et al., 2018b), where the action at taken at time step t is set to the previously taken action at−1 with p = 0.25, regardless of the action selected by the agent. Note this replacement is abstracted away from the agent and data set. In other words, if the agent selects action a at state s, the transition stored will contain (s, a), regardless of whether a is replaced by the previously taken action.
Experiments. For the main experiments we use a behavior and target policy derived from a Double DQN agent (Van Hasselt et al., 2016), a commonly used deep reinforcement learning algorithm. The behavior policy is an ε-greedy policy with ε = 0.1 and the target policy is the greedy policy (i.e. ε = 0). In Section C.5 we perform two additional experiments with a different behavior policy. 
Otherwise, all hyper-parameters are fixed across experiments. For each, the data set contains 1 million transitions and uses a discount factor of γ = 0.99. Each experiment is evaluated over 3 seeds.
Pre-training. Both SR-DICE and Direct-SR rely on pre-training the encoder-decoder and deep successor representation ψ. Similar to the continuous-action tasks, these networks were trained for 30k and 100k time steps respectively.
Architecture. We use the same architecture as most value-based deep reinforcement learning algorithms for Atari, e.g. (Mnih et al., 2015; Van Hasselt et al., 2016; Schaul et al., 2016). This architecture is used for all networks, other than the encoder-decoder network, for fair comparison and was not tuned in any way.
The network has a 3-layer convolutional neural network (CNN) followed by a fully connected network with a single hidden layer. As mentioned in pre-processing, the input to the network is a tensor with shape (4, 84, 84). The first layer of the CNN has a kernel depth of 32 of size 8 × 8 and a stride of 4. The second layer has a kernel depth of 32 of size 4 × 4 and a stride of 2. The third layer has a kernel depth of 64 of size 3 × 3 and a stride of 1. The output of the CNN is flattened to a vector of 3136 before being passed to the fully connected network. The fully connected network has a single hidden layer of 512. Each layer, other than the output layer, is followed by a ReLU activation function. The final layer of the network outputs |A| values where |A| is the number of actions. The encoder-decoder used by SR-DICE and Direct-SR has a slightly different architecture. The encoder is identical to the aforementioned architecture, except the final layer outputs the feature vector φ(s) with 256 dimensions and is followed by a ReLU activation function. The next state decoder uses a single fully connected layer which transforms the vector of 256 to 3136 and then is passed through three transposed convolutional layers, each mirroring the CNN. Hence, the first layer has a kernel depth of 64, kernel size of 3 × 3 and a stride of 1. The second layer has a kernel depth of 32, kernel size of 4 × 4 and a stride of 2. The final layer has a kernel depth of 32, kernel size of 8 × 8 and a stride of 4. This maps to a (1, 84, 84) tensor. All layers other than the final layer are followed by ReLU activation functions. Although the input uses a history of the four previous states, as mentioned in the pre-processing section, we only reconstruct the succeeding state without history. We do this because there is overlap in the history of the current input and the input corresponding to the next time step. The reward decoder is a linear function without biases.
Network hyper-parameters. Our hyper-parameter choices follow standard settings, based largely on (Castro et al., 2018). All networks are trained with the Adam optimizer (Kingma & Ba, 2014). We use a learning rate of 6.25e−5. Although not traditionally thought of as a hyper-parameter, in accordance with prior work, we modify the ε used by Adam to be 1.5e−4. For w we use a learning rate of 3e−4 with the default setting of ε = 1e−8. For u we use 1e−3. We use a mini-batch size of 32 for all networks. SR-DICE, Direct-SR and Deep TD update the target network every 8k time steps. All hyper-parameters are described in Table 3.
Visualizations. We use identical visualizations to the continuous-action environments. 
Visualizations. We use identical visualizations to the continuous-action environments. Graphs display the log MSE between the estimate of R(π) and the true R(π) of the target policy, where the log MSE is computed as log 0.5(X − R(π))^2. We smooth the learning curves over a uniform window of 10. Agents were evaluated every 1k time steps and performance is measured over 250k time steps total. Markers are displayed every 25k time steps with an offset for visual clarity." } ]
2020
PRACTICAL MARGINALIZED IMPORTANCE SAMPLING
SP:98492c9032ac3381f5897bc6f17fd0f136546999
[ "This paper presents a randomized second-order smoothing certificate for providing robustness guarantees against adversarial attacks. By additionally using the gradient estimation of smoothed classifier, the proposed method has been shown to outperform the existing randomized smoothing certificate in practice. A variant of the method without explicitly estimating gradient vector has also been proposed to avoid the dependence of feature dimension in concentration analysis." ]
Randomized smoothing is a popular way of providing robustness guarantees against adversarial attacks: randomly-smoothed functions have a universal Lipschitz-like bound, allowing for robustness certificates to be easily computed. In this work, we show that there also exists a universal curvature-like bound for Gaussian random smoothing: given the exact value and gradient of a smoothed function, we compute a lower bound on the distance of a point to its closest adversarial example, called the Second-order Smoothing (SoS) robustness certificate. In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight. Interestingly, we show that the maximum achievable benefits, in terms of certified robustness, from using the additional information of the gradient norm are relatively small: because our bounds are tight, this is a fundamental negative result. The gain of SoS certificates further diminishes if we consider the estimation error of the gradient norms, for which we have developed an estimator. We therefore additionally develop a variant of Gaussian smoothing, called Gaussian dipole smoothing, which provides similar bounds to randomized smoothing with gradient information, but with much-improved sample efficiency. This allows us to achieve (marginally) improved robustness certificates on high-dimensional datasets such as CIFAR-10 and ImageNet. Code is available at https://github.com/alevine0/smoothing_second_order.
[]
[ { "authors": [ "Cem Anil", "James Lucas", "Roger Grosse" ], "title": "Sorting out Lipschitz function approximation", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "Guy Katz", "Clark Barrett", "David L Dill" ], "title": "Provably minimally-distorted adversarial examples", "venue": "arXiv preprint arXiv:1709.10207,", "year": 2017 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mahyar Fazlyab", "Alexander Robey", "Hamed Hassani", "Manfred Morari", "George Pappas" ], "title": "Efficient and accurate estimation of lipschitz constants for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Huijie Feng", "Chunpeng Wu", "Guoyang Chen", "Weifeng Zhang", "Yang Ning" ], "title": "Regularized training and tight certification for randomized smoothed classifier with provable robustness", "venue": "In The ThirtyFourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Relja Arandjelovic", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Xiaowei Huang", "Marta Kwiatkowska", "Sen Wang", "Min Wu" ], "title": "Safety verification of deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Alexander Levine", "Sahil Singla", "Soheil Feizi" ], "title": "Certifiably robust interpretation in deep learning", "venue": "CoRR, abs/1905.12105,", "year": 2019 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Certified adversarial robustness with additive noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qiyang Li", "Saminul Haque", "Cem Anil", "James Lucas", "Roger B Grosse", "Jörn-Henrik Jacobsen" ], "title": "Preventing gradient attenuation in lipschitz constrained convolutional networks", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Jeet Mohapatra", "Ching-Yun Ko", "Tsui-Wei Weng", "Pin-Yu Chen", "Sijia Liu", "Luca Daniel" ], "title": "Higherorder certification for randomized smoothing", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hadi Salman", "Jerry Li", "Ilya Razenshteyn", "Pengchuan Zhang", "Huan Zhang", "Sebastien Bubeck", "Greg Yang" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "In Advances in Neural Information 
Processing Systems,", "year": 2019 }, { "authors": [ "Sahil Singla", "Soheil Feizi" ], "title": "Second-order provable defenses against adversarial attacks", "venue": "In Proceedings of the 37th International Conference on Machine Learning (preproceedings),", "year": 2020 }, { "authors": [ "Vincent Tjeng", "Kai Y. Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Roman Vershynin" ], "title": "High-dimensional probability: An introduction with applications in data science, volume 47", "venue": null, "year": 2018 }, { "authors": [ "Aladin Virmaux", "Kevin Scaman" ], "title": "Lipschitz regularity of deep neural networks: analysis and efficient estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Martin J Wainwright" ], "title": "High-dimensional statistics: A non-asymptotic viewpoint, volume 48", "venue": null, "year": 2019 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Huan Zhang", "Pengchuan Zhang", "Cho-Jui Hsieh" ], "title": "Recurjac: An efficient recursive algorithm for bounding jacobian matrix of neural networks and its applications", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A topic of much recent interest in machine learning has been the design of deep classifiers with provable robustness guarantees. In particular, for an m-class classifier h : Rd → [m], the L2 certification problem for an input x is to find a radius ρ such that, for all δ with ‖δ‖2 < ρ, h(x) = h(x + δ). This robustness certificate serves as a lower bound on the magnitude of any adversarial perturbation of the input that can change the classification: therefore, the certificate is a security guarantee against adversarial attacks.\nThere are many approaches to the certification problem, including exact methods, which compute the precise norm to the decision boundary (Tjeng et al., 2019; Carlini et al., 2017; Huang et al., 2017) as well as methods for which the certificate ρ is merely a lower bound on the distance to the decision boundary (Wong & Kolter, 2018; Gowal et al., 2018; Raghunathan et al., 2018).\nOne approach that belongs to the latter category is Lipschitz function approximation. Recall that a function f : Rd → R is L-Lipschitz if, for all x, x′, |f(x) − f(x′)| ≤ L‖x − x′‖2. If a classifier is known to be a Lipschitz function, this immediately implies a robustness certificate. In particular, consider a binary classification for simplicity, where we use an L-Lipschitz function f as a classifier, using the sign of f(x) as the classification. Then for any input x, we are assured that the classification (i.e, the sign) will remain constant for all x′ within a radius |f(x)|/L of x. Numerous methods for training Lipschitz neural networks with small, known Lipschitz constants have been proposed. (Fazlyab et al., 2019; Zhang et al., 2019; Anil et al., 2019; Li et al., 2019b) It is desirable that the network be as expressive as possible, while still maintaining the desired Lipschitz property. Anil et al. (2019) in particular demonstrates that their proposed method can universally approximate Lipschitz functions, given sufficient network complexity. However, in practice, for the robust certification problem on large-scale input, randomized smoothing (Cohen et al., 2019) is the\ncurrent state-of-the-art method. The key observation of randomized smoothing (as formalized by (Salman et al., 2019; Levine et al., 2019)) is that, for any arbitrary base classifier function f : Rd → [0, 1], the function\nx→ Φ−1(pa) where pa(x) := E ∼N (0,σ2I) f(x + ) (1)\nis (1/σ)-Lipschitz, where N (0, σ2I) is a d-dimensional isometric Gaussian distribution with variance σ2 and Φ−1 is the inverse normal CDF function. As a result, given the smoothed classifier value pa(x) at x, one can calculate the certified radius ρ(x) = σΦ−1(pa(x)) in which pa(x) ≥ 0.5 (i.e., Φ−1(pa(x)) ≥ 0). This means that we can use pa(x) ∈ Rd → [0, 1] as a robust binary classifier (with one class assignment if pa(x) ≥ 0.5, and the other if pa(x) < 0.5). Cohen et al. (2019) shows that this is a tight certificate result for a classifier smoothed with Gaussian noise: given the value of pa(x), there exists a base classifier function f such that, if pa is the Gaussian-smoothed version of f , then there exists an x′ with ‖x− x′‖2 = ρ such that pa(x′) = 0.5. In other words, the certificate provided by (Cohen et al., 2019) is the largest possible certificate for Gaussian smoothing, given only the value of pa(x). 
Previous results (Li et al., 2019a; Lecuyer et al., 2019) provided looser bounds for Gaussian smoothing.\nSingla & Feizi (2020) have recently shown, for shallow neural networks, that, rather than globally bounding the (first-order) Lipschitz constant of the network, it is possible to achieve larger robustness certificates by instead globally bounding the Lipschitz constant of the gradient of the network. This second-order, curvature-based method takes advantage of the fact that the gradient at x can be computed easily via back-propagation, so certificates can make use of both f(x) and ∇_x f(x). This leads to a question: can we also use the gradient of a smoothed classifier, ∇_x p_a(x), to improve smoothing-based certificates? In this work, we show that there is a universal curvature-like bound for all randomly-smoothed classifiers. Therefore, given p_a(x) and ∇_x p_a(x), we can compute larger certificates than is possible using the value of p_a(x) alone. Moreover, our bound is tight in that, given only the pair (p_a(x), ∇_x p_a(x)), the certificate we provide is the largest possible certificate for Gaussian smoothing. We call our certificates “Second-order Smoothing” (SoS) certificates. As shown in Figure 1, the smoothing-based certificates which we can achieve using second-order smoothing represent relatively modest improvements compared to the first-order bounds. This is a meaningful negative result, given the tightness of our bounds, and is therefore useful in guiding (or limiting) future research into higher-order smoothing certificates. Additionally, this result shows that randomized smoothing (or, specifically, functions in the form of Equation 1) cannot be used to universally approximate Lipschitz functions: all randomly smoothed functions will have the additional curvature constraint described in this work.\nIf the base classifier f is a neural network, computing the expectation in Equation 1 analytically is not tractable. Therefore it is standard (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019) to estimate this expectation using N random samples, and to bound the expectation probabilistically. The certificate is then a high-probability, rather than exact, result, using the estimated lower bound of p_a(x). In Section 3.1, we discuss empirical estimation of the gradient norm of a smoothed classifier for second-order certification, and develop an estimator for this quantity, in which the number of samples required to estimate the gradient scales linearly with the dimensionality d of the input [1]. In order to overcome this, in Section 4, we develop a modified form of Gaussian randomized smoothing, Gaussian dipole smoothing, which allows for a dipole certificate, related to the second-order certificate, to be computed. Unlike the second-order certificate, however, the dipole certificate has no explicit dependence on dimensionality in its estimation, and can therefore practically scale to real-world high-dimensional datasets." }, { "heading": "2 PRELIMINARIES, ASSUMPTIONS AND NOTATION", "text": "We use f(x) to represent a generic scalar-valued “base” function to be smoothed. In general, we assume f : R^d → [0, 1]. However, for empirical estimation results (Theorem 3), we assume that f is a “hard” base classifier: f : R^d → {0, 1}. This will be made clear in context. The smoothed version of f is notated as p_a : R^d → [0, 1], defined as in Equation 1. Recall that Φ is the normal CDF function and Φ′ is the normal PDF function.
In randomized smoothing for multi-class problems, the base classifier is typically a vector-valued function f : R^d → {0, 1}^m with Σ_c f_c(x) = 1, where m is the number of classes. The final classification returned by the smoothed classifier is then given by a := argmax_c E_ε[f_c(x + ε)]. However, in most prominent implementations (Cohen et al., 2019; Salman et al., 2019), certificates are computed using only the smoothed value for the estimated top class a, where a is estimated using a small number N_0 of initial random samples, before the final value of p_a(x) is computed using N samples. The certificate then determines the radius in which p_a(x′) will remain above 0.5: this guarantees that a will remain the top class, regardless of the other logits. While some works (Lecuyer et al., 2019; Feng et al., 2020) independently estimate each smoothed logit, this incurs additional estimation error as the number of classes increases. In this work, we assume that only estimates for the top-class smoothed logit p_a(x) and its gradient ∇_x p_a(x) are available (although we briefly discuss the case with more estimated logits in Section 3.2). When discussing empirical estimation, we use η as the accepted probability of failure of an estimation method." }, { "heading": "3 SECOND-ORDER SMOOTHING CERTIFICATE", "text": "We now state our main second-order robustness certificate result:\nTheorem 1. For all x, x′ with ‖x − x′‖_2 < ρ, and for all f : R^d → [0, 1],\np_a(x′) ≥ Φ(Φ^{−1}(a′ + p_a(x)) − ρ/σ) − Φ(Φ^{−1}(a′) − ρ/σ), (2)\nwhere a′ is the (unique) solution to\nΦ′(Φ^{−1}(a′)) − Φ′(Φ^{−1}(a′ + p_a(x))) = −σ‖∇_x p_a(x)‖_2. (3)\nFurther, for all pairs (p_a(x), ‖∇_x p_a(x)‖_2) which are possible, there exists a base classifier f and an adversarial point x′ such that Equation 2 is an equality. This implies that our certificate is realizable, and therefore tight.\nNote that the right-hand side of Equation 2 is monotonically decreasing with ρ: we can then compute a robustness certificate by simply setting p_a(x′) = 0.5 and solving for the certified radius ρ. Also, a′ can be computed easily, because the left-hand side of Equation 3 is monotonic in a′. Evaluated certificate values are shown in Figure 1-b, and compared with first-order certificates.\n[1] In a concurrent work initially distributed after the submission of this work, Mohapatra et al. (2020) have proposed an identical second-order smoothing certificate, along with a tighter empirical estimator for the gradient norm. In this estimator, the number of samples required scales with √d.\nAll proofs are presented in Appendix A. Like in Cohen et al. (2019), we proceed by constructing the worst-case base classifier f given p_a(x) and ‖∇_x p_a(x)‖_2. This is the base classifier f which creates an adversarial point of the smoothed classifier as close as possible to x, given the constraints that p_a(x) and ‖∇_x p_a(x)‖_2 are equal to their reported values. In Cohen et al. (2019), given only p_a(x), this is simply a linear classifier. With the gradient norm, the worst case is that x lies in a region with class a which is a slice between two linear decision boundaries, both perpendicular to ∇_x p_a(x). See Figure 3. Note that, by isometry, and because ∇_x p_a(x) is the only vector information we have, there is no benefit in certified radius to having the direction of ∇_x p_a(x): the norm is sufficient. In the case of a linear classifier, the gradient takes its maximum possible value: ‖∇_x p_a(x)‖_2 = σ^{−1}Φ′(Φ^{−1}(p_a(x))). (A numerical sketch of the Theorem 1 certification procedure follows.)
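The following sketch evaluates Theorem 1 numerically using the two monotonicity facts noted above; the names and the use of scipy's brentq root-finder are our own choices, and the supplied gradient norm is assumed not to exceed its maximum achievable value σ^{−1}Φ′(Φ^{−1}(p_a(x))), attained by a linear classifier:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def solve_a_prime(pa, grad_norm, sigma):
    """Solve Eq. (3) for a' by root-finding; the LHS is monotonic in a'."""
    lhs = lambda a: norm.pdf(norm.ppf(a)) - norm.pdf(norm.ppf(a + pa))
    target = -sigma * grad_norm
    return brentq(lambda a: lhs(a) - target, 1e-12, 1.0 - pa - 1e-12)

def sos_certified_radius(pa, grad_norm, sigma, rho_max=20.0):
    """Theorem 1 certificate: largest rho with the Eq. (2) lower bound >= 0.5."""
    a = solve_a_prime(pa, grad_norm, sigma)
    # RHS of Eq. (2), monotonically decreasing in rho.
    lower = lambda rho: (norm.cdf(norm.ppf(a + pa) - rho / sigma)
                         - norm.cdf(norm.ppf(a) - rho / sigma))
    if lower(0.0) <= 0.5:            # lower(0) = pa, so this is pa <= 0.5
        return 0.0
    return brentq(lambda rho: lower(rho) - 0.5, 0.0, rho_max)
```

For example, with σ = 1 and p_a(x) = 0.6, sos_certified_radius(0.6, 0.0, 1.0) gives a radius of roughly 0.68, versus σΦ^{−1}(0.6) ≈ 0.25 for the first-order certificate at the same p_a.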
This case is shown in Figure 3-a: if the gradient norm is equal to this value, the second-order certificate is identical to the first-order certificate (Cohen et al., 2019). However, if the gradient norm is smaller, then we cannot be in this\nworst-case linear-classifier scenario. Instead, the new “worst case” is constructed by introducing a second “wrong class” region opposite to the direction of the adversarial point (Figure 3-b). In the extreme case (Figure 3-c) where the gradient norm is zero, this is accomplished by balancing two adversarial regions in a “sandwich” around x.\nThis “sandwich” configuration reveals the relative weakness of gradient information in improving robustness certificates: having zero gradient does not require that the adversarial regions be evenly distributed around x. Rather, it is sufficient to distribute the adversarial probability mass 1− pa(x) into just two adversarial regions. Therefore, the certified radius, even in this most extreme case, is\nsimilar to the Cohen et al. (2019) certificate in the case with half as much adversarial probability mass (the first-order certificate for pa(x) := (1 + pa(x))/2). This can be seen in Figure 1-b: note that at pa(x) = 0.6, if the gradient norm is known to be zero, the certificate is slightly below the certificate for pa(x) = 0.8 with no gradient information. The second-order certificate when (pa(x) = 0.6, ‖∇xpa(x)|‖2 = 0) is in fact slightly below the first-order certificate for pa(x) = 0.8, because the Gaussian noise samples throughout all of space, so the smoothed classifier decision boundary is slightly affected by the adversarial region in the opposite direction of x.\nBecause we can explicitly construct “worst-case” classifiers which represent the equality case of Equation 2, our certificates are known to be tight: the reported certified radii are the largest possible certificates, if only pa(x) and ‖∇pa(x)‖2 are known. In Figure 2, we show how our second-order certificate behaves on a simple, two-dimensional, nonlinearly separable dataset, the classic Swiss Roll. The increases are marginal, mostly because the certificates using standard randomized smoothing are already fairly tight. On these data, the certified radii for the two classes are nearly touching in many places along the decision boundary. However, for the blue class, which is surrounded on multiple sides by the red class, there are noticeable increases in the certified radius. This is especially true for points near the center of the blue class, which are at the “top of the hill” of the blue class probability, and therefore have smaller gradient." }, { "heading": "3.1 GRADIENT NORM ESTIMATION", "text": "In order to use the second-order certificate in practice, we must first bound, with high-probability, the gradient norm ‖∇xpa(x)‖2 using samples from the base classifier f . Because Theorem 1 provides certificates that are strictly decreasing with ‖∇xpa(x)‖2, it is only necessary to lower bound ‖∇xpa(x)‖2 with high probability. Salman et al. (2019) suggest two ways of approximate the gradient vector ∇xpa(x) itself, both based on the following important observation:\n∇xpa(x) = E ∼N (0,σ2I) [∇xf(x + )] = E ∼N (0,σ2I) [ f(x + )]/σ2 (4)\nThese two methods are:\n1. At each sampled point, one can measure the gradient of f using back-propagation, and take the mean vector of these estimates.\n2. At each sampled point, one can multiply f(x + ) by the noise vector , and take the mean vector of these estimates.\nNote, however, that Salman et al. 
(2019) do not provide statistical bounds on these estimates: for our certificate application, we must do so. While we ultimately use an approach based on method 2, we will first briefly discuss method 1. The major obstacle to using method 1 is that it requires the base classifier f itself to be a Lipschitz function, with a small Lipschitz constant. This can be understood from Markov's inequality. For example, consider the value of some component z(x) := u · ∇f(x), where u is an arbitrary vector. Suppose N samples are taken, but that z is distributed such that:\nz(x + ε) = 0 with probability 1 − 1/(2N), and z(x + ε) = 2N with probability 1/(2N). (5)\nThis would be the case if f is a function that approximates a step function from 0 to 1, with a small buffer region of very high slope, for example. Note that the probability that any of the N samples measures the nonzero gradient component is < 0.5, but the expected value of this component is in fact 1.0. This example shows that, in order to accurately estimate the gradient with high probability, the number of samples used must at least scale linearly with the maximum possible value of the gradient norm of f. For unrestricted deep neural networks, Lipschitz constants are NP-hard to compute, and upper bounds on them are typically very large (Virmaux & Scaman, 2018). Of course, we could use Lipschitz-constrained networks as described in Section 1 for the base classifier, but this would defeat the purpose of using randomized smoothing in the first place. Moreover, in standard “hard” randomized smoothing as typically implemented (Cohen et al., 2019; Salman et al., 2019), the range of f is {0, 1}, so f is non-differentiable: therefore, this back-propagation method cannot be used at all.\nWe therefore use method 2. In particular, we reject the naive approach of estimating each component independently, taking a union bound, and then taking the norm: not only would the error in the norm-squared scale with d as the error from each component accumulates, but there would be an additional dependence on d from the union bound: each component would have to be bounded with failure probability η/d, where η is the total failure probability for measuring the gradient norm. Note that this issue would also be encountered in method 1 above, but in that case, a loose upper bound could at least be achieved without this dependency using Jensen's inequality (the mean of the norms of the gradient is larger than the norm of the mean).\nInstead, we estimate the norm-squared of the mean using a single, unbiased estimator. Note that:\n‖∇_x E_ε[f(x + ε)]‖_2^2 = σ^{−4} E_ε[ε f(x + ε)] · E_ε[ε f(x + ε)] = σ^{−4} E_ε[ε f(x + ε)] · E_{ε′}[ε′ f(x + ε′)] = σ^{−4} E_{ε,ε′}[(ε f(x + ε)) · (ε′ f(x + ε′))]. (6)\nIn other words, we can estimate the norm-squared of the mean by taking pairs of smoothing samples, and taking the dot product of the noise vectors times the product of the sampled values. We show that this is a subexponential random variable (see Appendix), which gives us an asymptotically linear scaling of N with d:\nTheorem 2. Let V := E_{ε,ε′}[(ε f(x + ε)) · (ε′ f(x + ε′))], and let Ṽ be its empirical estimate. If n pairs of samples (= N/2) are used to estimate V, then, with probability at most η, E[V] − Ṽ ≥ t, where:\nt = 4σ^2 √(−(d/n) ln(η)) if −2 ln(η) ≤ dn, and t = −(4√2 σ^2 / n) ln(η) if −2 ln(η) > dn. (7)\nNote that in practice, we can use the same samples to estimate ‖∇_x p_a(x)‖_2 as are used to estimate p_a(x). However, this requires reducing the failure probability of each estimate to η′ = η/2, in order to use a union bound. A sketch of this pair-based estimator and deviation bound is given below.
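A sketch of the pair-based estimator might look as follows. It is an illustration under our reading of Theorem 2: the theorem controls the event E[V] − Ṽ ≥ t, so Ṽ + t bounds E[V] = σ^4‖∇_x p_a(x)‖_2^2 from one side with probability at least 1 − η; the function name is hypothetical and not from the authors' code:

```python
import numpy as np

def grad_norm_bound(f, x, sigma, n_pairs, eta, seed=0):
    """Pair-based bound on ||grad_x p_a(x)||_2 via Eq. (6) and Theorem 2.

    f is a hard {0, 1} base classifier; uses n_pairs independent sample pairs.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    v_hat = 0.0
    for _ in range(n_pairs):
        e1 = rng.normal(0.0, sigma, size=x.shape)
        e2 = rng.normal(0.0, sigma, size=x.shape)
        # (eps . eps') * f(x + eps) * f(x + eps'), as in Eq. (6).
        v_hat += float(np.dot(e1.ravel(), e2.ravel())) * f(x + e1) * f(x + e2)
    v_hat /= n_pairs
    # Deviation t from Eq. (7): P(E[V] - v_hat >= t) <= eta.
    if -2.0 * np.log(eta) <= d * n_pairs:
        t = 4.0 * sigma**2 * np.sqrt(-(d / n_pairs) * np.log(eta))
    else:
        t = -(4.0 * np.sqrt(2.0) * sigma**2 / n_pairs) * np.log(eta)
    # With probability >= 1 - eta, E[V] = sigma^4 * ||grad||^2 is below v_hat + t.
    return np.sqrt(max(v_hat + t, 0.0)) / sigma**2
```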
This means that, if N is small (or d large), second-order smoothing can in fact give worse certificates than standard smoothing, because the benefit of a (loose, for N small) estimate of the gradient is less significant than the negative effect of reducing the estimate of pa(x). As shown in Figure 4-a, even for very large N and relatively small dimension, the empirical estimation significantly reduces the radii of certificates which can be calculated. See Section 5 for experimental results." }, { "heading": "3.2 UPPER-BOUND AND MULTI-CLASS CERTIFICATES", "text": "We can easily convert Theorem 1 into a tight upper bound on pa(x′) by simply evaluating it for f ′ = 1−f (and therefore p′a = 1−pa). If estimates and gradients are available for multiple classes,\nit would then be possible to achieve an even larger certificate, by setting the lower bound of the top logit equal to the upper bounds of each of the other logits. Note, however, that unlike first-order smoothing works (Lecuyer et al., 2019; Feng et al., 2020) which use this approach, it is not sufficient to compare against just the “‘runner-up” class, because other logits may have less restrictive upper-bounds due to having larger gradients. As discussed above, gradient norm estimation can be computationally expensive, so gradient estimation for many classes may not be feasible. Also, note that while this approach would produce larger, correct certificates, we do not claim that these would be tight certificates given the value and gradient information for all classes: the “worst case” constructions we describe above for a single logit might not be simultaneously construct-able for multiple logits." }, { "heading": "4 DIPOLE SMOOTHING", "text": "For large-scale image datasets, the dependence on d in Theorem 2 can create statistical barriers. However, the general approach of second-order smoothing, especially using the discrete estimation method (method 2) described above, has an interesting interpretation: rather than using simply the mean of f(x + ), we are also using the geometrical distribution of the values of f(x + ) in space to compute a larger certified bound. In particular, if we can show that points which are adversarial for the base classifier (points with f(x + ) = 0) are dispersed, then this will imply larger certificates, because it makes it impossible for a perturbation in a single direction to move x towards the adversarial region. Second-order smoothing, above, is merely an example of this.\nWe therefore introduce Gaussian Dipole smoothing. This is a method which, like second-order smoothing, also harnesses the geometrical distribution of the values of f(x) to improve certificates. However, unlike second-order smoothing, there is no explicit dependence on d in the empirical dipole smoothing bound. In this method, when we sample f(x+ ) when estimating pa(x), we also sample f(x− ). This allows us to compute two quantities:\nCS := E [f(x + )f(x− )] CN := E [f(x + )− f(x + )f(x− )]\n(8)\nThe certificate we can calculate is then as follows: Theorem 3. For all x,x′ with ‖x− x′‖2 < ρ, and for all f : Rd → [0, 1],\npa(x ′) ≥ Φ ( Φ−1(CN )− ρ\nσ\n) + Φ ( Φ−1( 1 + CS\n2 )− ρ σ\n) − Φ ( Φ−1( 1− CS\n2 )− ρ σ\n) (9)\nWe also compute this bound by constructing the worst possible classifier. In this case, the trick is that, if two adversarial sampled points are opposite one another (i.e., f(x+ ) = f(x− ) = 0) then they cannot both contribute to the same adversarial “direction”. 
In the worst case, the “reflected” adversarial points form a plane opposite the base classifier decision boundary (See Figure 4-b). In the extreme case where CN = 0, the “worst case” classifier is the same as for second-order smoothing.\nExperimentally, we simply need to lower-bound both CS and CN from samples. This reduces the precision of our estimates, for two reasons: we have half as many independent samples for the same number of evaluations we must perform, and we are bounding two quantities, which requires halving the error probability for each. However, unlike second-order smoothing, there is no dependence on d: this allows for practical certificates of real-world datasets." }, { "heading": "5 EXPERIMENTS", "text": "Experimental results are presented in Figures 5 and 6, with further results in Appendix B. Because both dipole and second-order certificates reduce the precision with which empirical quantities needed for certification can be estimated, but both provide strictly larger certificates at the population level, the key question becomes at what number of samples N does each higher-order method\nbecome beneficial. Note that in the figures, we are comparing the new methods to standard smoothing, using the same N for standard smoothing as for the new method. Due to the poor scaling of second-order certificates with dimension, we tested second-order smoothing on a low-dimensional dataset, 7 × 7 MNIST. However, significant increases to certificates were not seen until N = 107 even on this dataset. By contrast, dipole smoothing is beneficial for many images even when smaller numbers of smoothing samples are used. Because it scales to higher-dimensional data, we also tested Gaussian dipole smoothing on CIFAR-10 and ImageNet, where it led to modest improvements in certificates, in particular at N = 106. In Appendix C, we show the absolute, rather than relative, certified accuracy curves for the experiments shown in Figures 5 and 6. These plot show that higher-order smoothing techniques (SoS and Gaussian Dipole smoothing) are mostly beneficial for increasing the certificates of images with small certified radii. In cases where certificates are already large, increased estimation error can lead to a decrease in certificates, but this effect is small relative to the magnitudes of these certificates (typically < 1%)." }, { "heading": "6 CONCLUSION", "text": "In this work, we explored the limits of using gradient information to improve randomized smoothing certificates. In particular, we introduced second-order smoothing certificates and showed tight and realizable upper bounds on their maximum achievable benefits. We also proposed Gaussian dipole smoothing, a novel method for robustness certification, which can improve smoothing-based robustness certificates even on large-scale data sets. This introduces a broader question for future work: what other information about the spacial distribution of classes in randomized smoothing can be efficiently used to improve robustness certificates?" }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 SIMPLE PROOF OF THEOREM 1 FROM COHEN ET AL. (2019)", "text": "We first provide a novel, simple, and intuitive proof for first-order randomized smoothing: this will allow us to develop methods and notations for later proofs. Theorem. Let ∼ N (0, σ2I). For all x,x′ with ‖x− x′‖2 < ρ, and for all f : Rd → [0, 1]:\nE [f(x′ + )] ≥ Φ ( Φ−1 (E [f(x + )])− ρ\nσ\n) (10)\nWhere Φ is the normal cdf function and Φ−1 is its inverse.\nProof. Let R = ‖x− x′‖2. 
Choose our basis so that x = 0 and x′ = [R, 0, 0, ..., 0]T (Note that by isometry, we still have ∼ N (0, σ2I)). Then define g : R→ [0, 1]:\ng(z) = E 2,..., n [f([z, 2, ..., n]T )] (11)\nNote that:\nE 1 [g( 1)] = E [f(x + )] E 1 [g(R+ 1)] = E [f(x′ + )]\n(12)\nNow, in one dimension, 1 ∼ N (0, σ), and so has a pdf of z → σ−1Φ′ ( z σ ) , where Φ′ is the normal pdf function. By the definition of expected value:\nE 1 [g( 1)] = ∫ ∞ −∞ g( 1)σ −1Φ′ ( 1 σ ) d 1\nE 1 [g(R+ 1)] = ∫ ∞ −∞ g(R+ 1)σ −1Φ′ ( 1 σ ) d 1 = ∫ ∞ −∞ g( 1)σ −1Φ′ ( 1 σ − R σ ) d 1 (13)\nWe perform a change of integration variables, using y = Φ ( 1 σ ) (and noting that dyd 1 =\nσ−1Φ′ ( 1 σ ) ):\nE 1 [g( 1)] = ∫ 1\n0\ng(σΦ−1(y))σ−1Φ′ ( 1 σ ) d 1 dy dy = ∫ 1 0 g(σΦ−1(y))dy\nE 1 [g(R+ 1)] = ∫ 1\n0\ng(σΦ−1(y))σ−1Φ′ ( 1 σ − R σ ) d 1 dy dy\n= ∫ 1 0 g(σΦ−1(y)) Φ′ ( Φ−1(y)− Rσ ) Φ′ (Φ−1(y)) dy\n(14)\nNote that: Φ′ ( Φ−1(y)− Rσ ) Φ′ (Φ−1(y)) = e− 1 2 (Φ −1(y)−Rσ ) 2 e− 1 2 (Φ −1(y))2 = eΦ −1(y)− R2 2σ2 (15)\nAlso, to simplify notation, define gΦ : [0, 1] → [0, 1] as gΦ(y) := g(σΦ−1(y)). Then we have (combining Equations 14 and 15):\nE 1 [g( 1)] = ∫ 1\n0\ngΦ(y)dy\nE 1 [g(R+ 1)] = ∫ 1\n0\ngΦ(y)e Φ−1(y)− R2 2σ2 dy\n(16)\nFix the expectation at x, E 1 [g( 1)], at a constantC, let us consider the function gΦ which minimizes the expectation at x′:\nE 1 [g(R+ 1)] ≥ min gΦ∈[0,1]→[0,1]∫ 1\n0 gΦ(y)dy=C\n∫ 1 0 gΦ(y)e Φ−1(y)− R2 2σ2 dy (17)\nHowever, note that eΦ −1(y)− R2 2σ2 increases monotonically with y. Then the minimum is achieved at:\ng∗Φ(y) = { 1 if y ≤ C 0 if y > C (18)\nIn terms of the function g(z), this is:\ng∗(z) = { 1 if z ≤ σΦ−1(C) 0 if z > σΦ−1(C) (19)\nThen we can evaluate the minimum, using the form of the integral given in Equation 13: E 1 [g(R+ 1)] ≥ ∫ ∞ −∞ g∗( 1)σ −1Φ′ ( 1 σ − R σ ) d 1\n= ∫ σΦ−1(C) −∞ σ−1Φ′ ( 1 σ − R σ ) d 1\n= Φ\n( σΦ−1(C)\nσ − R σ\n) − Φ ( −∞ σ − R σ ) = Φ ( Φ−1(C)− R\nσ\n) (20)\nBy the definition of C and Equation 12, this is: E [f(x′ + )] ≥ Φ ( Φ−1 (E [f(x + )])− R\nσ\n) ≥ Φ ( Φ−1 (E [f(x + )])− ρ\nσ\n) (21)\nwhich was to be proven." }, { "heading": "A.2 SECOND ORDER SMOOTHING", "text": "Theorem 1. Let ∼ N (0, σ2I). For all x,x′ with ‖x− x′‖2 < ρ, and for all f : Rd → [0, 1], E 1 [f(x′ + )] ≥ Φ ( Φ−1(a′ + E [f(x + )])− ρ\nσ\n) − Φ ( Φ−1(a′)− ρ\nσ\n) (22)\nWhere Φ is the normal cdf function, Φ−1 is its inverse, and a′ is the (unique) solution to\nΦ′(Φ−1(a′))− Φ′(Φ−1(a′ + E [f(x + )])) = −σ‖∇xE [f(x + )]‖2 (23) Further, for all pairs (E [f(x + )]), ‖∇xE [f(x + )]‖2) which are possible, there exists a base classifier f and an adversarial point x′ such that Equation 4 is an equality.\nAs show by Salman et al. (2019), we have, for all x′′ ∈ Rd: ∇x′′E [f(x′′ + )] = σ−2E [ f(x′′ + )] (24)\nUnder the choice of basis of the above proof (in particular, x = 0), when evaluated at x this becomes:\n∇xE [f(x + )] = σ−2E [ f( )] (25)\nLet u := [1, 0, 0, ..., 0]T , and define g as in the above proof. 
Note that:\n−‖∇xE [f(x + )]‖2 ≤u · ∇xE [f(x + )] =u · σ−2E [ f( )] =σ−2E [ 1f( )] =σ−2E 1 [ 1[E 2,.., df( )]] =σ−2E 1 [ 1g( 1)]\n(26)\nBy the definition of expectation, and again using the change of integration variables y := Φ ( 1 σ ) ,\nE 1 [ 1g( 1)] = ∫ 1\n0\nσΦ−1(y)g(σΦ−1(y))σ−1Φ′ ( 1 σ ) d 1 dy dy\n= ∫ 1 0 σΦ−1(y)g(σΦ−1(y))dy\n(27)\nDefine: C := E 1 [g( 1)] (= E [f(x + )) C ′ := E 1 [ 1g( 1)] (≥ −σ2‖∇xE [f(x + )]‖2)\n(28)\nThen, by Equations 16 and 27, and defining gΦ, as above,\nE 1 [g(R+ 1)] ≥ min gΦ∈[0,1]→[0,1]∫ 1\n0 gΦ(y)dy=C∫ 1\n0 σΦ−1(y)gΦ(y)dy=C ′\n∫ 1 0 gΦ(y)e Φ−1(y)− R2 2σ2 dy (29)\nNote that our constraints are linear in the space of functions; we can then introduce Lagrange multipliers:\nmin gΦ∈[0,1]→[0,1] ∫ 1 0 gΦ(y)e Φ−1(y)− R2 2σ2 dy − λ1 (∫ 1 0 gΦ(y)dy − C )\n− λ2 (∫ 1\n0\nσΦ−1(y)gΦ(y)dy − C ′ ) (30)\nmin gΦ∈[0,1]→[0,1] ∫ 1 0 gΦ(y)e Φ−1(y)− R2 2σ2 dy − λ1gΦ(y)− λ2σΦ−1(y)gΦ(y)dy + constants (31)\nmin gΦ∈[0,1]→[0,1] ∫ 1 0 gΦ(y) ( eΦ −1(y)− R2 2σ2 − λ1 − λ2σΦ−1(y) ) dy + constants\nThis is simply the inner product between gΦ and a function: the inner product is minimized by setting gΦ = 1 where the expression is negative, and gΦ = 0 where the expression is positive:\ng∗Φ(y) =\n{ 1 if eΦ −1(y)− R2 2σ2 ≤ λ1 + λ2σΦ−1(y)\n0 if eΦ −1(y)− R2 2σ2 > λ1 + λ2σΦ −1(y)\n(32)\nSign changes occur at:\nΦ−1(y) = −W −e− R22σ2−λ1λ2 λ2 − λ1 λ2\n(33)\nWhere W is the product-log (Lambert W) function. This returns zero, one, or two possible values, depending on the argument (zero values if the argument < −e−1, two values on [−e−1, 0), and one value on non-negative arguments). Therefore there are at most two sign changes. Also, note that as y → 1, Φ−1(y)→∞, taking the limit in Equation 32, we know that\ng∗Φ(1) = 0. Therefore, taking into account the constraint ∫ 1\n0 gΦ(y)dy = C, we know that g∗Φ is either:\n• 0 everywhere (and C = 0), if no sign changes • 1 at y < C, 0 otherwise, if one sign change • 1 on the interval [a, a+ C], 0 otherwise, for some a ∈ [0, 1− C], if two sign changes\nIn fact, the final case includes the first two, so all that we need to do now is find a to satisfy the C ′ constraint. This constraint (Equation 27) becomes:∫ a+C\na\nΦ−1(y)dy = C ′\nσ (34)\nBecause Φ−1(y) is monotone increasing, the LHS of Equation 34 is a monotone increasing function of a. Using the indefinite integral of Φ−1:∫\nΦ−1(y)dy = ∫ √ 2 erf−1(2y − 1)dy = − 1√\n2π e−(erf −1(2y−1))2 + C = −Φ′(Φ−1(y)) + C\nWhere erf−1 is the inverse error function. Then the constraint becomes:\nΦ′(Φ−1(a))− Φ′(Φ−1(a+ C)) = C ′\nσ (35)\nWe can now evaluate the value of the smoothed function at x′, again using the form of the integral given in Equation 13:\nE 1 [g(R+ 1)] ≥ ∫ ∞ −∞ g∗( 1)σ −1Φ′ ( 1 σ − R σ ) d 1\n= ∫ σΦ−1(a+C) σΦ−1(a) σ−1Φ′ ( 1 σ − R σ ) d 1\n= Φ ( Φ−1(a+ C)− R\nσ\n) − Φ ( Φ−1(a)− R\nσ\n) (36)\nIf we consider the form of this integral in Equation 29, which reduces to simply:∫ a+C a eΦ −1(y)− R2 2σ2 dy (37) we see that is is a monotonically increasing function in a. Furthermore, we have that the LHS of Equation 35 is monotonic in a. Therefore, if we define a′ as the solution to: Φ′(Φ−1(a′))− Φ′(Φ−1(a′ + C)) = −σ‖∇xE [f(x + )]‖2 (38) Then by Equation 26, we know a′ ≤ a. 
Then because the RHS of Equation36 is also monotonic in a,\nE 1 [g(R+ 1)] ≥ Φ (\nΦ−1(a′ + C)− R σ\n) − Φ ( Φ−1(a′)− R\nσ\n) (39)\nBy the definitions of g and C, and because the RHS is monotonically decreasing in R (See Equation 37), we can conclude the theorem as stated.\nFurther, we can conclude that an equality case is possible by noting that it is achieved by the function g∗(z) as described above: the minimal f∗(z) can then be constructed as f∗(z, ·, ·, ...) := g∗(z). Note that we also need Equation 26 to be tight: this is achieved where the adversarial direction x′ − x is parallel to the gradient of the smoothed function." }, { "heading": "A.3 PRACTICAL CERTIFICATION ALGORITHM", "text": "Define C := lower bound on C\nC ′ := upper bound on C ′ (40)\nNote that ‖∇xE [f(x + )]‖22 =\nσ−4E [ f( )] · E [ f( )] = σ−4E [ f( )] · E ′ [ ′f( ′)] = σ−4E , ′ [( f( )) · ( ′f( ′))]\n(41)\nTheorem 2. Let V := E , ′ [( f(x+ )) · ( ′f(x+ ′))], and Ṽ be its empirical estimate. If n pairs of samples (= N/2) are used to estimate V , then, with probability at most η, E[V ]− Ṽ ≥ t, where:\nt =\n{ 4σ2 √ − dn ln(η) if −2 ln(η) ≤ dn\n− 4 √ 2σ2 n ln(η) if −2 ln(η) > dn (42)\nWainwright (2019) gives the following condition for any centered (mean-zero) sub-exponential random variable X: Definition 1. A centered R.V. X is (a,b)-subexponential if:\nE[eλX ] ≤ ea 2λ2/2, ∀ λ ∈ [−b−1, b−1] (43)\nFirst, we establish bounds for · ′. (This can be considered a simplified case of Gaussian chaos of the second order, see Vershynin (2018)). For each i ∈ [d], i and ′i are independent Gaussian random variables. Recall the moment-generating function for a Gaussian:\nE[et i ] = eσ 2t2/2 ∀ t (44)\nThen for the product i ′i we have that:\nE[eλ i ′ i ] = E iE ′i [[e λ i ′ i ]] = E i [e 2 i (λ 2σ2/2)] (45)\nNote that this has a similar form to the moment generating function of the Chi-squared distribution for k = 1:\nE i [e 2 i t] = 1√ 1− 2σ2t\n(46)\nThen: E[eλ i ′ i ] = E i [e 2 i (λ 2σ2/2)] = 1√\n1− λ2σ4 ≤ eλ 2σ4 ∀λ2σ4 ≤ 1 2\n(47)\nWhere the final inequality can be shown by observing that, if λ2σ4 ≤ 1/2:\n1\n1− λ2σ4 = 1 +\nλ2σ4\n1− λ2σ4 ≤ 1 + 2λ2σ4 ≤ e2λ\n2σ4 (48)\nand taking square roots. Because i ′i is centered, this implies that i ′ i is (\n√ 2σ2, √ 2σ2)-\nsubexponential. Now, · ′ is simply the sum of d such identical, independent, centered subexponential variables: by (Wainwright (2019) Equation 2.18), we conclude that · ′ is ( √ 2σ2 √ d, √\n2σ2)subexponential. This implies:\nE[eλ · ′ ] ≤ e2σ 4dλ2/2, ∀ λ ∈ [−( √ 2σ2)−1, ( √ 2σ2)−1] (49)\nRecall that the quantity which we are measuring is ( f( )) · ( ′f( ′)). For notation convenience, let v( , ′) : Rd × Rd → {0, 1} be defined as f( )f( ′), so that the quantity of interest is\nV := · ′v( , ′) (50)\nWe further define a centered version of this quantity\nV ′ := V − E[V ] (51)\nWe now introduce an important lemma: Lemma 1. V ′ is (2 √ 2σ2 √ d, 2 √ 2σ2)-subexponential.\nProof. 
Define p := Pr\n, ′ [v( , ′) = 1] (52)\nThen: V = · ′v( , ′) = · ′ − (1− v( , ′)) · ′ (53)\nE[V ] = E[ · ′]− E[(1− v( , ′)) · ′] = −E[(1− v( , ′)) · ′] (54)\nE[V ] = E[ · ′v( , ′)] = pE[ · ′|v( , ′) = 1] = −(1− p)E[ · ′|v( , ′) = 0] (55) Therefore\nV ′ = · ′v( , ′)− E[ · ′v( , ′)] = · ′v( , ′)− v( , ′)E[ · ′v( , ′)]− (1− v( , ′))E[ · ′v( , ′)]\n(56)\nV ′ = · ′v( , ′)− pv( , ′)E[ · ′v( , ′)|v( , ′) = 1] + (1− p)(1− v( , ′))E[ · ′v( , ′)|v( , ′) = 0]\n(57)\nDefine: A := · ′v( , ′) + (1− v( , ′))E[ · ′|v( , ′) = 0] (58)\nB := −pv( , ′)E[ · ′v( , ′)|( , ′) = 1]− p(1− v( , ′))E[ · ′|v = 0] (59) V ′ = A+B (60)\nTrivially, we have:\nE[eλ · ′ ] = pE[eλ · ′ |v( , ′) = 1] + (1− p)E[eλ · ′ |v( , ′) = 0] (61)\nHowever, note that also: E[eλA] = pE[eλA|v( , ′) = 1] + (1− p)E[eλA|v( , ′) = 0]\nE[eλA] = pE[eλ · ′ |v( , ′) = 1] + (1− p)eλE[ ·\n′|v( , ′)=0] (62)\nThen by Jensen’s inequality, we have:\nE[eλA] ≤ E[eλ · ′ ] ∀λ (63)\nSimilarly: E[eλB ] = pE[eλB |v( , ′) = 1] + (1− p)E[eλB |v( , ′) = 0]\nE[eλB ] = pe−pλE[ · ′|v( , ′)=1] + (1− p)e−pλE[ ·\n′|v( , ′)=0] (64)\nAgain, by Jensen’s inequality: E[eλB ] ≤ E[e−pλ · ′ ] ∀λ (65)\nE[eλB ] ≤ E[e−pλ · ′ ] ≤ e2σ 4dp2λ2/2 ≤ e2σ 4dλ2/2, ∀ − pλ ∈ [−( √ 2σ2)−1, ( √ 2σ2)−1] (66)\nBecause p ≤ 1, we then have: E[eλB ] ≤ e2σ 4dλ2/2, ∀ λ ∈ [−( √ 2σ2)−1, ( √ 2σ2)−1] (67) In other words, we have shown that bothA andB are both ( √ 2σ2 √ d, √\n2σ2)-subexponential. Then, by Cauchy-Schwartz:\nE[eλV ′ ]\n= E[eλAeλB ] ≤ √ E[e2λA]E[e2λB ]\n≤ √ e8σ4dλ2/2e8σ4dλ2/2 ∀2λ ∈ [−( √ 2σ2)−1, ( √ 2σ2)−1]\n(68)\nE[eλV ′ ] ≤ e8σ 4dλ2/2 ∀λ ∈ [−(2 √ 2σ2)−1, (2 √\n2σ2)−1] (69) In other words, V ′ is (2 √ 2σ2 √ d, 2 √ 2σ2)-subexponential.\nFinally, using the form of the one-sided Bernstein tail bound for subexponential random variables given in Wainwright (2019), we have, given n measurements and an empirical mean estimate of V as Ṽ :\nPr(E[V ]− Ṽ ≥ t) ≤\n{ e −nt2 16dσ4 if t ≤ 2 √ 2σ2d\ne −nt 4 √ 2σ2 if t > 2 √ 2σ2d (70)\nThen, given a failure rate η, we can compute the minimum deviation t such that the failure probability is less than η:\nt =\n{ 4σ2 √ − dn ln(η) if −2 ln(η) ≤ dn\n− 4 √ 2σ2 n ln(η) if −2 ln(η) > dn (71)" }, { "heading": "A.4 DIPOLE SMOOTHING", "text": "Theorem 3. Let ∼ N (0, σ2I). For all x,x′ with ‖x − x′‖2 < ρ, and for all f : Rd → [0, 1], define:\nCS := E [f(x + )f(x− )] CN := E [f(x + )− f(x + )f(x− )]\n(72)\nThen:\nE 1 [f(x′ + )] ≥ Φ (\nΦ−1(CN )− ρ σ ) + Φ ( Φ−1( 1 + CS\n2 )− ρ σ ) − Φ ( Φ−1( 1− CS\n2 )− ρ σ\n) (73)\nWhere Φ is the normal cdf function and Φ−1 is its inverse.\nProof. As in the proof of Theorem A.1, let R = ‖x− x′‖2, and choose our basis so that x = 0 and x′ = [R, 0, 0, ..., 0]T . First, for f : Rd → [0, 1], we define a decomposition into symmetric and non-symmetric components, fS , fN : Rd → [0, 1]:\nfS( ) := f( )f(− ) fN ( ) := f( )− f( )f(− )\n(74)\nNote that f( ) = fS( ) + fN ( ) and also that fS( ) = fS(− ). Define gS(z), gN (z) : R→ [0, 1] by analogy to Equation 12. By linearity of expectation, note that g(z) = gS(z) + gN (z). Also note that:\ngS(−z) = E 2,..., n [fS([−z, 2, ..., n]T )] = E− 2,...,− n [[fS([−z,− 2, ...,− n]T )] = E 2,..., n [[fS([−z,− 2, ...,− n]T )] = E 2,..., n [[fS([z, 2, ..., n]T )] = gS(z)\n(75)\nSimilarly, define gSΦ and g N Φ . We still have:\ngΦ(y) = g(σΦ −1(y)) = gS(σΦ−1(y)) + gN (σΦ−1(y)) = gSΦ(y) + g N Φ (y) (76)\nAlso (using Φ−1(y) = −Φ−1(1− y)):\ngSΦ(y) = g S(σΦ−1(y)) = gS(−σΦ−1(1− y)) = gS(σΦ−1(1− y)) = gSΦ(1− y) (77)\nNote that all of the mechanics of the proof of Theorem 1 can be applied to f, fS and fN . 
Following Equation 13, we have:\nCS = E [fS(x + )] = ∫ 1\n0\ngSΦ(y)dy\nCN = E [fN (x + )] = ∫ 1\n0\ngNΦ (y)dy\nC := E [f(x + )] = ∫ 1\n0\ngΦ(y)dy = ∫ 1 0 gSΦ(y) + g N Φ (y)dy = C S + CN\n(78)\nWe may then write the minimization in Equation 17, fixing CN and CS as constants separately2:\nE 1 [g(R+ 1)] ≥ min gSΦ,g\nN Φ ∈[0,1]→[0,1]∫ 1\n0 gSΦ(y)dy=C S∫ 1 0 gNΦ (y)dy=C N\n∫ 1 0 gΦ(y)e Φ−1(y)− R2 2σ2 dy\n= min gSΦ,g\nN Φ ∈[0,1]→[0,1]∫ 1\n0 gSΦ(y)dy=C S∫ 1 0 gNΦ (y)dy=C N\n∫ 1 0 (gSΦ(y) + g N Φ (y))e Φ−1(y)− R2 2σ2 dy\n= min gSΦ∈[0,1]→[0,1]∫ 1 0 gSΦ(y)dy=C S\n∫ 1 0 gSΦ(y)e Φ−1(y)− R2 2σ2 dy\n+ min gNΦ ∈[0,1]→[0,1]∫ 1 0 gNΦ (y)dy=C N\n∫ 1 0 gNΦ (y)e Φ−1(y)− R2 2σ2 dy\n(79)\nThe second minimum can be computed as in the proof of Theorem 1, it is simply Φ ( Φ−1(CN )− Rσ ) . For the first minimum, we consider the additional constraint, that gSΦ(y) = gSΦ(1− y). Then we can rewrite the integral as:∫ 1 0 gSΦ(y)e Φ−1(y)− R2 2σ2 dy\n=\n∫ 1 2\n0\ngSΦ(y)e Φ−1(y)− R2 2σ2 dy + ∫ 1 1 2 gSΦ(1− y)e Φ−1(y)− R2 2σ2 dy\n=\n∫ 1 2\n0\ngSΦ(y)e Φ−1(y)− R2 2σ2 dy + ∫ 0 1 2 gSΦ(y ′)eΦ −1(1−y′)− R2 2σ2 (−1)dy′\n=\n∫ 1 2\n0\ngSΦ(y)e Φ−1(y)− R2 2σ2 dy +\n∫ 1 2\n0\ngSΦ(y ′)e−Φ\n−1(y′)− R2 2σ2 dy′\n= e− R2 2σ2\n∫ 1 2\n0\ngSΦ(y) [ eΦ −1(y) + e−Φ −1(y) ] dy\n= 2e− R2 2σ2\n∫ 1 2\n0\ngSΦ(y) cosh(Φ −1(y))dy\n(80)\nSo the minimization becomes:\nmin gSΦ∈[0, 12 ]→[0,1]∫ 1 2\n0 g S Φ(y)dy= 1 2C S\n2e− R2 2σ2\n∫ 1 2\n0\ngSΦ(y) cosh(Φ −1(y))dy (81)\nNote that cosh(Φ−1(y)) is a monotonically decreasing function of y on the range [0, 12 ]. Then the minimum is achieved by the function:\ngS∗Φ (y) =\n{ 0 if y < 1−C S\n2\n1 if 1−C S\n2 ≤ y ≤ 1 2\n(82)\nWhere the value in the domain [ 12 , 1] can be computed using g S∗ Φ (1 − y) = gS∗Φ (y) In terms of the function gS(z), this is:\ngS∗(z) =\n{ 1 if |z| ≤ σΦ−1( 1+C S\n2 )\n0 if |z| > σΦ−1( 1+C S 2 ) (83)\n2Note that we are not considering all applicable constraints here: in particular we are not restricting the range of gΦ(z) to [0, 1] explicitly. However, the lower bound presented here must be at least as low as the lower bound with this additional constraint, so the inequality is still valid. Also, this constraint does in fact hold in the final construction.\nWe can now evaluate the integral, again using the form of the integral given in Equation 13: E 1 [gS(R+ 1)] ≥ ∫ ∞ −∞ gS∗( 1)σ −1Φ′ ( 1 σ − R σ ) d 1\n= ∫ σΦ−1( 1+CS2 ) −σΦ−1( 1+CS2 ) σ−1Φ′ ( 1 σ − R σ ) d 1\n= Φ ( Φ−1( 1 + CS\n2 )− R σ\n) − Φ ( −Φ−1(1 + C S\n2 )− R σ ) = Φ ( Φ−1( 1 + CS\n2 )− R σ\n) − Φ ( Φ−1( 1− CS\n2 )− R σ\n) (84)\nSo, combining the gS and gN terms, we have: E 1 [g(R+ 1)] ≥ Φ (\nΦ−1(CN )− R σ ) + Φ ( Φ−1( 1 + CS\n2 )− R σ ) − Φ ( Φ−1( 1− CS\n2 )− R σ\n) (85)\nFrom Equation 12, and noting that the last two terms together are monotonically decreasing3 with R, we complete the proof." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "Here, we present experiments at a wider range of parameters. For all figures, images misclassified or not certified for both the baseline and the tested method are not counted: the total test set size is 1000 for MNIST, 500 for CIFAR and ImageNet with N = 105, and 100 for CIFAR and ImageNet with N = 106. For all experiments, N0 = 100. Also, note that we test independently for the baseline and higher-order methods (i.e., we use different smoothing samples). 
This is necessary to compare fairly to dipole smoothing, where the sampling method is different; however, it does lead to some noise, especially at N = 105.\nB.1 NOISE LEVEL σ.\nWe see (Figures 7 and 8) that at a smaller level of noise (σ = 0.12), the effect of higher-order smoothing is diminished. This can be understood in terms of the curves in Figure 1-b: lower noise leads to more inputs with higher pa, which reduces the benefit of the higher-order certificate. Conversely, higher noise increases the effects of the higher-order certificates, although it also leads to decreased total accuracy. The dipole certificate underperforms at N = 105, σ = 0.5: this is likely due to the increase in estimation error, which becomes significant near pa = 0.5.\nB.2 DIMENSIONALITY d\nTo test second-order smoothing on a lower-dimensional dataset, we performed PCA on the 7 × 7 MNIST images, and classified using the top 10 principal components. (d = 10). Results are shown in Figures 9, 10, and 11. We see that, at N = 106, second-order smoothing has a marginal positive impact at this smaller scale." }, { "heading": "B.3 DIPOLE SMOOTHING ON CIFAR-10", "text": "In Figure 12, we see experiments on CIFAR-10 using dipole smoothing, for a range of σ ∈ {0.12, 0.25, 0.50, 1.00} and N ∈ {105, 106}. Note that dipole smoothing appears to be beneficial even at N = 105 on CIFAR-10, at all noise levels ≥ 0.25.\n3To see this, note the form of the integral of gS equal to these terms given in Equation 80" }, { "heading": "B.4 DIPOLE SMOOTHING ON IMAGENET", "text": "In Figure 13, we see experiments on ImageNet using dipole smoothing, for a range of σ ∈ {0.25, 0.50, 1.00} and N ∈ {105, 106}. There is an anomalous result for σ = 0.50, N = 105, in that this is the only case where dipole smoothing appears to perform worse than standard smoothing. However, this turns out to be a computational artifact. At both σ = 0.50 and σ = 0.25, there are a large number of images where every smoothing sample is correctly classified, so pa is as close to 1 as the measurement bounds allow. Note that if pa truly equals 1, the certified radius is infinite, so in this domain, the reported certificate is entirely a function of the estimation error. Because dipole smoothing reduces measurement precision, these samples have somewhat smaller certified radii under dipole smoothing, especially at small N . However, this gap should be exactly proportional to σ. The cause of the anomaly is the fact that our code (adapted from (Cohen et al., 2019)) records each radius to three significant figures. At σ = 0.5, for an image where all noise samples are correctly classified, the ratio of the dipole smoothing radius to the standard smoothing radius is reported as 1.89/1.91 = 98.95%, while for σ = 0.25 it is reported as 0.947/0.953 = 99.37%. This explains the large number of samples with reported > 1% decrease in certificates for σ = 0.50, N = 105." }, { "heading": "C ABSOLUTE CERTIFICATES FOR MAIN-TEXT EXPERIMENTS", "text": "In Figures 14 and 15, we show the absolute, rather than relative, values of the certificates reported in the main text, compared to the baseline first-order randomized smoothing. We see that the benefit of the proposed techniques is greatest for images with small absolute certificates, and that, on CIFAR10 and ImageNet, there is some disadvantage to dipole smoothing on the largest possible certificates, where all smoothing samples are classified correctly. 
This is because, for these images, the certificate depends entirely on estimation error." } ]
2020
null
SP:4590c3a3d2a389f0d09fb308793c06855ac02fea
[ "The authors propose interpreting the decision of a black-box (BB) image classifier using diverse counterfactual explanations. The proposed model consists of a pre-trained β-TCVAE, which learns to extract a disentangled latent representation for the input image. To generate explanations for a given image, the model optimizes to find n latent perturbations. Each decoded output from β-TCVAE is similar to the original image and produces a desired outcome from the BB classifier. To ensure the diversity among the n latent perturbations, the model minimizes the pairwise similarity loss between the latent perturbations. The model further performs spectral clustering to partition the latent space into different attributes. Thus, at inference time, for the same input image, multiple counterfactual images can be generated as explanations by changing different dimensions of the latent space. The experiments demonstrate the realistic quality of the explanations and their ability to discover bias in the BB classifier. " ]
Explainability of machine learning models has gained considerable attention within our research community given the importance of deploying more reliable machine-learning systems. Explainability can also be helpful for model debugging. In computer vision applications, most methods explain models by displaying the regions in the input image that they focus on for their prediction, but it is difficult to improve models based on these explanations since they do not indicate why the model fails. Counterfactual methods, on the other hand, indicate how to perturb the input to change the model prediction, providing details about the model’s decision-making. Unfortunately, current counterfactual methods make ambiguous interpretations as they combine multiple biases of the model and the data in a single counterfactual interpretation of the model’s decision. Moreover, these methods tend to generate trivial counterfactuals about the model’s decision, as they often suggest exaggerating or removing the presence of the attribute being classified. Trivial counterfactuals are usually not valuable, since the information they provide is often already known to the system’s designer. In this work, we propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss to uncover multiple valuable explanations about the model’s prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods. We will make the code public.
[]
[ { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Q. Cao", "L. Shen", "W. Xie", "O.M. Parkhi", "A. Zisserman" ], "title": "Vggface2: A dataset for recognising faces across pose and age", "venue": "In International Conference on Automatic Face and Gesture Recognition,", "year": 2018 }, { "authors": [ "Chun-Hao Chang", "Elliot Creager", "Anna Goldenberg", "David Duvenaud" ], "title": "Explaining image classifiers by counterfactual generation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ricky TQ Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Piotr Dabkowski", "Yarin Gal" ], "title": "Real time image saliency for black box classifiers", "venue": "arXiv preprint arXiv:1705.07857,", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Emily Denton", "Ben Hutchinson", "Margaret Mitchell", "Timnit Gebru" ], "title": "Detecting bias with generative counterfactual face attribute augmentation", "venue": "arXiv preprint arXiv:1906.06439,", "year": 2019 }, { "authors": [ "Amit Dhurandhar", "Pin-Yu Chen", "Ronny Luss", "Chun-Chen Tu", "Paishun Ting", "Karthikeyan Shanmugam", "Payel Das" ], "title": "Explanations based on the missing: Towards contrastive explanations with pertinent negatives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ruth C Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Hao Fu", "Chunyuan Li", "Xiaodong Liu", "Jianfeng Gao", "Asli Celikyilmaz", "Lawrence Carin" ], "title": "Cyclical annealing schedule: A simple approach to mitigating kl vanishing", "venue": null, "year": 1903 }, { "authors": [ "Yarin Gal", "Jiri Hron", "Alex Kendall" ], "title": "Concrete dropout. 
In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Yash Goyal", "Ziyan Wu", "Jan Ernst", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Counterfactual visual explanations", "venue": "arXiv preprint arXiv:1904.07451,", "year": 2019 }, { "authors": [ "Riccardo Guidotti", "Anna Monreale", "Fosca Giannotti", "Dino Pedreschi", "Salvatore Ruggieri", "Franco Turini" ], "title": "Factual and counterfactual explanations for black box decision making", "venue": "IEEE Intelligent Systems,", "year": 2019 }, { "authors": [ "Riccardo Guidotti", "Anna Monreale", "Stan Matwin", "Dino Pedreschi" ], "title": "Black box explanation by learning image exemplars in the latent feature space", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition,", "year": 2021 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xianxu Hou", "Linlin Shen", "Ke Sun", "Guoping Qiu" ], "title": "Deep feature consistent variational autoencoder", "venue": "In Winter Conference on Applications of Computer Vision,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Computer Vision and Pattern recognition,", "year": 2017 }, { "authors": [ "Farhad Imani", "Ruimin Chen", "Evan Diewald", "Edward Reutzel", "Hui Yang" ], "title": "Deep learning of variant geometry in layerwise imaging profiles for additive manufacturing quality control", "venue": "Journal of Manufacturing Science and Engineering,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Simon Jégou", "Michal Drozdzal", "David Vazquez", "Adriana Romero", "Yoshua Bengio" ], "title": "The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation", "venue": "In Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Finn V Jensen" ], "title": "An introduction to Bayesian networks, volume 210", "venue": "UCL press London,", "year": 1996 }, { "authors": [ "Shalmali Joshi", "Oluwasanmi Koyejo", "Been Kim", "Joydeep Ghosh" ], "title": "xgems: Generating examplars to explain black-box models", "venue": "arXiv preprint arXiv:1806.08867,", "year": 2018 }, { "authors": [ "Amir-Hossein Karimi", "Gilles Barthe", "Borja Balle", "Isabel Valera" ], "title": "Model-agnostic counterfactual explanations for consequential decisions", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Ilyes Khemakhem", "Diederik Kingma", 
"Ricardo Monti", "Aapo Hyvarinen" ], "title": "Variational autoencoders and nonlinear ica: A unifying framework", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alexandre Lacoste", "Pau Rodrı́guez López", "Frédéric Branchaud-Charron", "Parmida Atighehchian", "Massimo Caccia", "Issam Hadj Laradji", "Alexandre Drouin", "Matthew Craddock", "Laurent Charlin", "David Vázquez" ], "title": "Synbols: Probing learning algorithms with synthetic datasets", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Ben Poole", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem", "Michael Tschannen" ], "title": "Weakly-supervised disentanglement without compromises", "venue": "arXiv preprint arXiv:2002.02886,", "year": 2020 }, { "authors": [ "James Lucas", "George Tucker", "Roger Grosse", "Mohammad Norouzi" ], "title": "Understanding posterior collapse in generative latent variable models. 
2019", "venue": null, "year": 2021 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ramaravind K Mothilal", "Amit Sharma", "Chenhao Tan" ], "title": "Explaining machine learning classifiers through diverse counterfactual explanations", "venue": "In Conference on Fairness, Accountability, and Transparency,", "year": 2020 }, { "authors": [ "John Ashworth Nelder", "Robert WM Wedderburn" ], "title": "Generalized linear models", "venue": "Journal of the Royal Statistical Society: Series A (General),", "year": 1972 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel" ], "title": "Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning", "venue": "arXiv preprint arXiv:1803.04765,", "year": 2018 }, { "authors": [ "Martin Pawelczyk", "Klaus Broelemann", "Gjergji Kasneci" ], "title": "Learning model-agnostic counterfactual explanations for tabular data", "venue": "In Proceedings of The Web Conference", "year": 2020 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "arXiv preprint arXiv:1709.07871,", "year": 2017 }, { "authors": [ "Rafael Poyiadzi", "Kacper Sokol", "Raul Santos-Rodriguez", "Tijl De Bie", "Peter Flach" ], "title": "Face: feasible and actionable counterfactual explanations", "venue": "In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2020 }, { "authors": [ "J.R. 
Quinlan" ], "title": "Induction of decision trees", "venue": "Machine Learning,", "year": 1986 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should i trust you?” explaining the predictions of any classifier", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "German Ros", "Sebastian Ramos", "Manuel Granados", "Amir Bakhtiary", "David Vazquez", "Antonio M Lopez" ], "title": "Vision-based offline-online perception paradigm for autonomous driving", "venue": "In Winter Conference on Applications of Computer Vision,", "year": 2015 }, { "authors": [ "Chris Russell" ], "title": "Efficient search for diverse coherent explanations", "venue": "In Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradientbased localization", "venue": "In International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Avanti Shrikumar", "Peyton Greenside", "Anshul Kundaje" ], "title": "Learning important features through propagating activation differences", "venue": "arXiv preprint arXiv:1704.02685,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Sumedha Singla", "Brian Pollack", "Junxiang Chen", "Kayhan Batmanghelich" ], "title": "Explanation by progressive exaggeration", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "arXiv preprint arXiv:1412.6806,", "year": 2014 }, { "authors": [ "X Yu Stella", "Jianbo Shi" ], "title": "Multiclass spectral clustering", "venue": "In null,", "year": 2003 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2021 }, { "authors": [ "Arnaud Van Looveren", "Janis Klaise" ], "title": "Interpretable counterfactual explanations guided by prototypes", "venue": "arXiv preprint arXiv:1907.02584,", "year": 2019 }, { "authors": [ "David Vázquez", "Jorge Bernal", "F Javier Sánchez", "Gloria Fernández-Esparrach", "Antonio M López", "Adriana Romero", "Michal Drozdzal", "Aaron Courville" ], "title": "A benchmark for endoluminal scene segmentation of colonoscopy", "venue": "images. Journal of healthcare engineering,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Object detectors emerge in deep scene cnns", "venue": "arXiv preprint arXiv:1412.6856,", "year": 2014 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Computer Vision and Pattern Recognition,", "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Consider a face authentication system for unlocking a device. In case of non-authentications (possible false-negative predictions), this system could provide generic advices to its user such as “face the camera” or “remove any face occlusions”. However, these may not explain the reason for the possible malfunction. To provide more insights regarding its decisions, the system could instead provide information specific to the captured image (its input data). It might list the input features that most contributed to its decision (e.g., a region of the input image), but this feature could be “face”, which is trivial and does not suggest an alternative action to its user. Further, it provides little useful information about the model. Instead, non-trivial explanations may be key for better understanding and diagnosing the system— including the data it was trained on— and improving its reliability. Such explanations might improve systems across a wide variety of domains including in medical imaging [58], automated driving systems [48], and quality control in manufacturing [22].\nThe explainability literature aims to understand the decisions made by a machine learning (ML) model such as the aformentionned face authentication system. Counterfactual explanation methods [11, 13, 4] can help discover the limitations of a ML model by uncovering data and model biases. The counterfactual explanation methods provide perturbed versions of the input data that emphasize features that contributed most to the ML model’s output. For example, if an authentication system is not recognizing a user wearing sunglasses then the system could generate an alternative image of the user’s face without sunglasses that would be correctly recognized. This is different from other types of explainability methods such as feature importance methods [50, 51, 4] and boundary approximation methods [47, 37]. The former highlight salient regions of the input but do not indicate how the ML model could achieve a different prediction. The second family of methods produce\nexplanations that are limited to linear approximations of the ML model. Unfortunately, these linear approximations are often inaccurate. In contrast, counterfactual methods suggest changes in the input that would lead to a change in the corresponding output, providing information not only about where the change should be but also what the change should be.\nCounterfactual explanations should be actionable, i.e., a user should be able to act on it. An actionable explanation would suggest feasible changes like removing sunglasses instead of unrealistic ones like adding more eyes to the user’s face. Counterfactual explanations that are valid, proximal, and sparse are more likely to be actionable [49, 38]. That is, a counterfactual explanation that changes the outcome of the ML model (valid) by changing the minimal number of input features (sparse), while remaining close to the input (proximal). Generating a set of diverse explanations increases the likelihood of finding an actionable explanation [49, 38]. A set of counterfactuals is diverse if each one proposes to change a different set of attributes. Intuitively, each of these explanations shed light on a different action that user can take to change the ML model’s outcome.\nCurrent counterfactual generation methods like xGEM [26] generates a single explanation that is far from the input. Thus, it fails to be proximal, sparse, and diverse. 
Progressive Exaggeration (PE) [53] provides higher-quality explanations that are more proximal than xGEM’s, but it still fails to provide a diverse set of explanations. In addition, the image generator of PE is trained on the same data as the ML model in order to detect biases, thereby limiting its applicability. Moreover, like the previous methods in the literature, these two methods tend to produce trivial explanations. For instance, an explanation that suggests increasing the ‘smile’ attribute of a ‘smile’ classifier for an already-smiling face is trivial, and it does not explain why a misclassification occurred. In this work, we focus on diverse valuable explanations (DiVE), that is, explanations that are valid, proximal, sparse, and non-trivial.

We propose Diverse Valuable Explanations (DiVE), an explainability method that can interpret a ML model by identifying the sets of valuable attributes that have the most effect on the ML model’s output. DiVE produces multiple counterfactual explanations which are enforced to be valuable and diverse, resulting in more actionable explanations than those of the previous literature. Our method first learns a generative model of the data using a β-TCVAE [5] to obtain a disentangled latent representation, which leads to more proximal and sparse explanations. In addition, the VAE is not required to be trained on the same dataset as the ML model to be explained. DiVE then learns a latent perturbation using constraints to enforce diversity, sparsity, and proximity. In order to generate non-trivial explanations, DiVE leverages the Fisher information matrix of its latent space to focus its search on the less influential factors of variation of the ML model. This mechanism enables the discovery of spurious correlations learned by the ML model.

We provide experiments to assess whether our explanations are more valuable and diverse than the current state of the art. First, we assess their validity on the CelebA dataset [33] and provide quantitative and qualitative results on a bias detection benchmark [53]. Second, we show that the generated explanations are more proximal in terms of Fréchet Inception Distance (FID) [19], which is a measure of similarity between two datasets of images commonly used to evaluate the generation quality of GANs. In addition, we evaluate the latent space closeness and face verification accuracy, as reported by Singla et al. [53]. Third, we assess the sparsity of the generated counterfactuals by computing the average change in facial attributes. Fourth, we show that DiVE is more successful at finding non-trivial explanations than previous methods and baselines. In the supplementary material we provide additional results on the out-of-distribution performance of DiVE.

We summarize the contributions of this work as follows: 1) We propose DiVE, an explainability method that can interpret a ML model by identifying the attributes that have the most effect on its output. 2) DiVE achieves state-of-the-art results in terms of the validity, proximity, and sparsity of its explanations, detects biases in the datasets, and produces multiple explanations for an image. 3) We identify the importance of finding non-trivial explanations and propose a new benchmark to evaluate how valuable the explanations are. 4) We propose to leverage the Fisher information matrix of the latent space to find the spurious features that produce non-trivial explanations."
}, { "heading": "2 RELATED WORK", "text": "Explainable artificial intelligence (XAI) is a suite of techniques developed to make either the construction or interpretation of model decisions more accessible and meaningful. Broadly speaking, there are two branches of work in XAI, ad-hoc and post-hoc. Ad-hoc methods focus on making mod-\nels interpretable, by imbuing model components or parameters with interpretations that are rooted in the data themselves [45, 39, 25]. Unfortunately, most successful machine learning methods, including deep learning ones, are uninterpretable [6, 32, 18, 24].\nPost-hoc methods aim to explain the decisions of non interpretable models. These methods can be categorized as non-generative and generative. Non-generative methods use information from a ML model to identify the features most responsible for an outcome for a given input. Approaches like [47, 37, 41] interpret ML model decisions by using derived information to fit a locally interpretable model. Others use the gradient of the ML model parameters to perform feature attribution [59, 60, 52, 54, 50, 1, 51], sometimes by employing a reference distribution for the features [51, 11]. This has the advantage of identifying alternative feature values that when substituted for the observed values would result in a different mode outcome. These methods are limited to small contiguous regions of features with high influence on the target model outcome. In so doing, they can struggle to provide plausible changes of the input that are actionable by an user in order to correct a certain output or bias of the model. Generative methods such as [7, 5, 4] propose plausible modifications of the input that change the model decision. However the generated perturbations are usually found in pixel space and thus are bound to masking small regions of the image without necessarily having a semantic meaning. Closest to our work are generative counterfactual explanation methods [26, 9, 15, 53] which synthesize perturbed versions of observed data that result in a corresponding change of the model prediction. While these methods provide valid and proximal explanations for a model outcome, they fail to provide a diverse set of non-trivial explanations. Mothilal et al. [38] addressed the diversity problem by introducing a diversity constraint between a set of randomly initialized counterfactuals (DICE). However, DICE shares the same problems as [7, 4] since perturbations are directly performed on the observed feature space, and does not take into account trivial explanations.\nIn this work we propose DiVE, a counterfactual explanation method that generates a diverse set of valid, proximal, sparse, and non-trivial explanations. Appendix A provides a more exhaustive review of the related work." }, { "heading": "3 PROPOSED METHOD", "text": "We propose DiVE, an explainability method that can interpret a ML model by identifying the latent attributes that have the most effect on its output. Summarized in Figure 1, DiVE uses an encoder, a decoder, and a fixed weight ML model. The ML model could be any function for which we have access to its gradients. In this work, we focus on a binary image classifier in order to produce visual explanations. DiVE consists of two main steps. First, the encoder and the decoder are trained in an unsupervised manner to approximate the data distribution on which the ML model was trained. Unlike PE [53], our encoder-decoder model does not need to train on the same dataset that the ML model was trained on. 
Second, we optimize a set of perturbation vectors $\{\epsilon_i\}$ to perturb the latent representation z generated by the trained encoder. The details of the optimization procedure are provided in Algorithm 1 in the Appendix. We use the following 3 main losses for this optimization: a counterfactual loss $\mathcal{L}_{CF}$ that attempts to fool the ML model, a proximity loss $\mathcal{L}_{prox}$ that constrains the explanations with respect to the number of changing attributes, and a diversity loss $\mathcal{L}_{div}$ that enforces the explainer to generate diverse explanations with only one confounding factor for each of them. Finally, we propose several strategies to mask subsets of dimensions in the latent space to prevent the explainer from producing trivial explanations. Next we explain the methodology in more detail." }, { "heading": "3.1 OBTAINING MEANINGFUL REPRESENTATIONS.", "text": "Given a data sample $x \in \mathcal{X}$, its corresponding target $y \in \{0, 1\}$, and a potentially biased ML model $f(x)$ that approximates $p(y|x)$, our method finds a perturbed version of the same input $\tilde{x}$ that produces a desired probabilistic outcome $\tilde{y} \in [0, 1]$, so that $f(\tilde{x}) = \tilde{y}$. In order to produce semantically meaningful counterfactual explanations, perturbations are performed on a latent representation $z \in \mathcal{Z} \subseteq \mathbb{R}^d$ of the input x. Ideally, each dimension in $\mathcal{Z}$ represents a different semantic concept of the data, i.e., the different dimensions are disentangled.

For training the encoder-decoder architecture we use β-TCVAE [5], since it has been shown to obtain competitive disentanglement performance [34]. It follows the same encoder-decoder structure as the VAE [30], i.e., the input data is first encoded by a neural network $q_\phi(z|x)$ parameterized by φ. Then, the input data is recovered by a decoder neural network $p_\theta(x|z)$, parameterized by θ. Using a prior $p(z)$ and a uniform distribution over the indexes of the dataset $p(i)$, the original VAE loss is:

$$\mathcal{L}_{VAE} = \mathbb{E}_{p(i)}\mathbb{E}_{q_\phi(z|x_i)}\left[\log p_\theta(x_i|z)\right] - \mathbb{E}_{p(i)}\, D_{KL}\left(q_\phi(z|x_i) \,\|\, p(z)\right), \quad (1)$$

where the first term is the reconstruction loss and the second is the average divergence from the prior. The core difference of β-TCVAE is the decomposition of this average divergence:

$$\mathbb{E}_{p(i)}\, D_{KL}\left(q_\phi(z|x_i) \,\|\, p(z)\right) \rightarrow D_{KL}\left(q_\phi(z, x_i) \,\|\, q_\phi(z)\, p_\theta(x_i)\right) + \sum_j D_{KL}\left(q_\phi(z_j) \,\|\, p(z_j)\right) + \beta \cdot D_{KL}\Big(q_\phi(z) \,\|\, \prod\nolimits_j q_\phi(z_j)\Big), \quad (2)$$

where the arrow represents a modification of the left-hand term and equality is obtained when β = 1. The third term on the right-hand side is called the total correlation and measures the shared information between all the empirical marginals $q_\phi(z_j) = \mathbb{E}_{p(i)}\, q_\phi(z_j|x_i)$. By using β > 1, this part is amplified, which encourages further decorrelation between the latent variables and leads to better disentanglement.

In addition to β-TCVAE, we use the perceptual reconstruction loss from Hou et al. [20]. This replaces the pixel-wise reconstruction loss in Equation 1 by a perceptual reconstruction loss based on the hidden representation of a pre-trained neural network R. Specifically, we learn a decoder $D_\theta$ generating an image, i.e., $\tilde{x} = D_\theta(z)$; this image is re-encoded into a hidden representation $h = R(\tilde{x})$ and compared to the original image in the same space using a normal distribution. The reconstruction loss of Equation 1 now becomes:

$$\mathbb{E}_{p(i)}\mathbb{E}_{q_\phi(z|x_i)}\left[\log \mathcal{N}\left(R(x_i);\, R(D_\theta(z)),\, I\right)\right]. \quad (3)$$

Once trained, the weights of the encoder-decoder are fixed for the rest of the steps of our algorithm."
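To make Eq. 3 concrete, the following is a minimal PyTorch sketch of the perceptual reconstruction term, not the authors' released code; `feature_net` stands in for the pre-trained network R and is a hypothetical name.

```python
import torch

def perceptual_reconstruction_loss(x, x_rec, feature_net):
    """Reconstruction term of Eq. 3, up to additive constants.

    With an isotropic normal N(R(x); R(D_theta(z)), I), the log-density
    reduces to a squared error between hidden representations.
    """
    with torch.no_grad():
        h_target = feature_net(x)   # R(x_i): fixed target features
    h_rec = feature_net(x_rec)      # R(D_theta(z)): gradients flow to the decoder
    # Sum over feature dimensions, average over the batch.
    return 0.5 * ((h_rec - h_target) ** 2).flatten(1).sum(1).mean()
```

Note that R stays frozen throughout: gradients reach the decoder only through the reconstructed image.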
}, { "heading": "3.2 INTERPRETING THE ML MODEL", "text": "In order to find weaknesses in the ML model, DiVE searches for a collection of n latent perturbation { i}ni=1 such that the decoded output x̃i = Dθ(z+ i) yields a specific response from the ML model, i.e., f(x̃) = ỹ for any chosen ỹ ∈ [0, 1]. We optimize i’s by minimizing:\nLDiVE(x, ỹ, { i}ni=1) = ∑ i LCF(x, ỹ, i) + λ · ∑ i Lprox(x, i) + α · Ldiv({ i}ni=1), (4)\nwhere λ, and α determine the relative importance of the losses. The minimization is performed with gradient descent and the complete algorithm can be found in Algorithm 1 in Appendix D. We now describe the different loss terms.\nCounterfactual loss. The goal of this loss function is to identify a change of latent attributes that will cause the ML model f to change it’s prediction. For example, in face recognition, if the classifier\ndetects that there is a smile present whenever the hair is brown, then this loss function is likely to change the hair color attribute. This is achieved by sampling from the decoder x̃ = Dθ(z + ), and optimizing the binary cross-entropy between the target ỹ and the prediction f(x̃):\nLCF(x, ỹ, ) = ỹ · log(f(x̃)) + (1− ỹ) · log(1− f(x̃)). (5)\nProximity loss. The goal of this loss function is to constrain the reconstruction produced by the decoder to be similar in appearance and attributes as the input. It consists of the following two terms,\nLprox(x, ) = ||x− x̃||1 + γ · || ||1, (6) where γ is a scalar weighting the relative importance of the two terms. The first term ensures that the explanations can be related to the input by constraining the input and the output to be similar. The second term aims to identify a sparse perturbation to the latent space Z that confounds the ML model. This constrains the explainer to identify the least amount of attributes that affect the classifier’s decision in order to produce sparse explanations. Diversity loss. This loss prevents the multiple explanations of the model from being identical. For instance, if gender and hair color are spuriously correlated with smile, the model should provide images either with different gender or different hair color. To achieve this, we jointly optimize for a collection of n perturbations { i}ni=1 and minimize their pairwise similarity:\nLdiv({ i}ni=1) = √√√√∑ i6=j ( Ti ‖ i‖2 j ‖ j‖2 )2 . (7)\nThe method resulting of optimizing Eq. 4 (DiVE) results in diverse counterfactuals that are more valid, proximal, and sparse. However, it may still produce trivial explanations, such as exaggerating a smile to explain a smile classifier without considering other valuable biases in the ML model such as hair color. While the diversity loss encourages the orthogonality of the explanations, there might still be several latent variables required to represent all variations of smile. Beyond trivial counterfactual explanations. To find non-trivial explanations, we propose to prevent DiVE from perturbing the most influential latent factors of Z on the ML model. We estimate the influence of each of the latent factors with the average Fisher information matrix:\nF = Ep(i)Eqφ(z|xi)Ep(y|z)∇z ln p(y|z) ∇z ln p(y|z) T , (8)\nwhere p(y = 1|z) = f(Dθ(z)), and p(y = 0|z) = 1− f(Dθ(z)). The diagonal values of F express the relative influence of each of the latent dimensions on the classifier output. Since the most influential dimensions are likely to be related to the main attribute used by the classifier, we propose to prevent Eq. 4 from perturbing them in order to find more surprising explanations. 
Thus, when producing n explanations, we sort the dimensions of $\mathcal{Z}$ by the magnitude of the diagonal and partition them into n contiguous chunks, one to be optimized for each of the explanations. We call this method DiVEFisher.

However, DiVEFisher does not guarantee that, within the resulting partitions of $\mathcal{Z}$, all the factors concerning a trivial attribute are grouped together. Thus, we propose to partition $\mathcal{Z}$ into subsets of latent factors that interact with each other when changing the predictions of the ML model. Such interaction can be estimated using F as an affinity measure. We use spectral clustering [55] to obtain a partition of $\mathcal{Z}$. This partition is represented as a collection of masks $\{m_i\}_{i=1}^n$, where $m_i \in \{0, 1\}^d$ indicates which dimensions of $\mathcal{Z}$ are part of cluster i. Finally, these masks are used in Equation 4 to bound each $\epsilon_i$ to its subspace, i.e., $\epsilon'_i = \epsilon_i \circ m_i$, where ◦ represents element-wise multiplication. Since these masks are orthogonal, this effectively replaces $\mathcal{L}_{div}$. In Section 4, we highlight the benefits of this clustering approach by comparing it to other baselines. We call this method DiVEFisherSpectral." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we evaluate the described methods on 5 different aspects: (1) the validity of the generated explanations, as well as the ability to discover biases within the ML model and the data (Section 4.1); (2) their proximity in terms of FID, latent space closeness, and face verification accuracy (Section 4.2); (3) the sparsity of the generated counterfactuals (Section 4.3); (4) the ability to identify diverse non-trivial explanations for image misclassifications made by the ML model (Section 4.4); and (5) the out-of-distribution performance of DiVE (Section 4.4).

Experimental Setup. Following common practice [26, 9, 53], we perform experiments on the CelebA dataset [33]. CelebA is a large-scale dataset containing more than 200K celebrity facial images. Each image is annotated with 40 binary attributes such as “Smiling”, “Male”, and “Eyeglasses”. These attributes allow us to evaluate counterfactual explanations by determining whether they highlight spurious correlations between multiple attributes such as “lipstick” and “smile”. In this setup, explainability methods are trained on the training set and ML models are explained on the validation set. The hyperparameters of the explainer are chosen by cross-validation on the training set. We use the same train and validation splits as PE [53]. Explainers do not have access to the labeled attributes during training.

We test the out-of-distribution (OOD) performance of DiVE with the Synbols dataset [31]. Synbols is an image generator using characters from the Unicode standard and the wide range of artistic fonts provided by the open font community. This gives us better control over the features present in each set than CelebA does. We generate 100K black-and-white 32×32 images of 48 characters from the latin alphabet and more than 1K fonts. We use the character type to create disjoint sets for OOD training and we use the fonts to introduce biases in the data. We provide a sample of the dataset in Figure 8 in Appendix I.

We compare several versions of our method to three existing methods. DiVE results from optimizing Eq. 4. DiVEFisher extends DiVE by using the Fisher information matrix introduced in Eq. 8. DiVEFisherSpectral extends DiVEFisher with spectral clustering. We also introduce two additional ablations of our method, DiVE-- and DiVERandom. 
DiVE-- is equivalent to DiVE but uses a pixel-based reconstruction loss instead of the perceptual loss. DiVERandom uses random masks instead of the Fisher information. Finally, we compare our baselines with xGEM as described in Joshi et al. [26]; xGEM+, which is the same as xGEM but uses the same auto-encoding architecture as DiVE; and PE as described by Singla et al. [53]. For our methods, we provide implementation details, the architecture description, and the algorithm in Appendix D." }, { "heading": "4.1 VALIDITY AND BIAS DETECTION", "text": "We evaluate DiVE’s ability to detect biases in the data. We follow the same procedure as PE [53] and train two binary classifiers for the attribute “Smiling”. The first one is trained on a biased version of CelebA where all the male celebrities are smiling and all the female celebrities are not smiling (fbiased). The second one is trained on the unbiased version of the data (funbiased). Both classifiers are evaluated on the CelebA validation set. Also following Singla et al. [53], we train an oracle classifier (foracle) based on VGGFace2 [3], which obtains perfect accuracy on the gender attribute. The hypothesis is that if “Smiling” and gender are confounded by the classifier, so should be the explanations. Therefore, we can identify biases when the generated examples change not only the target attribute but also the confounded one. To generate the counterfactuals, DiVE produces perturbations until it changes the original prediction of the classifier (e.g., “Smiling” to “Non-Smiling”).

We follow the procedure introduced in [26, 53] and report a confounding metric for bias detection in Table 1. The columns Smiling and Non-Smiling indicate the target class for counterfactual generation. The rows Male and Female contain the proportion of counterfactuals that are classified by the oracle as Male and Female. We can see that the generated explanations for fbiased are classified more often as Male when the target attribute is Smiling and as Female when the target attribute is Non-Smiling. The confounding metric, denoted as overall, is the fraction of generated explanations for which the gender was changed with respect to the original image. It thus reflects the magnitude of the bias as approximated by the explainers.

Singla et al. [53] consider that a model is better than another if the confounding metric is the highest on fbiased and the lowest on funbiased. However, they assume that fbiased always predicts the gender based on the smile. Instead, we propose to evaluate the confounding metric by comparing it to the empirical bias of the model, denoted as ground truth in Table 1. Details are provided in Appendix J.

We observe that DiVE is more successful than PE at detecting biases, although the generative model of DiVE was not trained with the biased data. While xGEM+ has a higher success rate at detecting biases in some cases, it produces lower-quality images that are far from the input. In Figure 5 in Appendix B, we provide samples generated by our method with the two classifiers and compare them to PE and xGEM+. We find that gender changes along with the “Smiling” attribute for fbiased, while for funbiased it stays the same. In addition, we also observe that for fbiased the correlation between “Smile” and “Gender” is higher than for PE. It can also be observed that xGEM+ fails to retain the identity of the person in x when compared to PE and our method."
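For concreteness, a small sketch of the overall confounding metric described above, assuming a hypothetical `oracle_gender` function that maps a batch of images to hard gender predictions:

```python
import torch

def confounding_metric(images, counterfactuals, oracle_gender):
    """Fraction of counterfactuals whose oracle-predicted gender differs
    from that of the corresponding original image (the 'overall' row of
    Table 1). Both inputs are batches of images of equal length."""
    g_orig = oracle_gender(images)             # hard predictions in {0, 1}
    g_cf = oracle_gender(counterfactuals)
    return (g_orig != g_cf).float().mean().item()
```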
}, { "heading": "4.2 COUNTERFACTUAL EXPLANATION PROXIMITY", "text": "We evaluate the proximity of the counterfactual explanations using FID scores [19] as described by Singla et al. [53]. The scores are based on the target attributes “Smiling” and “Young”, and are divided into 3 categories: Present, Absent, and Overall. Present considers explanations for which the ML model outputs a probability greater than 0.9 for the target attribute. Absent refers to explanations with a probability lower than 0.1. Overall considers all the successful counterfactuals, which changed the original prediction of the ML model.\nWe report these scores in Table 2 for all 3 categories. DiVE produces the best quality counterfactuals, surpassing PE by 6.3 FID points for the “Smiling” target and 19.6 FID points for the “Young” target in the Overall category. DiVE obtains lower FID than xGEM+ which shows that the improvement not only comes from the superior architecture of our method. Further, there are two other factors that explain the improvement of DiVE’s FID. First, the β-TCVAE decomposition of the KL divergence improves the disentanglement ability of the model while suffering less reconstruction degradation than the VAE. Second, the perceptual loss makes the image quality constructed by DiVE to be comparable with that of the GAN used in PE. In addition, Table 4 in the Appendix shows that DiVE is more successful at preserving the identity of the faces than PE and xGEM and thus at producing feasible explanations. These results suggest that the combination of disentangled latent features and the regularization of the latent features help DiVE to produce the minimal perturbations of the input that produce a successful counterfactual.\nIn Figure 5 in Appendix B we show qualitative results obtained by targeting different probability ranges for the output of the ML model as described in PE. As seen in Figure 5, DiVE produces more natural-looking facial expressions than xGEM+ and PE. Additional results for “Smiling” and “Young” are provided in Figures 3 and 4 in the Appendix B." }, { "heading": "4.3 COUNTERFACTUAL EXPLANATION SPARSITY", "text": "Explanations that produce sparse changes in the attributes of the image are more probable to be actionable. In this section we quantitatively compare the amount of valid and sparse counterfactuals provided by different baselines. Table 3 shows the results for a classifier model trained on the at-\ntribute Young of the CelebA dataset.1 The first row shows the number of attributes that each method change in average to generate a valid counterfactual. Methods that require to change less attributes are likely to be more actionable. We observe that DiVE changes less attributes on average than xGEM+. We also observe that DiVEFisherSpectral is the method that changes less attributes among all the baselines. To better understand the effect of disentangled representations, we also report results for a version of xGEM+ with the β-TCVAE backbone (xGEM++). We do not observe significant effects on the sparsity of the counterfactuals. In fact, a fine-grained decomposition of concepts in the latent space could lead to lower the sparsity." }, { "heading": "4.4 BEYOND TRIVIAL EXPLANATIONS", "text": "Previous works on counterfactual generations tend to produce trivial input perturbations to change the output of the ML model. That is, they tend to increase/decrease the presence of the attribute that the classifier is predicting. 
For instance, in Figure 5 all the explainers put a smile on the input face in order to increase the probability for “smile”. While that is correct, this explanation does not provide much insight about the potential weaknesses of the ML model. Instead, in this work we emphasize producing non-trivial explanations, that are different from the main attribute that the ML model has been trained to identify. These kind of explanations provide more insight about the factors that affect the classifier and thus provide cues on how to improve the model or how to fix incorrect predictions.\nTo evaluate this, we propose a new benchmark that measures a method’s ability to generate valuable explanations. For an explanation to be valuable, it should 1) be misclassified by the ML model (valid), 2) not modify the main attribute being classified (non-trivial), and 3) not have diverged too much from the original sample (proximal). A misclassification provides insights into the weaknesses of the model. However, the counterfactual is even more insightful when it stays close to the original image as it singles-out spurious correlations learned by the ML model. Because it is costly to provide human evaluation of an automatic benchmark, we approximate both the proximity and the real class with the VGGFace2-based oracle. We choose the VGGFace2 model as it is less likely to share the same biases as the ML model, since it was trained for a different task than the ML model with an order of magnitude more data. We conduct a human evaluation experiment in Appendix F, and we find a significant correlation between the oracle and the human predictions. For 1) and 2) we deem that an explanation is successful if the ML model and the oracle make different predictions about the counterfactual. E.g., the top counterfactuals in Figure 1 are not deemed successful explanations because both the ML model and the oracle agree on its class, however the two in the bottom row are successful because only the oracle made the correct prediction. These explanations where generated by DiVEFisherSpectral. As for 3) we measure the proximity with the cosine distance between the sample and the counterfactual in the feature space of the oracle.\nWe test all methods from Section 4 on a subset of the CelebA validation set described in Appendix E. We report the results of the full hyperparameter search (see Appendix E) in Figure 2a. The vertical axis shows the success rate of the explainers, i.e., the ratio of valid explanations that are non-trivial. This is the misclassification rate of the ML model on the explanations. The dots denote the mean performances and the curves are computed with Kernel Density Estimation (KDE). On average, DiVE improves the similarity metric over xGEM+ highlighting the importance of disentangled representations for identity preservation. Moreover, using information from the diagonal of the Fisher Information Matrix as described in Eq. 8 further improves the explanations as shown by the higher success rate of DiVEFisher over DiVE and DiVERandom. Thus, preventing the model from perturbing the most influential latent factors helps to uncover spurious correlations that affect the ML model. Finally, the proposed spectral clustering of the full Fisher Matrix attains the best performance validating that the latent space partition can guide the gradient-based search towards better explanations. We reach the same conclusions in Table 3, where we provide a comparison with PE for the attribute Young. 
In addition, we provide results for a version of xGEM+ with more disentangled latent factors (xGEM++). We find that disentangled representations provide the explainer with more precise control over the semantic concepts being perturbed, which increases the success rate of the explainer by 16%.

Out-of-distribution generalization. In the previous experiments, the generative model of DiVE was trained on the same data distribution (i.e., CelebA faces) as the ML model. We test the out-of-distribution performance of DiVE by training its auto-encoder on a subset of the latin alphabet of the Synbols dataset [31]. Then, counterfactual explanations are produced for a different, disjoint subset of the alphabet. To evaluate the effectiveness of DiVE in finding biases in the ML model, we introduce spurious correlations in the data. Concretely, we assign different fonts to each of the letters in the alphabet, as detailed in Appendix I. In-distribution (ID) results are reported in Figure 2b for reference, and OOD results are reported in Figure 2c. We observe that DiVE is able to find valuable counterfactuals even when the VAE was not trained on the same data distribution. Moreover, the results are consistent with the CelebA experiment, with DiVE outperforming xGEM+ and the Fisher information-based methods outperforming the rest.

1The code and pre-trained models of PE are only available for the attribute Young." }, { "heading": "5 LIMITATIONS AND FUTURE WORK", "text": "This work shows that a good generative model can provide interesting insights into the biases of a ML model. However, this relies on a properly disentangled representation. If the generative model were heavily entangled, it would fail to produce explanations with a sparse set of features. However, our approach can still tolerate a small amount of entanglement, yielding only a small decrease in interpretability. We expect that progress in identifiability [35, 28] will increase the quality of representations. Even with a perfectly disentangled model, our approach could still miss some explanations or biases. E.g., with the spectral clustering of the Fisher matrix, we group latent variables and only produce a single explanation per group in order to present explanations that are conceptually different. This may leave behind some important explanations, but the user can simply increase the number of clusters or the number of explanations per cluster for a more in-depth analysis.

In addition to the challenge of achieving disentangled representations, finding the optimal hyperparameters for the VAE, and ensuring that they generalize out of the training distribution, is an open problem. Moreover, if the generative model is trained on biased data, one could expect the counterfactuals to be biased as well. However, as we show in Figure 2c, our model still finds non-trivial explanations when applied out of distribution. In that way, it could be trained on a larger unlabeled dataset to overcome possible biases caused by the lack of annotated data.

Although the generative model plays an important role in producing actionable counterfactuals in the computer vision domain, our work could be extended to other domains. For example, Eq. 4 could be applied to find non-trivial explanations on tabular data by directly optimizing the observed features instead of the latent factors of the VAE; a sketch of this variant is given below. However, further work would be needed to adapt the DiVE loss functions to produce perturbations on discrete and categorical variables."
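As a rough illustration of that tabular variant, the following is a sketch under the assumption of continuous features and a differentiable classifier; the loop, step sizes, and weights are hypothetical, not the method's actual implementation.

```python
import torch

def tabular_counterfactuals(x, y_target, ml_model, n=5, steps=200,
                            lr=0.05, lam=0.1, alpha=0.1):
    """Optimize Eq. 4 directly over n perturbations of the observed
    features (no VAE). Discrete/categorical columns are not handled,
    as noted in the text above."""
    eps = (0.01 * torch.randn(n, x.numel())).requires_grad_()
    opt = torch.optim.Adam([eps], lr=lr)
    for _ in range(steps):
        x_tilde = x.unsqueeze(0) + eps
        p = ml_model(x_tilde).clamp(1e-6, 1 - 1e-6)
        l_cf = -(y_target * p.log() + (1 - y_target) * (1 - p).log()).sum()
        l_prox = eps.abs().sum()               # proximity and sparsity coincide here
        e = eps / eps.norm(dim=1, keepdim=True)
        sim = e @ e.t() - torch.eye(n)         # zero out the diagonal of cosines
        l_div = (sim ** 2).sum().sqrt()
        loss = l_cf + lam * l_prox + alpha * l_div
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x.unsqueeze(0) + eps).detach()
```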
}, { "heading": "A EXTENDED RELATED WORK", "text": "Counterfactual explanation lies inside a more broadly-connected body of work for explaining classifier decisions. Different lines of work share this goal but vary in the assumptions they make about what elements of the model and data to emphasize as way of explanation.\nModel-agnostic counterfactual explanation. Like [47, 37], these models make no assumptions about model structure, and interact solely with its label predictions. Karimi et al. [27] develop a model agnostic, as well as metric agnostic approach. They reduce the search for counterfactual explanations (along with user-provided constraints) into a series of satisfiability problems to be solved with off-the-shelf SAT solvers. Similar in spirit to [47], Guidotti et al. [16] first construct a local neighbourhood around test instances, finding both positive and negative exemplars within the neighbourhood. These are used to learn a shallow decision tree, and explanations are provided in terms of the inspection of its nodes and structure. Subsequent work builds on this local neighbourhood idea [17], but specializes to medical diagnostic images. They use a VAE to generate both positive and negative samples, then use random heuristic search to arrive at a balanced set. The generated explanatory samples are used to produce a saliency feature map for the test data point by considering the median absolute deviation of pixel-wise differences between the test point, and the positive and negative example sets.\nGradient based feature attribution. These methods identify input features responsible for the greatest change in the loss function, as measured by the magnitude of the gradient with respect to the inputs. Early work in this area focused on how methodological improvements for object detection in images could be re-purposed for feature attribution [59, 60], followed by work summarized gradient information in different ways [52, 54, 50]. Closer inspection identified pitfalls of gradientbased methods, including induced bias due to gradient saturation or network structure [1], as well as discontinuity due to activation functions [51]. These methods typically produce dense feature maps, which are difficult to interpret. In our work we address this by constraining the generative process of our counterfactual explanations.\nReference based feature attribution. These methods focus instead on measuring the differences observed by substituting observed input values with ones drawn from some reference distribution, and accumulating the effects of these changes as they are back-propagated to the input features. Shrikumar et al. [51] use a modified back-propagation approach to gracefully handle zero gradients and negative contributions, but leave the reference to be specified by the user. Fong & Vedaldi [11] propose three different heuristics for reference values: replacement with a constant, addition of noise, and blurring. Other recent efforts have focused on more complex proposals of the reference distribution. Chen et al. [5] construct a probabilistic model that acts as a lower bound on the mutual information between inputs and the predicted class, and choose zero values for regions deemed uninformative. Building on desiderata proposed by Dabkowski & Gal [7], Chang et al. [4] use a generative model to marginalize over latent values of relevant regions, drawing plausible values for each. 
These methods typically either do not identify changes that would alter a classifier decision, or do not consider the plausibility of those changes.

Counterfactual explanations. Rather than identify a set of features, counterfactual explanation methods instead generate perturbed versions of the observed data that result in a corresponding change in the model prediction. These methods usually assume more access to the model’s outputs and parameters, and construct a generative model of the data to find trajectories of variation that elucidate model behaviour for a given test instance.

Joshi et al. [26] propose a gradient-guided search in latent space (via a learned encoder model), where they progressively take gradient steps with respect to a regularized loss that combines a term for the plausibility of the generated data with the loss of the ML model. Denton et al. [9] use a Generative Adversarial Network (GAN) [14] for detecting bias present in multi-label datasets. They modify the generator to obtain latent codes for different data points and learn a linear decision boundary in the latent space for each class attribute. By sampling generated data points along the vector orthogonal to the decision boundary, they observe how crossing the boundary for one attribute causes undesired changes in others. Some counterfactual estimation methods forego a generative model by instead solving a surrogate editing problem. Given an original image (with some predicted class) and an image with a desired class prediction, Goyal et al. [15] produce a counterfactual explanation through a series of edits to the original image by value substitutions in the learned representations of both images. Similar in spirit are Dhurandhar et al. [10] and Van Looveren & Klaise [57]. The former propose a search over features to highlight subsets of those present in each test data point that are typically present in the assigned class, as well as features usually absent in examples from adjacent classes (instances of which are easily confused with the label the model predicted for the test point). The latter generate counterfactual data that are proximal to the test point, have a sparse set of changes, and are close to the training distribution. Their innovation is to use class prototypes as an additional regularization term in the optimization problem whose solution produces a counterfactual.

Several methods go beyond providing counterfactually generated data for explaining model decisions, by additionally quantifying the effect of the proposed changes between a test data point and each counterfactual. Mothilal et al. [38] focus on tabular data and generate sets of counterfactual explanations through iterative gradient-based improvement, measuring the cost of each counterfactual by either the distance in feature space or the sparsity of the set of changes (while also allowing domain expertise to be applied). Poyiadzi et al. [44] construct a weighted graph between each pair of data points and identify counterfactuals (within the training data) by finding the shortest paths from a test data point to data points with opposing classes. Pawelczyk et al. [42] focus on modelling the density of the data to provide ‘attainable’ counterfactuals, defined to be proximal to test data points yet not lying in low-density sub-spaces of the data. They further propose to weigh each counterfactual by the changes in percentiles of the cumulative distribution function for each feature, relative to the value of a test data point."
}, { "heading": "B QUALITATIVE RESULTS", "text": "In Figure 5 we show qualitative results obtained by targeting different probability ranges for the output of the ML model as described in PE. Note that PE directly optimizes the generative model to take an input variable δ ∈ R that defines the desired output probability ỹ = f(x) + δ. To obtain explanations at different probability targets, we train a second order spline on the trajectory of perturbations produced during the gradient descent steps of our method. Thus, given the set of perturbations { t}, ∀t ∈ 1..τ , obtained during τ gradient steps, and the corresponding blackbox outputs {f(y| t)}, the spline obtains the ỹ for a target output ỹ by interpolation. As seen in Figure 5, DiVE produces more natural-looking facial expressions than xGEM+ and PE. Although DiVE is not explicitly trained to produce exemplars at intermediate target probabilities, our explanations are more correlated with the target probabilities than PE. Additional results for “Smiling” and “Young” are provided in Figure 3,4.\nFigure 3,4 present counterfactual explanations for additional persons and attributes. The results show that DiVE achieves higher quality reconstructions compared to other methods. Further, the reconstructions made by DiVE are more correlated with the desired target for the ML model output f(x). In Figure 5, we provide samples generated by our method with a gender-biased classifier fbiased and an unbiased classifier funbiased. We compare DiVE to PE and xGEM+. We found that gender changes with the “Smiling” attribute with fbiased while for funbiased it stayed the same. In addition, we also observed that for fbiased the correlation between “Smile” and “Gender” is higher than for PE. It can also be observed that xGEM+ fails to retain the identity of the person in x when compared to PE and our method. Finally, Figure 6 shows successful counterfactuals for different instantiations of DiVE.\n0.96 0.03 0.2 0.3 0.5 0.6 0.7 0.96\n0.16 0.07 0.21 0.3 0.5 0.6 0.7 0.96\n17\n0.0 0.1 0.2 0.3 0.4 0.6 0.7 0.96\n0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.03\nPE\nOurs\nN o-\nB ia\nse d\nD\niV E\nw /o\noriginal\nD iV E D iV E-\nF D iV EFS\nD\niV E\nw /o\nD\niV E\nD iV\nEF\nD iV\nEFS\nFigure 6: Successful counterfactual generations for different instantions of DiVE. Here, the original image was misclassified as non-smiling. All methodologies were able to correctly add a smile to the woman.\nC IDENTITY PRESERVATION\nAs argued, valuable explanations should remain proximal to the original image. Accordingly, performed the identity preservation experiment found in [53] to benchmark the methodologies against each other. Specifically, use the VGGFace2-based [3] oracle to extract latent codes for the original images as well as for the explanations and report latent space closeness as the fraction of time the explanations’ latent codes are the closest to their respective original image latent codes’ compared to the explanations on different original images. Further, we report face verification accuracy which consist of the fraction of time the cosine distance between the aforementioned latent codes is below 0.5.\nTable 4 presents both metrics for DiVE and its baselines on the ”Smilling” and ”Young” classification tasks. 
We find that DiVE outperforms all the other methods on the “Young” classification task and almost all of them on the “Smiling” task.

                             CelebA:Smiling                     CelebA:Young
                             xGEM   PE     xGEM+  DiVE (ours)   xGEM   PE     xGEM+  DiVE (ours)
Latent Space Closeness       88.2   88.0   99.8   98.7          89.5   81.6   97.5   99.1
Face Verification Accuracy    0.0   85.3   91.2   97.3           0.0   72.2   97.4   98.2

Table 4: Identity-preserving performance on the two prediction tasks.

D IMPLEMENTATION DETAILS

In this section, we provide the details needed to ensure that our method is reproducible.

Architecture details. DiVE’s architecture is a variation of BigGAN [2], as shown in Table 6. We chose this architecture because it achieved impressive FID results on ImageNet [8]. The decoder (Table 6b) is a simplified version of the 128 × 128 BigGAN residual generator, without non-local blocks or feature concatenation. We use InstanceNorm [56] instead of BatchNorm [23] to obtain consistent outputs at inference time without the need for an additional mechanism such as recomputing statistics [2]. All the InstanceNorm operations of the decoder are conditioned on the input code z in the same way as FiLM layers [43]. The encoder (Table 6a) follows the same structure as the BigGAN 128 × 128 discriminator, with the same simplifications as in our generator. We use the Swish non-linearity [46] in all layers except for the output of the decoder, which uses a Tanh activation.

For all experiments we use a latent feature space of 128 dimensions. The ELBO has a natural, principled way of selecting the dimensionality of the latent representation: if d is larger than necessary, the extra dimensions will not improve the reconstruction error, and the optimization of the ELBO will make the posterior equal to the prior for those dimensions. More can be found on this topic in [36]. In practice, we experimented with d = {64, 128, 256} and found that with d = 128 we achieved a slightly lower ELBO.

To project the 2d features produced by the encoder to a flat vector (µ, log(σ²)), and to project the sampled codes z to a 2d space for the decoder, we use 3-layer MLPs. For the face attribute classifiers, we use the same DenseNet [21] architecture as described in Progressive Exaggeration [53].

Optimization details. All the models are optimized with Adam [29] with a batch size of 256. During the training step, the auto-encoders are optimized for 400 epochs with a learning rate of 4 · 10−4. The classifiers are optimized for 100 epochs with a learning rate of 10−4. To prevent the auto-encoders from suffering from KL vanishing, we adopt the cyclical annealing schedule proposed by Fu et al. [12] on the third term of Equation 2.

Counterfactual inference details. At inference time, the perturbations are optimized with Adam until the ML model output for the generated explanation f(x̃) differs from the target output ỹ by at most a margin δ, or until a maximum number of iterations τ is reached. We set τ = 20 for all the experiments, since more than 90% of the counterfactuals are found within that many iterations. The different $\epsilon_i$ are initialized by sampling from a normal distribution $\mathcal{N}(0, 0.01)$. For the DiVEFisher baseline, to identify the most valuable explanations, we sort the dimensions of $\epsilon$ by the magnitude of $f = \mathrm{diag}(F)$. Then, we divide the dimensions of the sorted $\epsilon$ into N contiguous partitions of size $k = D/N$, where D is the dimensionality of $\mathcal{Z}$. 
Formally, let ε(f) be ε sorted by f; then ε(f) is constrained as follows:\nε(f)_{i,j} = { 0, if j ∈ [(i − 1) · k, i · k]; ε(f)_{i,j}, otherwise }, (9)\nwhere i ∈ 1..N indexes each of the multiple ε, and j ∈ 1..D indexes the dimensions of ε. As a result we obtain partitions with different orders of complexity. Masking the first partition results in explanations that are most implicit within the model and the data. On the other hand, masking the last partition results in explanations that are more explicit.\nTo compare with Singla et al. [53] in Figures 3-5, we produced counterfactuals at arbitrary target values ỹ of the output of the ML model classifier. One way to achieve this would be to optimize LCF for each of the target probabilities. However, these successive optimizations would slow down the process of counterfactual generation. Instead, we propose to directly maximize the target class probability and then interpolate between the points obtained in the gradient descent trajectory to obtain the latent factors of the different target probabilities. Thus, given the set of perturbations {ε_t}, ∀t ∈ 1..τ, obtained during τ gradient steps, and the corresponding ML model outputs {f(y|ε_t)}, we obtain the ε_ỹ for a target output ỹ by interpolation. We do such interpolation by fitting a piecewise quadratic polynomial on the latent trajectory, commonly known as a spline in the computer graphics literature." }, { "heading": "E BEYOND TRIVIAL EXPLANATIONS EXPERIMENTAL SETUP", "text": "The experimental benchmark proposed in Section 4.4 is performed on a subset of the validation set of CelebA. This subset is composed of 4 images for each CelebA attribute. From these 4 images, 2 were correctly classified by the ML model, while the other 2 were misclassified. The two correctly classified images are chosen so that one was classified with a high confidence of 0.9 and the other one with a low confidence of 0.1. The 2 misclassifications were chosen with the same criterion. The total size of the dataset is 320 images. For each of these images we generate k counterfactual explanations. From these counterfactuals, we report the ratio of successful explanations.\nHere are the specific values we tried in our hyperparameter search: γ ∈ [0.0, 0.001, 0.1, 1.0], α ∈ [0.0, 0.001, 0.1, 1.0], λ ∈ [0.0001, 0.0005, 0.001], number of explanations 2 to 15, and learning rate ∈ [0.05, 0.1]. Because xGEM+ has neither a γ nor an α parameter, we increased its learning rate span to [0.01, 0.05, 0.1] to reduce the gap in its search space compared with DiVE. We also changed the random seeds and ran a total of 256 trials." }, { "heading": "F HUMAN EVALUATION", "text": "We built a web-based human evaluation task to assess whether DiVE is more successful at finding non-trivial counterfactuals than the previous state of the art, and to assess the effectiveness of the VGG-based oracle; see Figure 7. For that, we present humans with valid counterfactuals and ask them whether the main attribute being classified by the ML model is present in the image or not. We use a subset of CelebA containing a random sample of 4 images per attribute, each one classified by the VGGFace oracle as containing the attribute with the following levels of confidence: [0.1, 0.4, 0.6, 0.9]. From each of these 160 images, we generated counterfactuals with xGEM+ [26], DiVE, DiVERandom, DiVEFisher, and DiVEFisherSpectral and show the valid counterfactuals to the human annotators. Results are reported in Table 5. 
In the left column we observe that leveraging the Fisher information results in finding more non-trivial counterfactuals, which confuse the ML model without changing the main attribute being classified. In the second column we report the Pearson correlation between the oracle and the classifier predictions. A statistical inference test reveals a significant correlation (p-value ≤ 0.02)." }, { "heading": "G MODEL ARCHITECTURE", "text": "Table 6 presents the architecture of the encoder and decoder used in DiVE.\nTable 6: DiVE architecture for 128×128 images. ch represents the channel width multiplier in each network.\n(a) Encoder:\nRGB image x ∈ R^{128×128×3}\nResBlock down 3ch → 16ch\nResBlock 16ch → 32ch\nResBlock down 32ch → 32ch\nResBlock 32ch → 64ch\nResBlock down 64ch → 64ch\nResBlock 64ch → 128ch\nResBlock down 128ch → 128ch\nResBlock 128ch → 128ch\nResBlock down 128ch → 128ch\nIN, Swish, Linear 128ch × 4 × 4 → 128ch\nIN, Swish, Linear 128ch → 128ch\nIN, Swish, Linear 128ch → 128ch × 2\nz ∼ N(µ ∈ R^128, σ ∈ R^128)\n(b) Decoder:\nz ∈ R^128\nLinear 128ch → 128ch\nLinear 128ch → 128ch\nLinear 128ch → 128ch × 4 × 4\nResBlock up 128ch → 64ch\nResBlock up 64ch → 32ch\nResBlock 32ch → 16ch\nResBlock up 16ch → 16ch\nResBlock 16ch → 16ch\nResBlock up 16ch → 16ch\nResBlock 16ch → 16ch\nIN, Swish, Conv 16ch → 3\ntanh" }, { "heading": "H MODEL ALGORITHM", "text": "Algorithm 1 presents the steps needed for DiVE to generate explanations for a given ML model using a sample input image.\nAlgorithm 1: Generating Explanations\nInput: Sample image x, ML model f(·)\nOutput: Generated Counterfactuals x̃\n1  // Initialize the perturbations matrix parameter of size n × d\n2  Σ ← randn(µ = 0, σ = 0.01)\n3  // Get the original output from the ML model\n4  y ← f(x)\n5  // Extract the latent features of the original input\n6  z ← qφ(x)\n7  // Obtain Fisher information on z\n8  fz ← F(z)\n9  // Obtain k partitions using spectral clustering\n10 P ← SpectralClustering(fz)\n11 // Initialize counter\n12 i ← 0\n13 while i < τ do\n14   for each ε, p ∈ (Σ, P) do\n15     // Perturb the latent features\n16     x̃ ← pθ(z + ε)\n17     // Pass the perturbed image through the ML model\n18     ŷ ← f(x̃)\n19     // Compute the counterfactual loss\n20     L ← compute Eq. 4\n21     // Update ε while masking a subset of the gradients\n22     ε ← ε + (∂L/∂ε) · p\n23   end\n24   // Update counter\n25   i ← i + 1\n26 end" }, { "heading": "I OUT-OF-DISTRIBUTION EXPERIMENT", "text": "We test the out-of-distribution (OOD) performance of DiVE with the Synbols dataset [31]. Synbols is an image generator with characters from the Unicode standard and the wide range of artistic fonts provided by the open font community. This gives us better control over the features present in each set compared to CelebA. We generate 100K black-and-white 32×32 images from 48 characters in the Latin alphabet and more than 1K fonts (Figure 8). In order to train the VAE on a disjoint character set, we randomly select the following 32 training characters: {a, b, d, e, f, g, i, j, l, m, n, p, q, r, t, y, z, à, á, ã, å, è, é, ê, ë, ı̂, ñ, ò, ö, ù, ú, û}. Counterfactuals are then generated for the remaining 16 characters: {c, h, k, o, s, u, v, w, x, â, ı̀, ı́, ı̈, ó, ô, ü}.\nDiVE's objective is to discover biases in the ML model and the data. Thus, we use the font attribute to bias each of the characters toward small disjoint subsets of fonts. Font subsets are chosen so that they are visually similar. To assess their similarity, we train a ResNet12 [40] to classify the fonts of the 100K images and calculate similarity in embedding space; a minimal sketch of this clustering step follows. 
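Below is an illustrative sketch of this step (ours, not the released code): font prototypes are built by averaging classifier embeddings per font and then clustered. The arrays `embeddings` and `font_ids` are assumed to come from the trained ResNet12 font classifier.

```python
# Sketch: group fonts by visual similarity in the classifier's embedding space.
import numpy as np
from sklearn.cluster import KMeans

def cluster_fonts(embeddings, font_ids, n_clusters=16, seed=0):
    """embeddings: (n_images, d) penultimate-layer features of the font
    classifier; font_ids: (n_images,) integer font labels."""
    fonts = np.unique(font_ids)
    # One prototype per font: the mean embedding over its images.
    prototypes = np.stack([embeddings[font_ids == f].mean(axis=0)
                           for f in fonts])
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(prototypes)
    # Map each cluster to the visually similar fonts it contains.
    return {c: fonts[km.labels_ == c].tolist() for c in range(n_clusters)}
```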
Concretely, we use K-Means to obtain 16 clusters, which are associated with each of the 16 characters used for counterfactual generation. The font assignments are reported in Table 7. Results for four different random counterfactuals are displayed in Figure 9. DiVEFisherSpectral successfully confuses the ML model without changing the oracle prediction, revealing biases of the ML model." }, { "heading": "J DETAILS ON THE BIAS DETECTION METRIC", "text": "In Table 1, we follow the procedure first developed in [26] and adapted in [53], and report a confounding metric for bias detection. Namely, the “Male” and “Female” columns report the accuracy of the oracle on those classes, conditioned on the target label of the original image. For example, we can see that, among the explanations generated for the biased classifier, most methods generated a higher number of non-smiling females and smiling males, which was expected. The confounding metric, denoted as overall, is the fraction of generated explanations for which the gender was changed with respect to the original image. It thus reflects the magnitude of the bias as approximated by the explainers. Singla et al. [53] consider that a model is better than another if the confounding metric is the highest on fbiased and the lowest on funbiased.\nThis is however not entirely true. There is no guarantee that fbiased will perfectly latch onto the spurious correlation. In that case, an explainer's ratio could potentially be too high, which would reflect an overestimation of the bias. We thus need a way to quantify the gender bias in each model. To do so, we look at the difference between the classifier's accuracy on “Smiling” when the image is of a “Male” versus a “Female”. Intuitively, the magnitude of this difference approximates how much the classifier latched onto the “Male” attribute to make its smiling predictions. We compute the same metric for the non-smiling case. We average both of them, which we refer to as ground truth in Table 1. As expected, this value is high for fbiased and low for funbiased. Formally, the ground truth is computed as\nE_{a∼p(a)} [ E_{x,y∼p(x,y|a)} [ |1[y = f(x) | a = a1] − 1[y = f(x) | a = a2]| ] ], (10)\nwhere a represents the attribute, in this case the gender. A minimal sketch of this computation is given below." } ]
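As a concrete companion to Eq. (10), here is a minimal sketch (ours, following the textual description above rather than any released code) of the ground-truth bias score: the gap in classifier accuracy between the two attribute values, averaged over the two label values.

```python
# Sketch of the "ground truth" bias score, assuming numpy arrays of binary
# labels y, predictions y_pred, and a binary attribute attr (e.g. gender).
import numpy as np

def ground_truth_bias(y, y_pred, attr):
    gaps = []
    for label in (0, 1):  # the non-smiling and smiling cases
        m = (y == label)
        acc_a1 = np.mean(y_pred[m & (attr == 1)] == y[m & (attr == 1)])
        acc_a2 = np.mean(y_pred[m & (attr == 0)] == y[m & (attr == 0)])
        gaps.append(abs(acc_a1 - acc_a2))
    return float(np.mean(gaps))  # high for a biased classifier, low otherwise
```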
2,020
BEYOND TRIVIAL COUNTERFACTUAL GENERATIONS
SP:366b8c3549160787f24e8e585953ed99ecdb0aa2
[ "This paper considers a regression setting in which the missing values are observed with lower values than the true values. Authors provided appealing application for this problem setting. They rewrote the risk and provided an unbiased gradient estimator. However, there is a gap between the estimator and the actual implementation, thus making the overall paper less convincible." ]
We address a regression problem from weakly labeled data that are correctly labeled only above a regression line, i.e., upper one-side labeled data. The label values of the data are the results of sensing the magnitude of some phenomenon. In this case, the labels often contain missing or incomplete observations whose values are lower than those of correct observations and are also usually lower than the regression line. It follows that data labeled with lower values than the estimations of a regression function (lower-side data) are mixed with data that should originally be labeled above the regression line (upper-side data). When such missing label observations are observed in a non-negligible amount, we thus should assume our lower-side data to be unlabeled data that are a mix of original upper- and lower-side data. We formulate a regression problem from these upper-side labeled and lower-side unlabeled data. We then derive a learning algorithm in an unbiased and consistent manner to ordinary regression that is learned from data labeled correctly in both upper- and lower-side cases. Our key idea is that we can derive a gradient that requires only upper-side data and unlabeled data as the equivalent expression of that for ordinary regression. We additionally found that a specific class of losses enables us to learn unbiased solutions practically. In numerical experiments on synthetic and real-world datasets, we demonstrate the advantages of our algorithm.
[]
[ { "authors": [ "Nontawat Charoenphakdee", "Masashi Sugiyama" ], "title": "Positive-unlabeled classification under class prior shift and asymmetric error", "venue": "In SDM,", "year": 2019 }, { "authors": [ "Xuxi Chen", "Wuyang Chen", "Tianlong Chen", "Ye Yuan", "Chen Gong", "Kewei Chen", "Zhangyang Wang" ], "title": "Self-PU: Self boosted and calibrated positive-unlabeled training", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Kai Lai Chung" ], "title": "A course in probability theory", "venue": "Academic press,", "year": 1968 }, { "authors": [ "Roger J Cole", "Daniel F Kripke", "William Gruen", "Daniel J Mullaney", "J Christian Gillin" ], "title": "Automatic sleep/wake identification from wrist activity", "venue": null, "year": 1992 }, { "authors": [ "Francesco De Comité", "François Denis", "Rémi Gilleron", "Fabien Letouzey" ], "title": "Positive and unlabeled examples help learning", "venue": "In ALT,", "year": 1999 }, { "authors": [ "François Denis" ], "title": "Pac learning from positive statistical queries", "venue": "In ALT, pp", "year": 1998 }, { "authors": [ "Norman R Draper", "Harry Smith" ], "title": "Applied regression analysis, volume 326", "venue": null, "year": 1998 }, { "authors": [ "Marthinus Du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Convex formulation for learning from positive and unlabeled data", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Marthinus C Du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Analysis of learning from positive and unlabeled data", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "John Duchi", "Yoram Singer" ], "title": "Efficient online and batch learning using forward backward splitting", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Tianyu Guo", "Chang Xu", "Jiajun Huang", "Yunhe Wang", "Boxin Shi", "Chao Xu", "Dacheng Tao" ], "title": "On positive-unlabeled classification in GAN", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Peter J Huber" ], "title": "Robust estimation of a location parameter", "venue": "The Annals of Mathematical Statistics,", "year": 1964 }, { "authors": [ "Masahiro Kato", "Takeshi Teshima", "Junya Honda" ], "title": "Learning from positive and unlabeled data with a selection bias", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Roger Koenker", "Gilbert Bassett Jr." ], "title": "Regression quantiles", "venue": "Econometrica: journal of the Econometric Society,", "year": 1978 }, { "authors": [ "Fabien Letouzey", "François Denis", "Rémi Gilleron" ], "title": "Learning from positive and unlabeled examples", "venue": "In ALT, pp", "year": 2000 }, { "authors": [ "Tianyu Li", "Chien-Chih Wang", "Yukun Ma", "Patricia Ortal", "Qifang Zhao", "Bjorn Stenger", "Yu Hirate" ], "title": "Learning classifiers on positive and unlabeled data with policy gradient", "venue": "In ICDM,", "year": 2019 }, { "authors": [ "DJ Mullaney", "DF Kripke" ], "title": "Messin. Wrist-actigraphic estimation of sleep", "venue": "time. 
Sleep,", "year": 1980 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In ICML, pp", "year": 2010 }, { "authors": [ "Subhash C Narula", "John F Wellington" ], "title": "The minimum sum of absolute errors regression: A state of the art survey", "venue": "International Statistical Review/Revue Internationale de Statistique,", "year": 1982 }, { "authors": [ "Tomoya Sakai", "Nobuyuki Shimizu" ], "title": "Covariate shift adaptation on learning from positive and unlabeled data", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Hong Shi", "Shaojun Pan", "Jian Yang", "Chen Gong" ], "title": "Positive and unlabeled learning via loss decomposition and centroid estimation", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Warren W Tryon" ], "title": "Activity measurement in psychology and medicine", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 1995 }, { "authors": [ "Eduardo Velloso", "Andreas Bulling", "Hans Gellersen", "Wallace Ugulino", "Hugo Fuks" ], "title": "Qualitative activity recognition of weight lifting exercises", "venue": "In AH, pp", "year": 2013 }, { "authors": [ "John B Webster", "Daniel F Kripke", "Sam Messin", "Daniel J Mullaney", "Grant Wyborney" ], "title": "An activity-based sleep monitor system for ambulatory use", "venue": null, "year": 1982 }, { "authors": [ "Rand R Wilcox" ], "title": "Introduction to robust estimation and hypothesis testing", "venue": null, "year": 1997 }, { "authors": [ "Yixing Xu", "Yunhe Wang", "Hanting Chen", "Kai Han", "XU Chunjing", "Dacheng Tao", "Chang Xu" ], "title": "Positive-unlabeled compression on the cloud", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Chenguang Zhang", "Yuexian Hou", "Yan Zhang" ], "title": "Learning from positive and unlabeled data without explicit estimation of class prior", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Chuang Zhang", "Dexin Ren", "Tongliang Liu", "Jian Yang", "Chen Gong" ], "title": "Positive and unlabeled learning with label disambiguation", "venue": "In IJCAI,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "This paper addresses a scenario in which a regression function is learned for label sensor values that are the results of sensing the magnitude of some phenomenon. A lower sensor value means not only a relatively lower magnitude than a higher value but also a missing or incomplete observation of a monitored phenomenon. Label sensor values for missing observations are lower than those for when observations are correct without missing observations and are also usually lower than an optimal regression line that is learned from the correct observations. A naive regression algorithm using such labels causes the results of prediction to be low and is thus biased and underfitted in comparison with the optimal regression line.\nIn particular, when the data coverage of a label sensor is insufficient, the effect of missing observations causing there to be bias is critical. One practical example is that, for comfort in healthcare, we mimic and replace an intrusive wrist sensor (label sensor) with non-intrusive bed sensors (explanatory sensors). We learn a regression function that predicts the values of the wrist sensor from values of the bed sensors. The wrist sensor is wrapped around a wrist. It accurately represents the motion intensity of a person and is used such as for sleep-wake discrimination Tryon (2013); Mullaney et al. (1980); Webster et al. (1982); Cole et al. (1992). However, it can sense motion only on the forearm, which causes data coverage to be insufficient and observations of movements on other body parts to be missing frequently. The bed sensors are installed under a bed; while their accuracy is limited because of their non-intrusiveness, they have much broader data coverage than that of the wrist sensor. In this case, the wrist sensor values for missing observations are improperly low and also inconsistent with the bed sensor values as shown in Fig. 1-(1). This leads to severe bias and underfitting.\nThe specific problem causing the bias stems from the fact that our data labeled with lower values than the estimations of the regression function are mixed with data that should be originally labeled above the regression line. Here, we call data labeled above the regression line upper-side data, depicted as circles in Fig. 1-(2), and data labeled below the regression line lower-side data, depicted as squares in Fig. 1-(2). When there are missing observations, that is, our scenario, it means that the original data with missing observations have been moved to the lower side, depicted as triangles in Fig. 1-(3). We\ncannot determine which data have been moved by just examining the label values. It follows that our lower-side data are mixed with the original upper- and lower-side data.\nWe thus should assume our lower-side data to be unlabeled data, that is, a mix of original upperand lower-side data. We overcome the bias by handling this asymmetric label corruption, in which upper-side data are correctly labeled but lower-side data are always unlabeled. There is an established approach against such corrupted weak labels in regression, that is, robust regression that regards weak labels as containing outliers Huber et al. (1964); Narula & Wellington (1982); Draper & Smith (1998); Wilcox (1997). However, since not asymmetric but rather symmetric label corruption is assumed there, it is still biased in our problem setting. 
In the classification problem setting, asymmetric label corruption is addressed with positive-unlabeled (PU) learning, where it is assumed that negative data cannot be obtained but unlabeled data are available as well as positive data Denis (1998); De Comité et al. (1999); Letouzey et al. (2000); Shi et al. (2018); Kato et al. (2019); Sakai & Shimizu (2019); Charoenphakdee & Sugiyama (2019); Li et al. (2019); Zhang et al. (2019); Xu et al. (2019); Zhang et al. (2020); Guo et al. (2020); Chen et al. (2020). The focus is on classification tasks, and an unbiased risk estimator has been proposed Du Plessis et al. (2014; 2015). There is a gap between the classification problem setting and our regression problem setting, i.e., we have to estimate specific continuous values, not positive/negative classes. We fill the gap with a novel approach for deriving an unbiased solution for our regression setting.\nIn this paper, we formulate a regression problem from upper one-side labeled data, in which the upper-side data are correctly labeled, and we regard lower-side data as unlabeled data. We refer to this as one-side regression. Using these upper-side labeled and lower-side unlabeled data, we derive a learning algorithm in an unbiased and consistent manner to ordinary regression that uses data labeled correctly in both upper- and lower-side cases. This is achieved by deriving our gradient, which requires only upper-side data and unlabeled data, as an asymptotically equivalent expression of that for ordinary regression. This is a key difference from the derivation of unbiased PU classification, where the loss has been used. We additionally found that a specific class of losses enables us to learn an unbiased solution in practice. For implementing the algorithm, we propose a stochastic optimization method. In numerical experiments using synthetic and real-world datasets, we empirically evaluated the effectiveness of the proposed algorithm. We found that it improves performance against regression algorithms that assume that both upper- and lower-side data are correctly labeled." }, { "heading": "2 ONE-SIDE REGRESSION", "text": "Our goal is to derive a learning algorithm with upper one-side labeled data in an unbiased and consistent manner to ordinary regression that uses both upper- and lower-side labeled data. We first consider the ordinary regression problem; after that, we formulate a one-side regression problem by transforming the objective function of the ordinary one." }, { "heading": "2.1 ORDINARY REGRESSION PROBLEM", "text": "Let x ∈ R^D (D ∈ N) be a D-dimensional explanatory variable and y ∈ R be a real-valued label. We learn a regression function f(x) that computes the value of an estimation of a label, ŷ, for a newly observed x as ŷ = f(x). The optimal regression function f∗ is given by\nf∗ ≡ argmin_f L(f), (1)\nwhere L(f) is the expected loss when the regression function f(x) is applied to data, x and y, distributed in accordance with an underlying probability distribution p(x, y):\nL(f) ≡ E[L(f(x), y)], (2)\nwhere E denotes the expectation over p(x, y), and L(f(x), y) is the loss function between f(x) and y, e.g., the squared loss, L(f(x), y) = ‖f(x) − y‖_2^2. 
L(f) can be written by using the decomposed expectations Eup when labels are higher than estimations of the regression function (f(x) < y, upper-side case) and Elo when labels are lower than the estimations of the regression function (y < f(x), lower-side case) as\nL(f) = πup Eup[L(f(x), y)] + πlo Elo[L(f(x), y)], (3)\nwhere πup and πlo are the ratios for upper- and lower-side cases, respectively.\nNote that the decomposition in Eq. (3) holds for any f including f∗, and we omitted the decomposed expectation when y = f(x) because it is always zero." }, { "heading": "2.2 ONE-SIDE REGRESSION PROBLEM", "text": "We here consider a scenario in which we have training data, D ≡ {xn, yn}_{n=1}^N, that are correctly labeled only in the upper-side case because of the existence of missing label observations. The data in the lower-side case are a mix of original upper- and lower-side data and are considered to be unlabeled data. We can divide D by estimations of the regression function f into upper-side data {Xup, yup} ≡ {x, y ∈ D | f(x) < y} and unlabeled data Xun ≡ {x ∈ D | y < f(x)}. In the ordinary regression, where both upper- and lower-side data are correctly labeled for training, expectations Eup and Elo in Eq. (3) can be estimated by using the corresponding sample averages. In our setting, however, correctly labeled data from the lower-side case are unavailable, and, therefore, Elo cannot be estimated directly.\nWe can avoid this problem by expressing L(f) as\nL̃(f) ≡ πup Eup[L(f(x), y)] + E[L(f(x), ỹlo)] − πup Eup[L(f(x), ỹlo)], (4)\nwhere expectation E for x can be estimated by computing a sample average for our unlabeled data Xun, and ỹlo is a virtual label that is always lower than the estimations of the regression function f(x), whose details will be given in the next paragraph. For this expression, the expected loss L̃(f) is represented by only the expectations over the upper-side data and unlabeled data, Eup and E. Thus, we can design a gradient-based learning algorithm by using our training data. This transformation comes from Eqs. (2) and (3) with ỹlo as\nE[L(f(x), ỹlo)] = πup Eup[L(f(x), ỹlo)] + πlo Elo[L(f(x), ỹlo)],\nπlo Elo[L(f(x), ỹlo)] = E[L(f(x), ỹlo)] − πup Eup[L(f(x), ỹlo)]. (5)\nIn practice, we cannot properly set the value of ỹlo as being always lower than f(x). However, for learning based on gradients, this is not needed when we set the loss function as losses whose gradients do not depend on the value of ỹlo but just on the sign of f(x) − ỹlo, which is always positive and sgn(f(x) − ỹlo) = 1 from the definition of ỹlo, i.e., the loss functions satisfy\n∂L(f(x), y)/∂θ = g(sgn(f(x) − y), f(x)), (6)\nwhere θ is the parameter vector of f, g(sgn(f(x) − y), f(x)) is a gradient function depending on sgn(f(x) − y) and f(x), and sgn(•) is a sign function. Common such losses are absolute loss and quantile losses. For example, the gradient of absolute loss, |f(x) − y|, is\n∂|f(x) − y|/∂θ =\n∂f(x)/∂θ, if sgn(f(x) − y) = 1,\n−∂f(x)/∂θ, if sgn(f(x) − y) = −1,\nundefined, if sgn(f(x) − y) = 0, (7)\nwhich does not depend on the value of y but just on the sign of f(x) − y." }, { "heading": "3 LEARNING WITH GRADIENT USING UPPER ONE-SIDE LABELED DATA", "text": "In this section, we derive the learning algorithm based on Eqs. (1) and (4) and show that it is unbiased to and consistent with ordinary regression. We consider the gradient of Eq. (4) by using losses that satisfy
Eq. (6) for its second and third terms as\n∂L̃(f)/∂θ = πup Eup[∂L(f(x), y)/∂θ] + E[g(sgn(f(x) − ỹlo), f(x))] − πup Eup[g(sgn(f(x) − ỹlo), f(x))]. (8)\nUsing upper-side and unlabeled sample sets, {Xup, yup} and Xun, the gradient in Eq. (8) can be estimated as\n∂L̃(f)/∂θ = (πup/nup) Σ_{ {x,y} ∈ {Xup, yup} } ∂L(f(x), y)/∂θ + (1/nun) Σ_{ x ∈ Xun } g(sgn(f(x) − ỹlo), f(x)) − (πup/nup) Σ_{ x ∈ Xup } g(sgn(f(x) − ỹlo), f(x)), (9)\nwhere {x, y} ∈ {Xup, yup} represent coupled pairs of x and y in the upper-side sample set, and nup and nun are the numbers of samples in the upper-side and unlabeled sets, respectively.\nBy using the gradient in Eq. (9), we can optimize Eq. (1) and learn the regression function. Its unbiasedness and consistency will be given in Section 3.1, and the specific implementation of the algorithm will be given in Section 3.2." }, { "heading": "3.1 UNBIASEDNESS AND CONSISTENCY OF GRADIENT", "text": "Our learning algorithm based on the gradient in Eq. (9), which uses only upper-side data and unlabeled data, is justified as follows. Theorem 1. Suppose that the loss function L for the second term in Eq. (3) satisfies Eq. (6). Then, for any f, the gradient in Eq. (8) and its empirical approximation in Eq. (9) are unbiased to and consistent with the gradient of L(f) in Eq. (3).\nIn other words, learning based on the gradient of Eq. (9), which uses only upper-side data and unlabeled data (one-side regression), asymptotically produces the same result as learning based on the gradient of L(f) in Eq. (3), which uses both upper- and lower-side data (ordinary regression).\nProof. First, by substituting Eq. (5) into the second and third terms in Eq. (8),\n∂L̃(f)/∂θ = πup Eup[∂L(f(x), y)/∂θ] + πlo Elo[g(sgn(f(x) − ỹlo), f(x))]. (10)\nThen, from the definitions of ỹlo and Elo, in both of which y is always y < f(x),\nElo[g(sgn(f(x) − ỹlo), f(x))] = Elo[g(1, f(x))] = Elo[g(sgn(f(x) − y), f(x))], (11)\nand, thus, the gradient (10) is essentially the same as the following gradient of the loss L(f) in Eq. (3) for ordinary regression when we set the loss function for the second term in Eq. (3) as losses that satisfy Eq. (6),\n∂L(f)/∂θ = πup Eup[∂L(f(x), y)/∂θ] + πlo Elo[g(sgn(f(x) − y), f(x))]. (12)\nThe gradient in Eq. (9) is also unbiased to and consistent with the gradient in Eq. (12), and its convergence rate is of the order Op(1/√nup + 1/√nun) in accordance with the central limit theorem Chung (1968), where Op denotes the order in probability." }, { "heading": "3.2 IMPLEMENTATION OF LEARNING ALGORITHM BASED ON STOCHASTIC OPTIMIZATION", "text": "We scale our algorithm based on Eq. (9) up by stochastic approximation with M mini-batches and add a regularization term, R(f):\n∂L̃(f)/∂θ = Σ_{m=1}^{M} [ Σ_{ {x,y} ∈ {Xup^{(m)}, yup^{(m)}} } ∂L(f(x), y)/∂θ + ρ Σ_{ x ∈ Xun^{(m)} } g(1, f(x)) − Σ_{ x ∈ Xup^{(m)} } g(1, f(x)) ] + λ ∂R(f)/∂θ, (13)\nwhere {Xup^{(m)}, yup^{(m)}} and Xun^{(m)} are the upper-side and unlabeled sets in the m-th mini-batch, respectively, λ is a regularization parameter, and the regularization term R(f) is, for example, the L1 or L2 norm of the parameters θ. We also absorb nup/(πup nun) into ρ, ignoring constant coefficients, and apply sgn(f(x) − ỹlo) = 1. The hyperparameters ρ and λ are optimized in training. We can learn the regression function with the gradient in Eq. (13) by using any stochastic gradient method, such as Adam Kingma & Ba (2015) and FOBOS Duchi & Singer (2009). The algorithm is described in Algorithm 1, and a short sketch of the corresponding per-batch computation is given below. 
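As an illustration only (not the authors' code), the gradient of Eq. (13) can be realized in an autograd framework by minimizing a per-batch surrogate loss whose gradient matches Eq. (13); here we assume, as in the experiments, squared loss for the first term, absolute loss for the second and third terms (so that g(1, f(x)) = ∂f(x)/∂θ), and L1 regularization. The name `one_side_loss` is illustrative.

```python
# Minimal PyTorch sketch of the per-batch surrogate whose gradient is Eq. (13).
import torch

def one_side_loss(f, x, y, rho, lam):
    pred = f(x).squeeze(-1)
    upper = pred < y                       # upper-side samples: f(x) < y
    loss = ((pred[upper] - y[upper]) ** 2).sum()   # first term (squared loss)
    loss = loss + rho * pred[~upper].sum()         # second term: rho * g(1, f(x))
    loss = loss - pred[upper].sum()                # third term: -g(1, f(x))
    # L1 regularization term, lambda * R(f)
    loss = loss + lam * sum(p.abs().sum() for p in f.parameters())
    return loss  # minimizing this with Adam reproduces the update of Eq. (13)
```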
Algorithm 1 One-side regression based on stochastic gradient method\nInput: Training data D = {xn, yn}_{n=1}^N and hyperparameters ρ, λ ≥ 0\nOutput: Model parameters θ for f\n1: Let A be an external stochastic gradient method and Gm be a gradient for the m-th mini-batch\n2: while no stopping criterion has been met\n3:   Shuffle D into M mini-batches, and denote by {X^{(m)}, y^{(m)}} the m-th mini-batch, whose size is Nm\n4:   for m = 1 to M\n5:     Gm ← 0\n6:     for n = 1 to Nm\n7:       if f(x_n^{(m)}) − y_n^{(m)} < 0 then\n8:         Gm ← Gm + ∂L(f(x_n^{(m)}), y_n^{(m)})/∂θ − g(1, f(x_n^{(m)})) + λ ∂R(f)/∂θ\n9:       else\n10:        Gm ← Gm + ρ g(1, f(x_n^{(m)})) + λ ∂R(f)/∂θ\n11:    Update θ by A with Gm\nIn the following experiments, we used Adam with the hyperparameters recommended in Kingma & Ba (2015), and the number of samples in the mini-batches was set to 32. By using the learned f(x), we can estimate ŷ = f(x) for new data.\nFrom a practical perspective, the first term in Eqs. (9) and (13) requires the estimations for the upper-side samples to be at least as high as their label values, because Σ_{ {x,y} ∈ {Xup, yup} } L(f(x), y) becomes zero when every estimation of the regression function f(x) satisfies y ≤ f(x). In contrast, the second term requires the estimations for unlabeled samples to be as small as possible. The third term balances the effect of the second term.\nOur algorithm and discussions are applicable to a scenario that is opposite to the one-side case, where the data are correctly labeled only on the lower side. Since the derivation is obvious from the analogy of the upper one-side case, we just show the learning algorithm for the lower one-side case in the supplementary material. One example of the lower one-side case is when the label sensor has ideal coverage, but the cost of observation is high, and we need to mimic its sensor values with other, cheaper sensors having smaller coverage." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We now empirically test the effectiveness of the proposed approach. Our goal is to investigate the impact of our unbiased gradient, which is derived from the objective function based on the assumption of upper one-side labeled data in Eq. (4). We thus show how the proposed method improves performance against regression methods whose objective functions assume that both upper- and lower-side data are correctly labeled. We use the same model and optimization method for all of the methods, and the only difference is their objective functions." }, { "heading": "4.1 EXPERIMENTAL SETUP AND DATASETS", "text": "We report the mean absolute error (MAE) and its standard error between the estimation results ŷ = {ŷn}_{n=1}^N and the corresponding true labels y̌ across 5-fold cross-validation, each with a different randomly sampled training-testing split. MAE is defined as MAE(y̌, ŷ) = (1/N) Σ_{n=1}^N |y̌n − ŷn|. For each fold of the cross-validation, we used a randomly sampled 20% of the training set as a validation set to choose the best hyperparameters for each algorithm, where the hyperparameters providing the lowest MAE on the validation set were chosen. All of the experiments were carried out with a Python implementation on workstations having 48-80 GB of memory and 2.3-4.0 GHz CPUs. With this environment, the computational time was a few hours for producing the results for each dataset.\nData 1: Synthetic dataset. Using a synthetic dataset, we investigated whether our algorithm could indeed learn from upper one-side labeled data. 
We randomly generated N training samples, X = {xn}_{n=1}^N, from the standard Gaussian distribution N(xn; 0, I), where the number of samples was N = 1,000, the number of features in x was D = 10, and I is the identity matrix. Then, using X, we generated the corresponding N sets of true labels y̌ = {y̌n}_{n=1}^N from the distribution N(y̌n; w^⊤xn, β), where w are coefficients that were also randomly generated from the standard Gaussian distribution N(w; 0, I), β is the noise precision, and ⊤ denotes the transpose. For simulating the situation in which a label sensor has missing observations, we created training labels y = {yn}_{n=1}^N by randomly selecting K percent of the data in y̌ and replacing their values with the minimum value of y̌. We finally added white Gaussian noise whose precision was the same as that of y̌ for the replaced K percent of the data. We repeatedly evaluated the proposed method for each of the following settings. The noise precision was β = {10^0, 10^-1}, which corresponded to low- and high-noise settings, and the proportion of missing training samples was K = {25, 50, 75}%. In the case of K = 75%, only 25 percent of the samples correctly corresponded to labels, and all of the other samples were attached with labels that were lower than the corresponding true values. In general, it is quite hard to learn regression functions using such data. In the experiment on Data 1, we used a linear model, θ^⊤x, for f(x) and an implementation of Eq. (13) with squared loss for the first term, absolute loss, which satisfies Eq. (6), for the second and third terms, and L1-regularization for the regularization term. Loss functions having such a heterogeneous aspect are often used in the literature, e.g., Huber loss Huber et al. (1964), epsilon-insensitive loss Vapnik (1995), and quantile losses Koenker & Bassett Jr (1978). We set the candidates of the hyperparameters, ρ and λ, to {10^-3, 10^-2, 10^-1, 10^0}. We standardized the data by subtracting their mean and dividing by their standard deviation in the training split.\nData 2: Kaggle dataset with synthetic corruption. We used a real-world sensor dataset collected from the Kaggle dataset Sen (2016) that contains breathing signals. For this dataset, we used signals from a chest belt as X = {xn}_{n=1}^N and signals obtained by the Douglas bag (DB) method, which is the gold standard for measuring ventilation, as true labels y̌ = {y̌n}_{n=1}^N. The dataset consisted of N = 1,432 samples, and x in each sample had D = 2 features, i.e., the period and height of the expansion/contraction of the chest. For our problem setting, we created training labels y = {yn}_{n=1}^N by randomly selecting K percent of the data in y̌ and replacing their values with the minimum value of y̌. We finally added white Gaussian noise whose standard deviation was 0.1 × s for the replaced K percent of the data, where s is the standard deviation of the original y̌. The setting for K was the same as that of Data 1, K = {25, 50, 75}%. In the experiment on Data 2, for its non-linearity, we used θ^⊤φ(x, σ) for f(x), where φ is a radial basis function, and σ is a hyperparameter representing the kernel width that is also optimized in the training split. We set the candidates of the hyperparameters, ρ, λ, and σ, to {10^-3, 10^-2, 10^-1, 10^0}. The other implementation was the same as that for Data 1. The label-corruption procedure shared by Data 1 and Data 2 is sketched below. 
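A minimal sketch (ours) of the corruption step, assuming true labels as a numpy array; for Data 1 the added noise uses the same precision as y̌, and for Data 2 a standard deviation of 0.1 × s, as described above.

```python
# Sketch of the synthetic label corruption used for Data 1 and Data 2.
import numpy as np

def corrupt_labels(y_true, K, noise_std, rng):
    """Replace K percent of the labels with the minimum value plus noise."""
    y = y_true.copy()
    n_miss = int(len(y) * K / 100)
    idx = rng.choice(len(y), size=n_miss, replace=False)
    y[idx] = y_true.min() + rng.normal(scale=noise_std, size=n_miss)
    return y

rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_train = corrupt_labels(y_true, K=50, noise_std=0.1 * y_true.std(), rng=rng)
```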
Data 3: Real-world UCI dataset. We here applied the algorithm to a real sensor dataset collected from the UCI Machine Learning Repository Velloso (2013); Velloso et al. (2013). It contains sensor outputs from dumbbells and from wearable devices attached to the arm, forearm, and waist during exercise. We used all of the features from the dumbbell sensor that took “None” values fewer than ten times as X = {xn}_{n=1}^N, where each sample had D = 13 features. We used the magnitude of acceleration on the arm as training labels y = {yn}_{n=1}^N, which had insufficient data coverage and missing observations for the movements of other body parts. For testing, we used the magnitude of acceleration for the entire body as true labels y̌ = {y̌n}_{n=1}^N. Because there were five classes for the exercise task with severe mode changes between classes, we divided the dataset into five datasets on the basis of class: A (N = 11,159), B (N = 7,593), C (N = 6,844), D (N = 6,432), and E (N = 7,214). In the experiment on Data 3, we used a 6-layer multilayer perceptron with ReLU Nair & Hinton (2010) (more specifically, D-100-100-100-100-1) as f(x) in order to demonstrate the usefulness of the proposed method in training deep neural networks. We also used dropout Srivastava et al. (2014) with a rate of 50% after each fully connected layer. We used two implementations for the first term in Eq. (13), with absolute loss (Proposed-1) and squared loss (Proposed-2). For both implementations, we used the absolute loss, which satisfies Eq. (6), for the second and third terms and used L1-regularization for the regularization term. We set the candidates of the hyperparameters, ρ and λ, to {10^-3, 10^-2, 10^-1, 10^0}. The other implementation was the same as that for Data 1." }, { "heading": "4.2 PERFORMANCE COMPARISON", "text": "Table 1-(1) and -(2) show the performance on Data 1 and Data 2 for the proposed method and an ordinary regression method that uses mean squared error (MSE), assuming that both upper- and lower-side data are correctly labeled, as its objective function. This comparison shows whether our method could learn from upper one-side labeled data, from which the ordinary regression method could not learn. From Table 1-(1) and -(2), we can see that the overall performance of the proposed method was significantly better than that of MSE. We found that the performance of our method was not significantly affected by the increase in the proportion of missing training samples K, even for K = 75%, unlike that of MSE. Table 1-(3) shows a more extensive comparison using the real-world UCI dataset (Data 3) between our methods, Proposed-1 and Proposed-2, and methods based on various objective functions consisting of MSE, MAE, and Huber losses Huber et al. (1964); Narula & Wellington (1982); Wilcox (1997). The regression methods based on MAE and Huber losses were robust regression methods that assume symmetric label corruption. From Table 1-(3), we can see that the performance of Proposed-1 and Proposed-2 was overall better than that of the baselines. The robust regression methods did not improve in performance against MSE. In particular, Proposed-1 and Proposed-2 respectively reduced the error by more than 20% and 30% compared with the other methods on average.\nDemonstration of unbiased learning and prediction. Figure 2-(1) compares the estimation results of the proposed method with true labels and those of MSE for the Kaggle dataset with synthetic corruption (Data 2). Since the ordinary regression method, MSE, regards both upper- and lower-side data as correctly labeled, we can see that it produced biased results due to the missing observations. The proposed method did not. 
Figure 2-(2) shows a comparison of the estimation results between the proposed method, Proposed-2, and MSE for the real-world UCI dataset (Data 3). For ease of viewing, we show the results for the first 1,000 samples of the class E data, where the errors of most of the methods were the lowest. Although MSE showed the lowest error among the baselines for the class E data, we can see that the predictions by MSE were somewhat biased and underfitted for real data with the characteristics we assume. This was not the case for the proposed method." }, { "heading": "4.3 REAL HEALTHCARE CASE STUDY", "text": "We show the results of a healthcare case study in the supplementary material, where we estimated the motion intensity of a participant, measured accurately with an intrusive sensor wrapped around the wrist (ActiGraph) Tryon (2013); Mullaney et al. (1980); Webster et al. (1982); Cole et al. (1992), from non-intrusive bed sensors that were installed under a bed. The results showed that the intrusive sensor could be replaced with the non-intrusive ones, which would be quite useful for reducing the burden on users." }, { "heading": "5 CONCLUSION", "text": "We formulated a one-side regression problem using upper-side labeled and lower-side unlabeled data and proposed a learning algorithm for it. We showed that our learning algorithm is unbiased to and consistent with ordinary regression that uses data labeled correctly in both upper- and lower-side cases. We developed a stochastic optimization method for implementing the algorithm. An experimental evaluation using synthetic and real-world datasets demonstrated that the proposed algorithm was significantly better than regression algorithms without the assumption of upper one-side labeled data." } ]
2,020
null
SP:07e927ae4286e3e227bf1c8ed5d17669ee871d96
[ "1. For Theorem 1, as the reviewer understands it, for an optimization problem whose only critical point is a strict maxima, it only has four outcomes, which are listed in the theorem. The result seems quite intuitive and provides very limited understanding for the problem. Please list other possible outcomes for the general problem and state in such way that the paper finds some impossible outcomes which can be excluded for consideration." ]
Under mild regularity conditions, gradient-based methods converge globally to a critical point in the single-loss setting. This is known to break down for vanilla gradient descent when moving to multi-loss optimization, but can we hope to build some algorithm with global guarantees? We negatively resolve this open problem by proving that desirable convergence properties cannot simultaneously hold for any algorithm. Our result has more to do with the existence of games with no satisfactory outcomes, than with algorithms per se. More explicitly we construct a two-player game with zero-sum interactions whose losses are both coercive and analytic, but whose only simultaneous critical point is a strict maximum. Any ‘reasonable’ algorithm, defined to avoid strict maxima, will therefore fail to converge. This is fundamentally different from single losses, where coercivity implies existence of a global minimum. Moreover, we prove that a wide range of existing gradient-based methods almost surely have bounded but non-convergent iterates in a constructed zero-sum game for suitably small learning rates. It nonetheless remains an open question whether such behavior can arise in high-dimensional games of interest to ML practitioners, such as GANs or multi-agent RL.
[ { "affiliations": [], "name": "MULTI-LOSS OPTIMIZATION" }, { "affiliations": [], "name": "Alistair Letcher" } ]
[ { "authors": [ "Jacob Abernethy", "Kevin A. Lai", "Andre Wibisono" ], "title": "Last-iterate convergence rates for min-max optimization", "venue": "ArXiv e-prints,", "year": 2019 }, { "authors": [ "PA Absil", "Robert Mahony", "Ben Andrews" ], "title": "Convergence of the iterates of descent methods for analytic cost functions", "venue": "SIAM Journal on Optimization,", "year": 2005 }, { "authors": [ "Waı̈ss Azizian", "Ioannis Mitliagkas", "Simon Lacoste-Julien", "Gauthier Gidel" ], "title": "A tight and unified analysis of extragradient for a whole spectrum of differentiable games", "venue": "ArXiv e-prints,", "year": 2019 }, { "authors": [ "James P. Bailey", "Gauthier Gidel", "Georgios Piliouras" ], "title": "Finite regret and cycles with fixed step-size via alternating gradient descent-ascent", "venue": "ArXiv e-prints,", "year": 2019 }, { "authors": [ "D. Balduzzi", "S. Racaniere", "J. Martens", "J. Foerster", "K. Tuyls", "T. Graepel" ], "title": "The Mechanics of n-Player Differentiable Games", "venue": null, "year": 2018 }, { "authors": [ "David Balduzzi", "Wojciech M. Czarnecki", "Tom Anthony", "Ian Gemp", "Edward Hughes", "Joel Leibo", "Georgios Piliouras", "Thore Graepel" ], "title": "Smooth markets: A basic mechanism for organizing gradient-based learners", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Saugata Basu", "Richard Pollack", "Marie-Françoise Roy" ], "title": "Algorithms in Real Algebraic Geometry", "venue": null, "year": 2006 }, { "authors": [ "L. Busoniu", "R. Babuska", "B. De Schutter" ], "title": "A comprehensive survey of multiagent reinforcement learning", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews),", "year": 2008 }, { "authors": [ "C. Daskalakis", "A. Ilyas", "V. Syrgkanis", "H. Zeng" ], "title": "Training GANs with Optimism", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Constantinos Daskalakis", "Ioannis Panageas" ], "title": "The limit points of (optimistic) gradient descent in min-max optimization", "venue": null, "year": 2018 }, { "authors": [ "W. Decker", "G.-M. Greuel", "G. Pfister", "H. Schönemann" ], "title": "Singular 4-1-2 — A computer algebra system for polynomial computations", "venue": null, "year": 2019 }, { "authors": [ "Farzan Farnia", "Asuman Ozdaglar" ], "title": "Do GANs always have Nash equilibria", "venue": null, "year": 2020 }, { "authors": [ "J.N. Foerster", "R.Y. Chen", "M. Al-Shedivat", "S. Whiteson", "P. Abbeel", "I. Mordatch" ], "title": "Learning with Opponent-Learning Awareness", "venue": null, "year": 2018 }, { "authors": [ "Ya-Ping Hsieh", "Panayotis Mertikopoulos", "Volkan Cevher" ], "title": "The limits of min-max optimization algorithms: convergence to spurious non-critical sets", "venue": "ArXiv e-prints,", "year": 2020 }, { "authors": [ "M. Jaderberg", "W.M. Czarnecki", "S. Osindero", "O. Vinyals", "A. Graves", "D. Silver", "K. Kavukcuoglu" ], "title": "Decoupled Neural Interfaces using Synthetic Gradients", "venue": null, "year": 2017 }, { "authors": [ "K. Lange" ], "title": "Optimization. Springer Texts in Statistics", "venue": null, "year": 2013 }, { "authors": [ "J.D. Lee", "M. Simchowitz", "M.I. Jordan", "B. 
Recht" ], "title": "Gradient Descent Only Converges to Minimizers", "venue": "In 29th Annual Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Alistair Letcher", "David Balduzzi", "Sébastien Racanière", "James Martens", "Jakob Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "Differentiable game mechanics", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Alistair Letcher", "Jakob Foerster", "David Balduzzi", "Tim Rocktäschel", "Shimon Whiteson" ], "title": "Stable opponent shaping in differentiable games", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Luenberger", "Yinyu Ye" ], "title": "Linear and Nonlinear Programming, volume 67", "venue": null, "year": 1984 }, { "authors": [ "Oren Mangoubi", "Nisheeth K. Vishnoi" ], "title": "A second-order equilibrium in nonconvex-nonconcave min-max optimization: Existence and algorithm", "venue": "ArXiv e-prints,", "year": 2020 }, { "authors": [ "Eric Mazumdar", "Lillian J. Ratliff", "Michael I. Jordan", "S. Shankar Sastry" ], "title": "Policy-gradient algorithms have no guarantees of convergence in linear quadratic games", "venue": "In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2020 }, { "authors": [ "Eric V. Mazumdar", "Michael I. Jordan", "S. Shankar Sastry" ], "title": "On finding local nash equilibria (and only local nash equilibria) in zero-sum games", "venue": "ArXiv e-prints,", "year": 2019 }, { "authors": [ "L. Mescheder", "S. Nowozin", "A. Geiger" ], "title": "The Numerics of GANs", "venue": null, "year": 2017 }, { "authors": [ "G.S. Nelson" ], "title": "A User-Friendly Introduction to Lebesgue Measure and Integration", "venue": "Student Mathematical Library. American Mathematical Society,", "year": 2015 }, { "authors": [ "Gerasimos Palaiopanos", "Ioannis Panageas", "Georgios Piliouras" ], "title": "Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos", "venue": null, "year": 2017 }, { "authors": [ "I. Panageas", "G. Piliouras" ], "title": "Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions", "venue": "In ITCS 2017,", "year": 2017 }, { "authors": [ "Christos Papadimitriou", "Georgios Piliouras" ], "title": "Game dynamics as the meaning of a game", "venue": "SIGecom Exch.,", "year": 2019 }, { "authors": [ "D. Pathak", "P. Agrawal", "A.A. Efros", "T. Darrell" ], "title": "Curiosity-driven Exploration by Self-supervised Prediction", "venue": null, "year": 2017 }, { "authors": [ "S. Racanière", "T. Weber", "D.P. Reichert", "L. Buesing", "A. Guez", "D. Jimenez Rezende", "A. Puigdomènech Badia", "O. Vinyals", "N. Heess", "Y. Li", "R. Pascanu", "P. Battaglia", "D. Hassabis", "D. Silver", "D. Wierstra" ], "title": "Imagination-Augmented Agents for Deep Reinforcement Learning", "venue": null, "year": 2017 }, { "authors": [ "Florian Schaefer", "Anima Anandkumar" ], "title": "Competitive gradient descent", "venue": null, "year": 2019 }, { "authors": [ "M. Spivak" ], "title": "Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus", "venue": "Avalon Publishing,", "year": 1971 }, { "authors": [ "A.S. Vezhnevets", "S. Osindero", "T. Schaul", "N. Heess", "M. Jaderberg", "D. Silver", "K. 
Kavukcuoglu" ], "title": "FeUdal Networks for Hierarchical Reinforcement Learning", "venue": null, "year": 2017 }, { "authors": [ "Lampros Flokas", "Georgios Piliouras" ], "title": "Poincaré recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games", "venue": null, "year": 2019 }, { "authors": [ "G. Wayne", "L.F. Abbott" ], "title": "Hierarchical control using networks trained with higher-level forward models", "venue": "Neural Computation,", "year": 2014 }, { "authors": [ "Junchi Yang", "Negar Kiyavash", "Niao He" ], "title": "Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems", "venue": "ArXiv e-prints,", "year": 2020 }, { "authors": [ "Kaiqing Zhang", "Zhuoran Yang", "Tamer Basar" ], "title": "Policy optimization provably converges to nash equilibria in zero-sum linear quadratic games", "venue": null, "year": 2019 }, { "authors": [ "Basu" ], "title": "For an alternative proof that θ̄ = 0 is the only critical point, we may take advantage of computer algebra systems to find the exact number of real roots using the resultant matrix and Sturm’s theorem. Singular (Decker et al., 2019) is one such free and open-source system for polynomial computations, backed by published computer algebra references", "venue": "In particular,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Problem Setting. As multi-agent architectures proliferate in machine learning, it is becoming increasingly important to understand the dynamics of gradient-based methods when optimizing multiple interacting goals, otherwise known as differentiable games. This framework encompasses GANs (Goodfellow et al., 2014), intrinsic curiosity (Pathak et al., 2017), imaginative agents (Racanière et al., 2017), synthetic gradients (Jaderberg et al., 2017), hierarchical reinforcement learning (Wayne & Abbott, 2014; Vezhnevets et al., 2017) and multi-agent RL in general (Busoniu et al., 2008). The interactions between learning agents make for vastly more complex mechanics: naively applying gradient descent on each loss simultaneously is known to diverge even in simple bilinear games.\nRelated Work. A large number of methods have recently been proposed to alleviate the failings of simultaneous gradient descent: adaptations of single-loss algorithms such as Extragradient (EG) (Azizian et al., 2019) and Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018), Alternating Gradient Descent (AGD) for finite regret (Bailey et al., 2019), Consensus Optimization (CO) for GAN training (Mescheder et al., 2017), Competitive Gradient Descent (CGD) based on solving a bilinear approximation of the loss functions (Schaefer & Anandkumar, 2019), Symplectic Gradient Adjustment (SGA) based on a novel decomposition of game mechanics (Balduzzi et al., 2018; Letcher et al., 2019a), and opponent-shaping algorithms including Learning with OpponentLearning Awareness (LOLA) (Foerster et al., 2018) and its convergent counterpart, Stable Opponent Shaping (SOS) (Letcher et al., 2019b). Let A be this set of algorithms. Each has shown promising theoretical implications and empirical results, but none offers insight into global convergence in the non-convex setting, which includes the vast majority of machine learning applications. One of the main roadblocks compared with single-loss optimization has been noted by Schaefer & Anandkumar (2019): “a convergence proof in the nonconvex case analogue to Lee et al. (2016) is still out of reach in the competitive setting. A major obstacle to this end is the identification of a suitable measure of progress (which is given by the function value in the single agent setting), since norms of gradients can not be expected to decay monotonously for competitive dynamics in non-convex-concave games.”\nIt has been established that Hamiltonian Gradient Descent converges in two-player zero-sum games under a “sufficiently bilinear” condition by Abernethy et al. (2019), but this algorithm is unsuitable for optimization as it cannot distinguish between minimization and maximization (Hsieh et al., 2020, Appendix C.4). Global convergence has also been established for some algorithms in a few special cases: potential and Hamiltonian games (Balduzzi et al., 2018), zero-sum games satisfying the twosided Polyak-Łojasiewicz condition (Yang et al., 2020), zero-sum linear quadratic games (Zhang et al., 2019) and zero-sum games whose loss and first three derivatives are bounded (Mangoubi & Vishnoi, 2020). These are significant contributions with several applications of interest, but do not include any of the architectures mentioned above. Finally, Balduzzi et al. 
(2020) show that GD dynamics are bounded under a ‘negative sentiment’ assumption in smooth markets, which do include GANs – but this does not imply convergence, as we will show.\nOn the other hand, failure of global convergence has been shown for the Multiplicative Weights Update method by Palaiopanos et al. (2017), for policy-gradient algorithms by Mazumdar et al. (2020), and for simultaneous and alternating gradient descent (simGD and AGD) by Vlatakis-Gkaragkounis et al. (2019); Bailey et al. (2019), with interesting connections to Poincaré recurrence. Nonetheless, nothing is claimed about other optimization methods. Farnia & Ozdaglar (2020) show that GANs may have no Nash equilibria, but it does not follow that algorithms fail to converge since there may be locally-attracting but non-Nash critical points (Mazumdar et al., 2019, Example 2).\nFinally, Hsieh et al. (2020) uploaded a preprint just after the completion of this work with a similar focus to ours. They prove that generalized Robbins-Monro schemes may converge with arbitrarily high probability to spurious attractors. This includes simGD, AGD, stochastic EG, optimistic gradient and Kiefer-Wolfowitz. However, Hsieh et al. (2020) focus on the possible occurrence of undesirable convergence phenomena for stochastic algorithms. We instead prove that desirable convergence properties cannot simultaneously hold for all algorithms (including deterministic). Moreover, their results apply only to decreasing step-sizes whereas ours include constant step-sizes. These distinctions are further highlighted by Hsieh et al. (2020) in the further related work section. Taken together, our works give a fuller picture of the failure of global convergence in multi-loss optimization.\nContribution. We prove that global convergence in multi-loss optimization is fundamentally incompatible with the ‘reasonable’ requirement that algorithms avoid strict maxima and converge only to critical points. We construct a two-player game with zero-sum interactions whose losses are coercive and analytic, but whose only critical point is a strict maximum (Theorem 1). Reasonable algorithms must either diverge to infinite losses or cycle (bounded non-convergent iterates).\nOne might hope that global convergence could at least be guaranteed in games with strict minima and no other critical points. On the contrary we show that strict minima can have arbitrarily small regions of attraction, in the sense that reasonable algorithms will fail to converge there with arbitrarily high probability for fixed initial parameter distribution (Theorem 2).\nFinally, restricting the game class even further, we construct a zero-sum game in which all algorithms in A (as defined in Appendix A) are proven to cycle (Theorem 3). It may be that cycles do not arise in high-dimensional games of interest including GANs. Proving or disproving this is an important avenue for further research, but requires that we recognise the impossibility of global guarantees in the first place." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 SINGLE LOSSES: GLOBAL CONVERGENCE OF GRADIENT DESCENT", "text": "Given a continuously differentiable function f : Rd → R, let\nθk+1 = θk − α∇f(θk)\nbe the iterates of gradient descent with learning rate α, initialised at θ0. Under standard regularity conditions, gradient descent converges globally to critical points: Proposition 1. Assume f ∈ C2 has compact sublevel sets and is either analytic or has isolated critical points. 
For any θ0 ∈ Rd, define U0 = {f(θ) ≤ f(θ0)} and let L < ∞ be a Lipschitz constant for ∇f in U0. Then for any 0 < α < 2/L we have limk θk = θ̄ for some critical point θ̄.

The requirements for convergence are relatively mild:

1. f has compact sublevel sets iff f is coercive, lim‖θ‖→∞ f(θ) = ∞, which mostly holds in machine learning since f is a loss function.

2. f has isolated critical points if it is a Morse function (nondegenerate Hessian at critical points), which holds for almost all C2 functions. More precisely, Morse functions form an open, dense subset of all functions f ∈ C2(Rd, R) in the Whitney C2-topology.

3. Global Lipschitz continuity is not assumed, which would fail even for cubic polynomials.

The goal of this paper is to prove that similar (even weaker) guarantees cannot be obtained in the multi-loss setting – not only for GD, but for any reasonable algorithm. This has to do with the more complex nature of gradient vector fields arising from multiple losses." }, { "heading": "2.2 DIFFERENTIABLE GAMES", "text": "Following Balduzzi et al. (2018), we frame the problem of multi-loss optimization as a differentiable game among cooperating and competing agents/players. These may simply be different internal components of a single system, like the generator and discriminator in GANs.

Definition 1. A differentiable game is a set of n agents with parameters θ = (θ1, . . . , θn) ∈ Rd and twice continuously differentiable losses Li : Rd → R, where θi ∈ Rdi for each i and ∑i di = d.

Losses are not assumed to be convex/concave in any of the parameters. In practice, losses need only be differentiable almost-everywhere: think of neural nets with rectified linear units.

If n = 1, the ‘game’ is simply to minimise a given loss function. We write ∇iLk = ∇θiLk and ∇ijLk = ∇θj∇θiLk for any i, j, k, and define the simultaneous gradient of the game

ξ = (∇1L1, . . . , ∇nLn)ᵀ ∈ Rd

as the concatenation of each player’s gradient. If each agent independently minimises their loss using GD with learning rate α, the parameter update for all agents is given by θ ← θ − αξ(θ). We call this simultaneous gradient descent (simGD), or GD for short. We call θ̄ a critical point if ξ(θ̄) = 0. Now introduce the ‘Hessian’ (or Jacobian) of the game as the block matrix

H = ∇ξ ∈ Rd×d, whose (i, j)-th block is ∇ijLi.

Importantly, note that H is not symmetric in general unless n = 1, in which case we recover the usual Hessian H = ∇2L. However, H can be decomposed into symmetric and anti-symmetric components as H = S + A (Balduzzi et al., 2018). A second useful decomposition has appeared recently in (Letcher et al., 2019b) and (Schaefer & Anandkumar, 2019): H = Hd + Ho, where Hd and Ho are the matrices of diagonal and off-diagonal blocks; formally, Hd = ⊕i ∇iiLi. One solution concept for differentiable games, analogous to the single-loss case, is defined as follows.

Definition 2. A critical point θ̄ is a (strict, local) minimum if H(θ̄) ≻ 0.1

These were named (strict) stable fixed points by Balduzzi et al. (2018), but in dynamical systems the term is usually reserved for the larger class defined by Hessian eigenvalues with positive real parts, which is implied by but not equivalent to H ≻ 0 for non-symmetric matrices. In particular, strict minima are (differential) Nash equilibria as defined by Mazumdar et al. (2019), since diagonal blocks must also be positive definite: ∇iiLi(θ̄) ≻ 0. The converse does not hold.

1 For non-symmetric matrices, positive definiteness is defined as H ≻ 0 iff uᵀHu > 0 for all non-zero u ∈ Rd. This is equivalent to the symmetric part S of H being positive definite.
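To make these definitions concrete, here is a minimal Python sketch (our own illustration, not code from the paper) for the bilinear game L1 = xy = −L2 mentioned in the introduction: it computes ξ and the decomposition H = S + A, and shows why simGD fails there. The symmetric part S vanishes, so each update is a rotation scaled by a factor greater than one, and the iterates spiral away from the unique critical point at the origin.

import numpy as np

def xi(theta):
    # Simultaneous gradient of the bilinear game L1 = x * y, L2 = -x * y.
    x, y = theta
    return np.array([y, -x])

H = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # H = grad xi, constant for this game
S = (H + H.T) / 2             # symmetric part: identically zero here
A = (H - H.T) / 2             # antisymmetric part: a pure rotation generator

alpha, theta = 0.01, np.array([1.0, 0.0])
for _ in range(1000):
    theta = theta - alpha * xi(theta)   # simGD update theta <- theta - alpha * xi(theta)
print(np.linalg.norm(theta))            # > 1: the iterates spiral outwards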
Algorithm class. This paper is concerned with any algorithm whose iterates are obtained by initialising θ0 and applying a function F to the previous iterates, namely θk+1 = F(θk, . . . , θ0). This holds for all gradient-based methods (deterministic or stochastic); most of them are only functions of the current iterate θk, so that θk = F^k(θ0). All probabilistic statements in this paper assume that θ0 is initialised following any bounded and continuous measure ν on Rd. Continuity is a weak requirement and widely holds across machine learning, while boundedness mostly holds in practice since the bounded region can be made large enough to accommodate required initial points.

For single-player games, the goal of such algorithms is for θk to converge to a local (perhaps global) minimum as k → ∞. The goal is less clear for differentiable games, but is generally to reach a minimum or a Nash equilibrium. In the case of GANs the goal might be to reach parameters that produce realistic images, which is more challenging to define formally.

Throughout the text we use the term (limit) cycle to mean bounded but non-convergent iterates. This terminology is used because bounded iterates are non-convergent if and only if they have at least two accumulation points, between which they must ‘cycle’ infinitely often. This is not to be taken literally: the set of accumulation points may not even be connected. Hsieh et al. (2020) provide a more complete characterisation of these cycles.

Game class. Expecting global guarantees in all differentiable games is excessive, since every continuous dynamical system arises as simultaneous GD on the loss functions of a differentiable game (Balduzzi et al., 2020, Lemma 1). For this reason, the aforementioned authors have introduced a vastly more tractable class of games called markets.

Definition 3. A (smooth) market is a differentiable game where interactions between players are pairwise zero-sum, namely,

Li(θ) = Li(θi) + ∑j≠i gij(θi, θj)

with gij(θi, θj) + gji(θj, θi) = 0 for all i, j.

This generalises zero-sum games while remaining amenable to optimization and aggregation, meaning that “we can draw conclusions about the gradient-based dynamics of the collective by summing over properties of its members” (Balduzzi et al., 2020). Moreover, this class captures a large number of applications including GANs and related architectures, intrinsic curiosity modules, adversarial training, task-suites and population self-play. One would modestly hope for some reasonable algorithm to converge globally in markets. We will prove that even this is too much to ask." }, { "heading": "2.3 REASONABLE ALGORITHMS", "text": "We wish to prove that global convergence is at odds with weak, ‘reasonable’ desiderata. The first requirement is that fixed points of an optimization algorithm F are critical points. Formally,

F(θ) = θ =⇒ ξ(θ) = 0. (R1)

If not, some agent i could strictly improve its losses by following the gradient −∇iLi ≠ 0. There is no reason for a gradient-based algorithm to stop improving if its gradient is non-zero.

The second requirement is that algorithms avoid strict maxima. Analogous to strict minima, they are defined for single losses by a negative-definite Hessian H ≺ 0.
Converging to such a point θ̄ is the opposite goal of any meaningful algorithm since moving anywhere away from θ̄ decreases the loss. There are multiple ways of generalising this concept to multiple losses, but Proposition 2 below justifies that H ≺ 0 is the weakest one.

Proposition 2. Write λ(A) = Re(Spec(A)) for the real parts of the eigenvalues of a matrix A. We have the following implications, and none of them are equivalences:

H ≺ 0 =⇒ maxλ(H) < 0 =⇒ minλ(H) < 0 =⇒ minλ(S) < 0;
H ≺ 0 =⇒ maxλ(Hd) < 0 =⇒ minλ(Hd) < 0 =⇒ minλ(S) < 0;
maxλ(H) < 0 =⇒ minλ(Hd) < 0 and maxλ(Hd) < 0 =⇒ minλ(H) < 0.

Definition 4. A critical point θ̄ is a (strict, local) maximum if H(θ̄) ≺ 0.

Imposing that algorithms avoid strict maxima is therefore the weakest possible requirement of its kind. Note that the condition maxλ(Hd) < 0 in Proposition 2 is equivalent to ∇iiLi ≺ 0 for all i, so strict maxima are also strict maxima of each player’s individual loss function. Players can all decrease their losses by moving anywhere away from them. It is exceedingly reasonable to ask that optimization algorithms avoid these points almost surely. Formally, we require that for any strict maximum θ̄ and bounded region U there are hyperparameters such that

µ({θ0 ∈ U | limk θk = θ̄}) = 0. (R2)

µ denotes Lebesgue measure. Hyperparameters may depend on the given game and the region U, as is typical for learning rates in gradient-based methods.

Definition 5 (Reason). An algorithm is reasonable if it satisfies R1 and R2.

Reason is not equivalent to rationality or self-interest. Reason is much weaker, imposing only that agents are well-behaved regarding strict maxima even if their individual behavior is not self-interested. For instance, SGA agents do not behave out of self-interest (Balduzzi et al., 2018)." }, { "heading": "3 GLOBAL CONVERGENCE IN DIFFERENTIABLE GAMES", "text": "" }, { "heading": "3.1 REASONABLE ALGORITHMS FAIL TO CONVERGE GLOBALLY", "text": "Our main contribution is to show that global guarantees do not exist for any reasonable algorithm. First recall that global convergence should not be expected in all games, since there may be a divergent direction with minimal loss (imagine minimising L = e^x). It should however be asked that algorithms have bounded iterates in coercive games, defined by coercive losses

lim‖θ‖→∞ Li(θ) = ∞

for all i. Indeed, unbounded iterates in coercive games would lead to infinite losses for all agents, the worst possible outcome. Given bounded iterates, convergence should hold if the Hessian is nondegenerate at critical points (which must therefore be isolated; recall Proposition 1). We call such a game nondegenerate. This condition can also be replaced by analyticity of the loss. In the spirit of weakest assumptions, we ask for convergence when both conditions hold.

Definition 6 (Globality). An algorithm is global if, in a coercive, analytic and nondegenerate game, for any fixed θ0, iterates θk are bounded and converge for suitable hyperparameters. (G1)

Note that GD is global for single-player games by Proposition 1. Unfortunately, reason and globality are fundamentally at odds as soon as we move to two-player markets.

Theorem 1. There is a coercive, nondegenerate, analytic two-player market M whose only critical point is a strict maximum. In particular, algorithms only have four possible outcomes in M:

1. Iterates are unbounded, and all players diverge to infinite loss. [Not global]

2. Iterates are bounded and converge to the strict maximum. [Not reasonable]

3. Iterates are bounded and converge to a non-critical point. [Not reasonable]

4.
Iterates are bounded but do not converge (cycle). [Not global]\nProof. Consider the analytic marketM given by\nL1(x, y) = x6/6− x2/2 + xy + 1 4\n( y4\n1 + x2 − x\n4\n1 + y2 ) L2(x, y) = y6/6− y2/2− xy − 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2\n) .\nWe prove in Appendix D thatM is coercive, nondegenerate, and has a unique critical point at the origin, which is a strict maximum.\nConstructing an algorithm with global guarantees is therefore doomed to be unreasonable in that it will converge to strict maxima or non-critical points inM. None of the outcomes ofM are satisfactory. The first three are highly objectionable, as already discussed. The fourth is less obvious, and may even have game-theoretic significance (Papadimitriou & Piliouras, 2019), but is counter-intuitive from an optimization standpoint. Terminating the iteration would lead to a non-critical point, much like the third outcome. Even if we let agents update parameters continuously as they play a game or solve a task, they will have oscillatory behavior and fail to produce consistent outcomes (e.g. when generating an image or playing Starcraft).\nThe hope for machine learning is that such predicaments do not arise in applications we care about, such as GANs or intrinsic curiosity. This may well be the case, but proving or disproving global convergence in these specific settings is beyond the scope of this paper.\nRemark. Why can this approach not be used to disprove global convergence for single losses? One reason is that we cannot construct a coercive loss with no critical points other than strict maxima: coercive losses, unlike games, always have a global minimum." }, { "heading": "3.2 WHAT IF THERE ARE STRICT MINIMA?", "text": "One might wonder if it is purely the absence of strict minima that causes non-convergence, since strict minima are locally attracting under gradient dynamics. Can we guarantee global convergence if we impose existence of a minimum, and more, the absence of any other critical points?\nUnfortunately, strict minima may have an arbitrarily small region of attraction. Assuming parameters are initialised following any bounded continuous measure ν on Rd, we can always modifyM by deforming a correspondingly small region around the origin, turning it into a minimum while leaving the dynamics unchanged outside of this region.\nFor a fixed initial distribution, any reasonable algorithm can therefore enter a limit cycle or diverge to infinite losses with arbitrarily high probability. Theorem 2. Given a reasonable algorithm with bounded continuous distribution on θ0 and a real number > 0, there exists a coercive, nondegenerate, almost-everywhere analytic two-player marketMσ with a strict minimum and no other critical points, such that θk either cycles or diverges to infinite losses for both players with probability at least 1− .\nProof. Let 0 < σ < 0.1 and define\nfσ(θ) =\n{ (x2 + y2 − σ2)/2 if ‖θ‖ ≥ σ\n(y2 − 3x2)(x2 + y2 − σ2)/(2σ2) otherwise, where θ = (x, y) and ‖θ‖ = √ x2 + y2 is the standard L2-norm. Note that fσ is continuous since\nlim ‖θ‖→σ+ fσ(x, y) = 0 = lim ‖θ‖→σ− fσ(x) .\nNow consider the two-player marketMσ given by\nL1(x, y) = x6/6− x2 + fσ(x, y) + xy + 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2 ) L2(x, y) = y6/6− fσ(x, y)− xy − 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2\n) .\nWe prove in Appendix E thatMσ is a coercive, nondegenerate, almost-everywhere analytic game whose only critical point is a strict minimum at the origin. 
We then prove that θk cycles or diverges with probability at least 1 − ε, and plot iterates for each algorithm in A." }, { "heading": "3.3 HOW DO EXISTING ALGORITHMS BEHAVE?", "text": "Any algorithm will either fail to be reasonable or global in M. Nonetheless, it would be interesting to determine the specific failure that each algorithm in A exhibits. Each of them is defined in Appendix A, writing α for the learning rate and γ for the Consensus Optimization hyperparameter. We expect each algorithm to be reasonable and moreover to have bounded iterates in M for suitably small hyperparameters. If this holds, they must cycle by Theorem 1.

This was witnessed experimentally across 1000 runs for α = γ = 0.01, with every run resulting in cycles. A single such run is illustrated in Figure 1. Algorithms may follow one of the three other outcomes for other hyperparameters, for instance diverging to infinite loss if α is too large or converging to the strict maximum for CO if γ is too large. The point here is to characterise the ‘regular’ behavior, which can be seen as that occurring for sufficiently small hyperparameters.

Instead of proving that algorithms must cycle in M, we construct a zero-sum game N with similar properties as M and prove below that algorithms in A almost surely fail to converge there for small α, γ. This is stronger than proving the analogous result for M, since N belongs to the even smaller class of zero-sum games, which one might have hoped was well-behaved.

In this light, one might wish to extend Theorem 1 to zero-sum games. However, zero-sum games cannot be coercive since L1 → ∞ implies L2 → −∞. It is therefore unclear whether global guarantees should be expected. Note however that N will be weakly-coercive in the sense that

lim‖θi‖→∞ Li(θi, θ−i) = ∞

for all i and fixed θ−i.

Theorem 3. There is a weakly-coercive, nondegenerate, analytic two-player zero-sum game N whose only critical point is a strict maximum. Algorithms in A almost surely have bounded non-convergent iterates in N for α, γ sufficiently small.

Proof. Consider the analytic zero-sum game N given by

L1 = xy − x2/2 + y2/2 + x4/4 − y4/4 = −L2.

We prove in Appendix F that N is weakly-coercive, nondegenerate, and has a unique critical point at the origin which is a strict maximum. We prove that algorithms in A have the origin as unique fixed points, with negative-definite Jacobian for α, γ small, hence failing to converge almost surely. We moreover prove that algorithms have bounded non-convergent iterates in N for α, γ sufficiently small. Iterates are plotted for a single run of each algorithm in Figure 3 with α = γ = 0.01.

As in M, the behavior of each algorithm may differ for larger hyperparameters. All algorithms may have unbounded iterates or converge to the strict maximum for large α, while EG and OMD may even converge to a non-critical point (see proof). All such outcomes are unsatisfactory, though unbounded iteration will not result in positive infinite losses for both players since L1 = −L2." }, { "heading": "3.4 COROLLARY: THERE ARE NO SUITABLE MEASURES OF PROGRESS", "text": "A crucial step in proving global convergence of GD on single losses is showing that the set of accumulation points is a subset of critical points, using the function value as a ‘measure of progress’. The fact that this fails for differentiable games implies that there can be no suitable measures of progress for reasonable algorithms with bounded iterates.
We formalise this below, answering the question of Schaefer & Anandkumar (2019) quoted in the introduction.

Definition 7. A measure of progress for an algorithm given by θk+1 = F(θk) is a continuous map M : Rd → R, bounded below, such that M(F(θ)) ≤ M(θ) and M(F(θ)) = M(θ) iff F(θ) = θ.

Measures of progress are very similar to descent functions, as defined by Luenberger & Ye (1984), and somewhat akin to Lyapunov functions. The function value f is a measure of progress for single-loss GD under the usual regularity conditions, while the gradient norm ‖ξ‖ is a measure of progress for GD in strictly convex differentiable games:

‖ξ(θ − αξ)‖2 = ‖ξ‖2 − 2αξᵀHξ + o(α) ≤ ‖ξ‖2

for small α. Unfortunately, games like M prevent the existence of such measures in general.

Corollary 1. There are no measures of progress for reasonable algorithms which produce bounded iterates in M or N.

Assuming the algorithm to be reasonable is necessary: any map is a measure of progress for the unreasonable algorithm F(θ) = θ. Assuming the algorithm to have bounded iterates in M or N is necessary: M(θ) = exp(−θ · 1) is a measure of progress for the reasonable but always-divergent algorithm F(θ) = θ + 1, where 1 is the constant vector of ones." }, { "heading": "4 CONCLUSION", "text": "We have proven that global convergence is fundamentally at odds with weak, desirable requirements in multi-loss optimization. Any reasonable algorithm can cycle or diverge to infinite losses, even in two-player markets. This arises because coercive games, unlike losses, may have no critical points other than strict maxima. However, this is not the only point of failure: strict minima may have arbitrarily small regions of attraction, making convergence arbitrarily unlikely.

Limit cycles are not necessarily bad: they may even have game-theoretic significance (Papadimitriou & Piliouras, 2019). This paper nonetheless shows that some games have no satisfactory outcome in the usual sense, even in the class of two-player markets. Players should neither escape to infinite losses, nor converge to strict maxima or non-critical points, so cycling may be the lesser evil. The community is accustomed to optimization problems whose solutions are single points, but cycles may have to be accepted as solutions in themselves.

The hope for machine learning practitioners is that local minima with large regions of attraction prevent limit cycles from arising in applications of interest, including GANs. Proving or disproving this is an interesting and important avenue for further research, with real implications for what to expect when agents learn while interacting with others. Cycles may for instance be unacceptable in self-driving cars, where oscillatory predictions may have life-threatening implications." }, { "heading": "A ALGORITHMS AND EXPERIMENT HYPERPARAMETERS", "text": "Each algorithm in A cited in the ‘Related Work’ section can be defined as F(θ) = θ − αG(θ) for some continuous G : Rd → Rd. We have already seen that simultaneous GD is given by GGD = ξ. The only examples in this paper are two-player games, for which AGD is given by

GAGD = (ξ1(θ1, θ2), ξ2(θ1 − αξ1, θ2))ᵀ.

The other algorithms are given by

GEG = ξ ◦ (id − αξ),    GOMD = 2ξ(θk) − ξ(θk−1),
GSGA = (I + λAᵀ)ξ,      GCO = (I + γHᵀ)ξ,
GCGD = (I + αHo)⁻¹ξ,    GLA = (I − αHo)ξ,
GLOLA = (I − αHo)ξ − α diag(Hoᵀ∇L),    GSOS = (I − αHo)ξ − pα diag(Hoᵀ∇L).
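To illustrate how a few of these updates look in code, the following Python snippet (a sketch of ours under the definitions above, not the paper's released implementation) runs simGD, EG and CO on the zero-sum game N of Theorem 3. With α = γ = 0.01, the iterates remain bounded while the gradient norm does not vanish, consistent with the cycling behavior reported for Figure 3.

import numpy as np

def xi(theta):
    # Simultaneous gradient of N: L1 = xy - x^2/2 + y^2/2 + x^4/4 - y^4/4 = -L2.
    x, y = theta
    return np.array([y - x + x**3, -x - y + y**3])

def hess(theta):
    # Game Hessian H = grad xi (not symmetric in general).
    x, y = theta
    return np.array([[-1.0 + 3 * x**2, 1.0],
                     [-1.0, -1.0 + 3 * y**2]])

def step_gd(theta, a=0.01):                    # simGD
    return theta - a * xi(theta)

def step_eg(theta, a=0.01):                    # Extragradient
    return theta - a * xi(theta - a * xi(theta))

def step_co(theta, a=0.01, g=0.01):            # Consensus Optimization
    return theta - a * (np.eye(2) + g * hess(theta).T) @ xi(theta)

rng = np.random.default_rng(0)
for name, step in [('simGD', step_gd), ('EG', step_eg), ('CO', step_co)]:
    theta = rng.standard_normal(2)             # bounded continuous initialisation
    for _ in range(100000):
        theta = step(theta)
    # Bounded iterates with non-vanishing gradient norm: a limit cycle, not convergence.
    print(name, theta, np.linalg.norm(xi(theta)))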
For OMD, the previous iterate can be uniquely recovered as θk−1 = (id − αξ)−1(θk) using the proximal point algorithm if ‖H‖ ≤ L and α < 1/L, giving\nGOMD = 2ξ − ξ ◦ (id− αξ)−1 .\nIn all experiments we initialise θ0 following a standard normal distribution and use a learning rate α = 0.01, with γ = 0.01 for CO. Learning rates αi could be chosen to be different for each player i, but we set them to be equal throughout this paper for simplicity. Claims regarding the behavior of each algorithm for sufficiently small αmean that all αi should be sufficiently small. The λ parameter for SGA is obtained by the alignment criterion introduced in the original paper,\nλ = sign ( 〈ξ,HT ξ〉〈AT ξ,HT ξ〉 ) .\nSimilarly, the p parameter for SOS is given by a two-part criterion which need not be described here.\nAccompanying code for all experiments can be found at https://github.com/aletcher/ impossibility-global-convergence." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "We first prove a lemma and state a standard optimization result. Lemma 0. Let G ∈ C1(U,Rd) for an open set U . If G is L-Lipschitz then supθ∈U ‖∇G(θ)‖ ≤ L.\nThe proof is an adaptation of (Panageas & Piliouras, 2017, Lemma 7) for non-convex sets.\nProof. Fix any θ ∈ U and > 0. Since U is open, the ball Br(θ) of radius r centered at θ is contained in U for some r > 0. By Taylor expansion, for any unit vector θ′,\n‖G(θ + rθ′)−G(θ)‖ ≥ r ‖∇G(θ)θ′‖ − o(r) ≥ r ‖∇G(θ)θ′‖ − r for r sufficiently small. Since G is L-Lipschitz, we obtain\nr ‖∇G(θ)θ′‖ ≤ ‖G(θ + rθ′)−G(θ)‖+ r ≤ r(L+ ) . Since was arbitrary, ‖∇G(θ)θ′‖ ≤ L for any unit θ′. By definition of the norm, we obtain\n‖∇G(θ)‖ = sup ‖θ′‖=1\n‖∇G(θ)θ′‖ ≤ L\nfor all θ ∈ U and hence supθ∈U ‖∇G(θ)‖ ≤ L.\nProposition ((Lange, 2013, Prop. 12.4.4) and (Absil et al., 2005, Th. 4.1)). Assume f has LLipschitz gradient and is either analytic or has isolated critical points. Then for any 0 < α < 2/L and θ0 ∈ Rd we have\nlim k ‖θk‖ =∞ or lim k θk = θ̄\nfor some critical point θ̄. If f moreover has compact sublevel sets then the latter holds, limk θk = θ̄.\nWe can now prove Proposition 1, which avoids requiring Lipschitz continuity by proving that iterates are contained in the sublevel set given by θ0 for appropriate learning rate α.\nProposition 1. Assume f ∈ C2 has compact sublevel sets and is either analytic or has isolated critical points. For any θ0 ∈ Rd, define U0 = {f(θ) ≤ f(θ0)} and let L < ∞ be a Lipschitz constant for ∇f in U0. Then for any 0 < α < 2/L we have limk θk = θ̄ for some critical point θ̄.\nProof. Note that ∇f ∈ C1, so f has L-Lipschitz gradient inside any compact set U for some finite L, and supθ∈U‖∇2f(θ)‖ ≤ L by Lemma 0. Now define Uα = {θ − tα∇f(θ) | t ∈ [0, 1], θ ∈ U0} and the continuous function L(α) = supθ∈Uα ∥∥∇2f(θ)∥∥. Notice that U0 ⊂ Uα′ for all α. We prove that αL(α) < 2 implies Uα = U0 and in particular, L(α) = L(0). By Taylor expansion,\nf(θ − tα∇f) = f(θ)− α ‖∇f(θ)‖2 + t 2α2\n2 ∇f(θ)T∇2f(θ − t′α∇f)f(θ)\nfor some t′ ∈ [0, t] ⊂ [0, 1]. Since θ − t′α∇f ∈ Uα, it follows that\nf(θ − tα∇f) ≤ f(θ)− α ‖∇f(θ)‖2 (1− αL(α)/2) ≤ f(θ)\nfor all αL(α) < 2. In particular, θ− tα∇f ∈ U0 and hence Uα = U0. We conclude that αL(α) < 2 implies L(α) = L(0), implying in turn αL(0) < 2. We now claim the converse, namely that αL(0) < 2 implies αL(α) < 2. For contradiction, assume otherwise that there exists α′L(0) < 2 with α′L(α′) ≥ 2. Since αL(α) is continuous and 0L(0) = 0 < 2, there exists ᾱ ≤ α′ such that ᾱL(0) < 2 and ᾱL(ᾱ) = 2. 
This is in contradiction with continuity:\n2 = ᾱL(ᾱ) = lim α→ᾱ− αL(α) = lim α→ᾱ− αL(0) = ᾱL(0) .\nFinally we conclude that Uα = U0 for all αL(0) < 2, and in particular, for all αL < 2. Finally, θk ∈ U0 implies θk+1 ∈ Uα = U0 and hence θk ∈ U0 by induction. The result now follows by applying the previous proposition to f |U0 ." }, { "heading": "C PROOF OF PROPOSITION 2", "text": "Proposition 2. Write λ(A) = Re(Spec(A)) for real parts of the eigenvalues of a matrix A. We have the following implications, and none of them are equivalences.\nmaxλ(H) < 0 minλ(H) < 0\nH ≺ 0 minλ(S) < 0\nmaxλ(Hd) < 0 minλ(Hd) < 0\nThe top row is dynamics-based, governed by the collective Hessian, while the bottom row is gametheoretic wherebyHd = ⊕ ∇iiLi decomposes into agentwise Hessians. The left and right triangles collide respectively to strict maxima and saddles for single losses, since H = S = Hd = ∇2L.\nProof. First note that H ≺ 0 ⇐⇒ S ≺ 0 ⇐⇒ maxλ(S) < 0, so the leftmost term can be replaced by maxλ(S) < 0.\nWe begin with the leftmost implications. If maxλ(S) < 0 then S ≺ 0 by symmetry of S, implying both H ≺ 0 since uTHu = uTSu for all u ∈ Rd, and negative definite diagonal blocks∇2Lii ≺ 0; finally Hd ≺ 0. In particular this implies maxλ(H) < 0 and maxλ(Hd) ≺ 0 since real parts of eigenvalues of a negative definite matrix are negative.\nThe rightmost implications follow as above by contraposition: if minλ(S) ≥ 0 then S 0, which implies H 0 and Hd 0 and hence minλ(H) ≥ 0, minλ(Hd) ≥ 0. The top and bottom implications are trivial.\nThe diagonal implications hold by a trace argument:∑ i λi(H) = Tr(H) = Tr(Hd) = ∑ i λi(Hd) ,\nhence maxλ(H) < 0 implies the LHS is negative and thus ∑ i λi(Hd) < 0. It follows that λi(Hd) < 0 for some i and finally minλ(Hd) < 0. The other diagonal holds identically.\nWe now prove that no implication is an equivalence. For the leftmost implications,\nH = ( −1 2 2 −1 ) has maxλ(Hd) = −1 < 0 while maxλ(S) = 3 > 0, and\nH = ( 2 4 −4 −4 ) has maxλ(H) = −1 < 0 while maxλ(S) = 2 > 0. This also proves the diagonal implications: the first matrix has minλ(Hd) = −1 < 0 but maxλ(H) = 3 > 0, and the second matrix has minλ(H) = −1 < 0 but maxλ(Hd) = 2 > 0. For the rightmost implications, swap the sign of the diagonal elements for the two matrices above.\nThe top and bottom implications are trivially not equivalences:\nH = Hd = ( 1 0 0 −1 ) has minλ(H) = minλ(Hd) = −1 < 0 but maxλ(H) = maxλ(Hd) = 1 > 0." }, { "heading": "D PROOF OF THEOREM 1", "text": "The variable changes\n(x′, y′) = (y,−x) , (x′, y′) = (−y, x) , (x′, y′) = (−x,−y) (†)\nwill be useful, taking the positive quadrant x, y ≥ 0 to the other three. Theorem 1. There is a coercive, nondegenerate, analytic two-player marketM whose only critical point is a strict maximum. In particular, algorithms only have four possible outcomes inM:\n1. Iterates are unbounded, and all players diverge to infinite loss. [Not global]\n2. Iterates are bounded and converge to the strict maximum. [Not reasonable]\n3. Iterates are bounded and converge to a non-critical point. [Not reasonable]\n4. Iterates are bounded but do not converge (cycle). [Not global]\nFor intuition purposes, M was constructed by noticing that there is no necessary reason for the local minima of two coercive losses to coincide: the gradients of each loss may only simultaneously vanish at a local maximum in each player’s respective coordinate. The highest-order terms (first and last) provide coercivity in both coordinates while still having zero-sum interactions. 
The −x2 and −y2 terms yield a strict local maximum at the origin, while the ±xy terms provide opposite incentives around the origin, preventing any other simultaneous critical point to arise.\nProof. Write θ = (x, y) and consider the analytic marketM given by\nL1 = x6/6− x2/2 + xy + 1 4\n( y4\n1 + x2 − x\n4\n1 + y2 ) L2 = y6/6− y2/2− xy − 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2\n)\nwith simultaneous gradient\nξ = x5 − x+ y − y4x2(1+x2)2 − x31+y2 y5 − y − x− x\n4y 2(1+y2)2 − y3 1+x2 . We prove ‘by hand’ that the origin θ̄ = 0 is the only critical point (solution to ξ = 0). See further down for an easier approach based on Sturm’s theorem, computer-assisted though equally rigorous.\nWe can assume x, y ≥ 0 since any other solution can be obtained by a quadrant variable change (†). Now assume for contradiction that ξ = 0 with y 6= 0.\n1. We first show that y > 1. Indeed,\n0 = ξ2 = y 5 − y − x− x\n4y\n2(1 + y2)2 − y\n3\n1 + x2 < y5 − y = y(y4 − 1)\nimplies y > 1 since y ≥ 0.\n2. We now show that y < 1.5. First assume for contradiction that x ≥ y, then\nξ1 = y − x+ x5 − xy4 2(1 + x2)2 − x 3 1 + y2 > 1− x+ x5 − x5/8− x3/2 := h(x) .\nNow h′(x) = 35\n8 x4 − 3 2 x2 − 1\nhas unique positive root\nx0 =\n√ 6 + 2 √ 79\n35\nand h(x)→∞ as x→∞, hence h attains its minimum at x0 and plugging x0 yields a contradiction\nξ1 > h(x0) > 0 .\nWe conclude that x < y, but combining this with x ≥ 0 yields\nξ2 > −2y + y5 − y5/8− y3 = y(7y4/8− y2 − 2) > 7y4/8− y2 − 2 > 0\nfor all y ≥ 1.5, since the rightmost polynomial is positive at y = 1.5 and has positive derivative\n7y3/2− 2y = y(7y2/2− 2) ≥ 7(1.5)2/2− 2 > 0 .\nWe must therefore have y < 1.5 as required.\n3. It remains only to show that ξ1 > 0 for all 1 < y < 1.5. First notice that fx(y) = ξ1(x, y) is concave in y for any fixed x ≥ 0 since\nf ′x(y) = 1− 2y3x\n(1 + x2)2 + 2x3\ny\n(1 + y2)2\nand so\nf ′′x (y) = − 6y2x\n(1 + x2)2 + 2x3\n1 + y2 − 4y2\n(1 + y2)3 = − 6y\n2x\n(1 + x2)2 − 2x3 3y 2 − 1 (1 + y2)3 ≤ 0\nfor y > 1. It follows that fx attains its infimum on the boundary y ∈ {1, 1.5}, so it suffices to check that ξ1(x, 1) > 0 and ξ1(x, 1.5) > 0 for all x ≥ 0. First notice that\ng(x) := x\n2(1 + x2)2\nsatisfies\ng′(x) = 1 + x2 − 4x2\n2(1 + x2)2 =\n1− 3x2\n2(1 + x2)2 ,\nwhich has a unique positive root at x0 = 1/ √\n3. This critical point of g must be a maximum since g(x) > 0 for x > 0 and g(x)→ 0 as x→∞. It follows that\ng(x) ≤ g(x0) = 1\n2 √ 3(1 + 1/3)2 = 3 √ 3/32 .\nWe now obtain ξ1(x, 1) ≥ x5 − x3/2− x+ 1− 3 √ 3/32 := p(x)\nand ξ1(x, 1.5) ≥ x5 − 4x3/13− x+ 1.5− (1.5)43 √ 3/32 := q(x) .\nNotice that p′(x) = 5x4 − 3x2/2− 1\nhas unique positive root\nx0 =\n√ 3 + √ 89\n20\nand p(x)→∞ as x→∞, hence p attains its minimum at x0 and plugging x0 yields\nξ1(x, 1) ≥ p(x0) > 0 .\nSimilarly for q we have q′(x) = 5x4 − 12x2/13− 1\nhas unique positive root\nx0 =\n√ 6 + √ 881\n65\nand plugging x0 yields ξ1(x, 1.5) ≥ q(x0) > 0 .\nWe conclude that ξ1(x, y) ≥ min(ξ1(x, 1), ξ1(x, 1.5)) > 0\nand the contradiction is complete, hence y = 0. Finally ξ2 = 0 = x, so θ̄ = 0 is the unique critical point as required. Now the Hessian at θ̄ is\nH(θ̄) = ( −1 1 −1 −1 ) ,\nwhich is negative definite since S(θ̄) = −I ≺ 0, so θ̄ is a nondegenerate strict maximum and M is nondegenerate. It remains only to prove coercivity of M, namely coercivity of L1 and L2. Coercivity of L1 follows by noticing that the dominant terms are x6/6 and y4/(1 + x2). 
Formally, first note that x 4\n1+y2 ≤ x 4, hence\nL1 ≥ x6/6− x4/4− x2/2 + xy + 1 4\n( y4\n1 + x2\n) .\nNow xy ≥ −|xy| ≥ −(2x2 + y2/8) by Young’s inequality, hence\nL1 ≥ x6/6− x4/4− 5x2/2− y2/8 + 1 4\n( y4\n1 + x2\n) .\nFor any sequence ‖θ‖ → ∞, either |x| → ∞ or |x| is bounded above by some k ∈ R and |y| → ∞. In the latter case, we have\nlim ‖θ‖→∞ L1 ≥ lim |y|→∞\n−k4/4− 5k2/2− y2/8 + y 4\n4(1 + k2) =∞\nsince the leading term y4 is of even degree and has positive coefficient, so we are done. Otherwise, for |x| → ∞, we pursue the previous inequality to obtain\nL1 ≥ x6/6− x4/4− 5x2/2 + y 2\n8\n( 2y2 1 + x2 − 1 ) .\nNow notice that y2 ≥ x2 ≥ 1 implies\nL1 ≥ x6/6− x4/4− 5x2/2 + x 2\n8 ( x2 − 1 1 + x2 ) ≥ x6/6− x4/4− 5x2/2− x2/8 .\nOn the other hand, x2 ≥ y2 also implies L1 ≥ x6/6− x4/4− 5x2/2− x2/8\nby discarding the first (positive) term in the brackets. Both cases lead to the same inequality and hence, for any sequence with |x| → ∞,\nlim ‖θ‖→∞ L1 ≥ lim |x|→∞ x6/6− x4/4− 5x2/2− x2/8 =∞\nsince the leading term x6 has even degree and positive coefficient. Hence L1 is coercive, and the same argument holds for L2 by swapping x and y. As required we have constructed a coercive, nondegenerate, analytic two-player marketM whose only critical point is a strict maximum. In particular, any algorithm either has unbounded iterates with infinite losses or bounded iterates. If they are bounded, they either fail to converge or converge. If they converge, they either converge to a non-critical point or a critical point, which can only be the strict maximum.\n[For an alternative proof that θ̄ = 0 is the only critical point, we may take advantage of computer algebra systems to find the exact number of real roots using the resultant matrix and Sturm’s theorem. Singular (Decker et al., 2019) is one such free and open-source system for polynomial computations, backed by published computer algebra references. In particular, the rootsur library used below is based on the book by Basu et al. (2006). First convert the equations into polynomials:{\n2(1 + x2)2(1 + y2)(x5 − x+ y)− y4x(1 + y2)− 2x3(1 + x2)2 = 0 2(1 + y2)2(1 + x2)(y5 − y − x)− x4y(1 + x2)− 2y3(1 + y2)2 = 0 .\nWe compute the resultant matrix determinant of the system with respect to y, a univariate polynomial P in x whose zeros are guaranteed to contain all solutions in x of the initial system. We then use the Sturm sequence of P to find its exact number of real roots. This is implemented with the Singular code below, whose output is 1.\nLIB \"solve.lib\"; LIB \"rootsur.lib\"; ring r = (0,x),(y),dp; poly p1 = 2*(1+xˆ2)ˆ2*(1+yˆ2)*(xˆ5-x+y)-yˆ4*x*(1+yˆ2)-2*xˆ3*(1+xˆ2)ˆ2; poly p2 = 2*(1+yˆ2)ˆ2*(1+xˆ2)*(yˆ5-y-x)-xˆ4*y*(1+xˆ2)-2*yˆ3*(1+yˆ2)ˆ2; ideal i = p1,p2; poly f = det(mp_res_mat(i)); ring s = 0,(x,y),dp; poly f = imap(r, f); nrroots(f);\nWe know that θ̄ = 0 is a real solution, so θ̄ must be the unique critical point.]" }, { "heading": "E PROOF OF THEOREM 2", "text": "Theorem 2. Given a reasonable algorithm with bounded continuous distribution on θ0 and a real number > 0, there exists a coercive, nondegenerate, almost-everywhere analytic two-player marketMσ with a strict minimum and no other critical points, such that θk either cycles or diverges to infinite losses for both players with probability at least 1− .\nProof. We modify the construction from Theorem 1 by deforming a small region around the maximum to replace it with a minimum. 
First let 0 < σ < 0.1 and define\nfσ(θ) =\n{ (x2 + y2 − σ2)/2 if ‖θ‖ ≥ σ\n(y2 − 3x2)(x2 + y2 − σ2)/(2σ2) otherwise, where θ = (x, y) and ‖θ‖ = √ x2 + y2 is the standard L2-norm. Note that fσ is continuous since\nlim ‖θ‖→σ+ fσ(θ) = 0 = lim ‖θ‖→σ− fσ(θ) .\nNow consider the two-player marketMσ given by\nL1 = x6/6− x2 + fσ + xy + 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2 ) L2 = y6/6− fσ − xy − 1\n4\n( y4\n1 + x2 − x\n4\n1 + y2\n) .\nThe resulting losses are continuous but not differentiable; however, they are analytic (in particular smooth) almost everywhere, namely, for all θ not on the circle of radius σ. This is sufficient for the purposes of gradient-based optimization, noting that neural nets also fail to be everywheredifferentiable in the presence of rectified linear units.\nWe claim thatMσ has a single critical point at the origin θ̄ = 0. First note that\nξMσ = ξM0 = x5 − x+ y − y4x2(1+x2)2 − x31+y2 y5 − y − x− x\n4y 2(1+y2)2 − y3 1+x2 = ξM for all ‖θ‖ ≥ σ, where M is the game from Theorem 1. It was proved there that the only real solution to ξ = 0 is the origin, which does not satisfy ‖θ‖ ≥ σ. Any critical point must therefore satisfy ‖θ‖ < σ, for which\nξ = ξMσ = x5 + x+ y − 2x(3x2 + y2)/σ2 − y4x2(1+x2)2 − x31+y2 y5 + y − x− 2y(y2 − x2)/σ2 − x\n4y 2(1+y2)2 − y3 1+x2 . First note that θ̄ = 0 is a critical point; we prove that there are no others. The continuous parameter σ prevents us from using a formal verification system, so we must work ‘by hand’. Warning: the proof is a long inelegant string of case-by-case inequalities.\nAssume for contradiction that ξ = 0 with θ 6= 0. First note that ‖θ‖ < σ implies |x|, |y| < σ, and x = 0 or y = 0 implies x = y = 0 using ξ1 = 0 or ξ2 = 0 respectively. We can therefore assume 0 < |x|, |y| < σ. We can moreover assume that x > 0, the opposite case following by the quadrant change of variables (x′, y′) = (−x,−y).\n1. We begin with the case σ/2 ≤ x < σ. First notice that\nx+ y − 2x(3x2 + y2)/σ2 = x(1− 6x2/σ2) + y(1− 2xy/σ2) ≤ x(1− 3/2) + y(1− y/σ)\nand the rightmost term attains its maximum value for y = σ/2, hence\nx+ y − 2x(3x2 + y2)/σ2 ≤ −x/2 + σ/4 ≤ 0 .\nThis implies\nξ1 ≤ x5 − y4x 2(1 + x2)2 − x 3 1 + y2 < x5 − x 3 1 + y2 < x3\n( 1− y2 − 1\n1 + y2\n) = −x3y4\n1 + y2 < 0\nusing x2 + y2 < 1, which is a contradiction to ξ = 0.\n2. We proceed with the case x < σ/2 and |y| ≤ σ/2. First, y < 0 implies the contradiction\nξ2 < y − 2y3/σ2 − x4y 2(1 + y2)2 − y 3 1 + x2 < y/2− y\n( σ4\n25 + σ2 22\n) < y ( 1\n2 − 1 25 − 1 22\n) < 0 ,\nso we can assume y > 0. In particular we have (1− 2y(y + x)/σ2) > 0. If y ≤ x, we also obtain\nξ2 < y 5 + (y − x) ( 1− 2y(y + x)/σ2 ) − y 3\n1 + x2 < y3\n( y2 − 1\n1 + x2\n) < −y3x4\n1 + x2 < 0 ,\nso we can assume x < y. There are again two cases to distinguish. If x < σ/2− bσ2 with b = 0.08,\nx(1− 6x2/σ2) + y(1− 2xy/σ2) > x(1− 3(1/2− σb)) + x(1− (1/2− σb)) > 4σbx\nwhich implies the contradiction\nξ1 > 4σbx− y4x 2(1 + x2)2 − x 3 1 + y2 > σx\n( 4b− σ 4\n25 − σ\n2\n22\n) > σx ( 4b− 1\n25 − 1 22\n) > 0 .\nFinally assume x ≥ σ/2− bσ2. Then we have\n(y−x)(1− 2y(x+ y)/σ2) < bσ2(1− 4x2/σ2) < bσ2(1− (1− 2σb)2) = 4σ3b2(1−σb) < 4σ3b2\nand obtain\nξ2 < y 5 + 4σ3b2 − y\n3\n1 + x2 < σ3\n( σ2/25 + 4b2 − (1/2− σb) 3\n1 + σ2/4\n) .\nWe claim that the rightmost term is negative. Indeed, the quantity inside the brackets has derivative\nσ/24 + (1/2− σb)2 (1 + σ2/4)2 ( 3b(1 + σ2/4) + σ(1/2− σb)/2 ) > 0\nand so its supremum across σ ∈ [0, 0.1] must be attained at σ = 0.1. 
We obtain the contradiction\nξ2 < σ 3 ( 0.01/25 + 4b2 − (1/2− b) 3\n1 + 0.01/4\n) < 0\nfor b = 0.08 and σ > 0, as required.\n3. Finally, consider the case x < σ/2 and |y| > σ/2. First, y < 0 implies the contradiction\nξ1 < x+ y − 2x(3x2 + y2)/σ2 < −2x(3x2 + y2) < 0\nso we can assume y > 0. Now assume y < σ − x(1 + σ2). Then\nx(1− 6x2/σ2) + y(1− 2xy/σ2) > −x/2 + y(1− y/σ) > −x/2 + x(1 + σ2) > x(1/2 + σ2) ,\nwhich yields the contradiction\nξ1 > x\n( 1\n2 + σ2 − y\n4\n2(1 + x2)2 − x\n2\n1 + y2\n) > x ( 1/2 + σ2 − σ4 − σ2/4 ) > x(1/2− 1/4) > 0 .\nWe can therefore assume y ≥ σ − x(1 + σ2). We have\n(y − x)(1− 2y(y + x)/σ2) < (y − x)(1− (y + x)/σ) ≤ (y − x)(1− (1− σx)) < σx(y − x)\nwhich attains its maximum in x at x = y/2, hence\nξ2 < y 5 − y\n3 1 + x2 + σy2 4 < σy2 4\n( 4σ2 − 2\n1 + σ2 + 4\n) .\nFinally we obtain the contradiction\nξ2 < σy2\n4\n( 5σ2 + 4σ4 − 1\n1 + σ2\n) < 0\nfor all σ < 0.1. All cases lead to contradictions, so we conclude that θ̄ is the only critical point, with positive definite Hessian\nH(θ̄) = ( 1 1 −1 1 ) 0 ,\nhence θ̄ is a strict minimum. Now notice thatM0 has the same dominant terms asM from Theorem 1, so coercivity ofM0 follows from the same argument. SinceMσ is identical toM0 outside the σ-ball Bσ = {(x, y) ∈ R2 | ‖θ‖ < σ}, coercivity ofM0 implies coercivity ofMσ for any σ. Fix any reasonable algorithm F , any bounded continuous measure ν on Rd with initial region U , and any > 0. We abuse notation somewhat and write F kσ (θ0) for the kth iterate of F inMσ with initial parameters θ0. We claim that there exists σ > 0 such that\nPν ( θ0 ∈ U and lim\nk F kσ (θ0) = θ̄\n) < .\nSince θ̄ is the only critical point and Mσ is coercive, this implies bounded but non-convergent iterates or divergent iterates with infinite losses with probability at least 1− , proving the theorem. To begin, µ(Bσ)→ 0 as σ → 0 implies that we can pick σ′ > 0 such that Pν(θ0 ∈ Bσ′) < /2 by continuity of ν with respect to Lebesgue measure.\nNow let Ū be the closure of U and define D = Ū ∩ {‖θ‖ ≥ σ′}. Note that D is compact since Ū is compact and closed subsets of a compact set are compact. F is reasonable, D is bounded and θ̄ = 0 is a strict maximum inM0, so there are hyperparameters such that the stable set\nZ = {θ0 ∈ D | lim k F k0 (θ0) = 0}\nhas zero measure. We claim that Zδ := {θ0 ∈ D | inf\nk∈N ∥∥F k0 (θ0)∥∥ < δ} has arbitrarily small measure as δ → 0. Assume for contradiction that there exists α > 0 such that µ(Zδ) ≥ α for all δ > 0. Then Zδ ⊂ Zδ′ and µ(Zδ) ≤ µ(D) <∞ for all δ < δ′ implies\nµ (⋂ n∈N Z 1 n ) = lim n→∞ µ ( Z 1 n ) ≥ α\nby Nelson (2015, Exercise 1.19). On the other hand,⋂ n∈N Z 1 n = Z0\nyields the contradiction 0 = µ(Z0) ≥ α. We conclude that Zδ has arbitrarily small measure, hence there exists δ > 0 such that Pν(θ0 ∈ Zδ) < /2 by continuity of ν. Now let σ = min{σ′, δ} and notice that\nθ0 ∈ D \\ Zδ =⇒ inf k ∥∥F k0 (θ0)∥∥ ≥ δ ≥ σ =⇒ inf k ∥∥F kσ (θ0)∥∥ ≥ σ , where the last implication holds since Mσ and M0 are indistinguishable in {‖θ‖ ≥ σ}, so the algorithm must have identical iterates F kσ (θ0) = F k 0 (θ0) for all k. It follows by contraposition that limk F k σ (θ0) = θ̄ implies infk\n∥∥F kσ (θ0)∥∥ < σ and so θ0 ∈ Zδ or θ0 /∈ D. Finally we obtain Pν ( θ0 ∈ U and lim\nk F kσ (θ0) = θ̄\n) = Pν (θ0 ∈ U ∩ Zδ or θ0 ∈ U \\D)\n≤ Pν (θ0 ∈ U ∩ Zδ) + Pν (θ0 ∈ U \\D) ≤ Pν (θ0 ∈ Zδ) + Pν (θ0 ∈ Bσ′) < /2 + /2 =\nas required. We plot iterates for a single run of each algorithm in Figure 3 with α = γ = 0.01." }, { "heading": "F PROOF OF THEOREM 3", "text": "Theorem 3. 
There is a weakly-coercive, nondegenerate, analytic two-player zero-sum game N whose only critical point is a strict maximum. Algorithms in A almost surely have bounded nonconvergent iterates in N for α, γ sufficiently small.\nProof. Consider the analytic zero-sum game N given by L1 = xy − x2/2 + y2/2 + x4/4− y4/4 = −L2\nwith simultaneous gradient\nξ = ( y − x+ x3 −x− y + y3 ) and Hessian\nH = ( −1 + 3x2 1 −1 −1 + 3y2 ) .\nWe show that the only solution to ξ = 0 is the origin. First we can assume x, y ≥ 0 since any other solution can be obtained by a quadrant variable change (†). Now assume for contradiction that y 6= 0, then ξ2 = 0 = −x− y + y3 ≤ −y + y3 = y(y2 − 1) implies y ≥ 1 and hence\nξ1 = 0 = y − x+ x3 ≥ 1− x+ x3 = (x+ 1)(x− 1)2 + x2 > 0 which is a contradiction. It follows that y = 0 and hence ξ2 = 0 = x as required. Now the origin has invertible, negative-definite Hessian\nH(0) = ( −1 1 −1 −1 ) ≺ 0\nso the unique critical point is a strict maximum. The game is nondegenerate since the only critical point has invertible Hessian. The game is weakly-coercive since L1(x, ȳ) → ∞ for any fixed ȳ by domination of the x4 term; similarly for L2(x̄, y) by domination of the y4 term.\nBounded iterates: strategy. We begin by showing that all algorithms have bounded iterates inN for α, γ sufficiently small. For each algorithm F , our strategy is to show that there exists r > 0 such that for any s > 0 we have ‖F (θ)‖ < ‖θ‖ for all r < ‖θ‖ < s and α, γ sufficiently small. This will be enough to prove bounded iteration upon bounded initialisation. Denote by Br the ball of radius r centered at the origin.\nGD. We have θT ξ = x(y − x+ x3) + y(−x− y + y3)\n= x4 − x2 + y4 − y2\n= (x2 − 1)2 + (y2 − 1)2 + x2 + y2 − 2 > 1\nfor all ‖θ‖2 = x2 + y2 > 3. For any s > 0 we obtain ‖F (θ)‖2 = ‖θ − αξ‖2 = ‖θ‖2 − 2αθT ξ + α2 ‖ξ‖2 < ‖θ‖ − α ( 2− α ‖ξ‖2 ) < ‖θ‖2\nfor all √\n3 < ‖θ‖ < s and α sufficiently small, namely 0 < α < 2/ supθ∈Bs ‖ξ‖ 2.\nEG. For any s > 0 and √\n4 < ‖θ‖ < s we have ‖θ − αξ(θ)‖2 > 4− 2αθT ξ > 3\nfor α < 1/ supθ∈Bs 2θ T ξ. Now using θT ξ > 1 for all ‖θ‖2 > 3 by the argument for GD above,\n‖F (θ)‖2 = ‖θ‖2 − 2αθT ξ(θ − αξ(θ)) + α2 ‖ξ(θ − αξ(θ))‖2\n= ‖θ‖2 − 2α(θ − αξ(θ))T ξ(θ − αξ(θ)) +O(α2) < ‖θ‖2 − α (2−O(α)) < ‖θ‖2\nfor α sufficiently small.\nAGD. For any s > 0, notice by continuity of ξ that there exists δ > 0 such that\nθT (ξ1, ξ2(θ1 − αξ1, θ2)) > θT ξ − 1/2 for all α < δ and θ ∈ Bs, since Bs is bounded and θ1 − αξ1 → θ1 as α→ 0. It follows that\n‖F (θ)‖2 = ‖θ‖2 − 2αθT (ξ1, ξ2(θ1 − αξ1, θ2)) +O(α2) < ‖θ‖2 − 2α(θT ξ − 1/2) +O(α2) < ‖θ‖2 − 2α(1− 1/2) +O(α2) < ‖θ‖2 − α(1−O(α)) < ‖θ‖2\nfor all √ 3 < ‖θ‖ < s and α < δ sufficiently small.\nOMD. For any s > 0, notice by continuity of ξ that there exists δ > 0 such that∣∣θT (ξ(θ)− ξ((id− αξ)−1(θ))∣∣ < 1/2 for all α < δ and θ ∈ Bs, since Bs is bounded and (id− αξ)−1(θ)→ θ as α→ 0. It follows that\n‖F (θ)‖2 = ‖θ‖2 − 2αθT ξ − 2αθT (ξ(θ)− ξ((id− αξ)−1(θ)) +O(α2) < ‖θ‖2 − 2α+ α+O(α2) = ‖θ‖2 − α(1−O(α)) < ‖θ‖2\nfor all √ 3 < ‖θ‖ < s and α < δ sufficiently small.\nCO, CGD, LA, LOLA, SOS. Writing ν for γ if F = FCO and ν for α otherwise, for each algorithm we have F (θ) = θ − αξ + ανK for some continuous function K : Rd → R. For instance, K = −HT ξ for CO (see Appendix A). We obtain\n‖F (θ)‖2 = ‖θ − αξ + ανK‖2\n= ‖θ‖2 − 2αθT ξ + 2ανθTK − 2α2νξTK + α2 ‖ξ‖2 + α2ν2 ‖K‖ = ‖θ‖2 − α ( 2θT ξ − 2νθTK + 2ανξTK − α ‖ξ‖2 − αν2 ‖K‖ ) .\nNotice that every term in the brackets contains an α or ν except for the first. 
We have already shown that θT ξ > 1 for all ‖θ‖2 > 3 for GD above, hence for any s > 0 we have\n‖F (θ)‖2 < ‖θ‖2 − α (\n2− 2ν sup θ∈Bs θTK + 2αν inf θ∈Bs ξTK − α sup θ∈Bs ‖ξ‖2 − α sup θ∈Bs\nν2 ‖K‖ )\n= ‖θ‖2 − α (2−O(α, ν)) < ‖θ‖2\nfor all √ 3 < ‖θ‖2 < s and α, ν sufficiently small.\nSGA. The situation differs from the above since parameter λ follows an alignment criterion, namely λ = sign ( 〈ξ,HT ξ〉〈AT ξ,HT ξ〉 ) , which cannot be made small. First note that\nθTGSGA = θ tξ + λθT (AT ξ) = x4 + y4 − x2 − y2 + λ(x2 + y2 + x3y − xy3) .\nIf λ = −1, θTGSGA = x 4 + y4 − 2x2 − 2y2 − x3y + xy3 and splitting x4 + y4 in two yields\nx4 + y4\n2 − 2x2 − 2y2 = 1 4\n[ (x2 − y2)2 + (x2 + y2)(x2 + y2 − 8) ] > 1\nfor ‖θ‖2 = x2 + y2 > 9, while\nx4 + y4\n2 − x3y + xy3 = 1 2\n[ (−x2 + xy + y2)2 + x2y2 ] > 0\nfor ‖θ‖ > 0. Summing the two yields θTGSGA > 1 for ‖θ‖2 > 9 and λ = −1. If λ = 1,\nθTGSGA = x 4 + y4 + x3y − xy3\n= x4 + y4 − 2x2 − 2y2 + x3y − xy3 + 2(x2 + y2) ≥ x4 + y4 − 2x2 − 2y2 + x3y − xy3 > 1\nfor ‖θ‖2 > 9 by swapping x and y in the λ = −1 case above. We conclude θTGSGA > 1 for ‖θ‖2 > 9 regardless of λ. For any s > 0 we obtain\n‖F (θ)‖2 = ‖θ‖2 − 2αθTGSGA + α2 ‖GSGA‖2 < ‖θ‖2 − α ( 2− α ‖GSGA‖2 ) < ‖θ‖2\nfor all 3 < ‖θ‖ < s and α < 2/ supθ∈Bs GSGA.\nBounded iterates: conclusion. Now assume as usual that θ0 is initalised in any bounded region U . For each algorithm we have found r such that for any s > 0 we have ‖F (θ)‖ < ‖θ‖ for all r < ‖θ‖ < s and α, γ sufficiently small. Now pick r′ ≥ r such that U ⊂ Br′ . Define the bounded region V = {θ − tG(θ) | t ∈ [0, 1], θ ∈ Br′} . and pick s ≥ r′ such that V ⊂ Bs. By the above we have ‖F (θ)‖ < ‖θ‖ for all r < ‖θ‖ < s and α, γ sufficiently small. In particular, fix any α, γ < 1 satisfying this condition. We claim that F (θ) ∈ Bs for all θ ∈ Bs. Indeed, either θ ∈ Br implies F (θ) = θ − αG(θ) ∈ V ⊂ Bs or θ /∈ Br implies ‖F (θ)‖ < ‖θ‖ < s and so F (θ) ∈ Bs. We conclude that θ0 ∈ U ⊂ Bs implies bounded iterates θk = F k(θ) ∈ Bs for all k.\nNon-convergence: strategy. We show that all methods inA have the origin as unique fixed points for α, γ sufficiently small. Fixed points of each gradient-based method are given by G = 0, where G is given in Appendix A, and we moreover show that the Jacobian ∇G at the origin is negativedefinite. Non-convergence will follow from this for α sufficiently small.\nGD. Fixed points of simultaneous GD correspond by definition to critical points:\nGGD = ξ = 0 ⇐⇒ θ = 0 .\nThe Jacobian of G at 0 is\n∇ξ = H = ( −1 1 −1 −1 ) ≺ 0 .\nAGD. We have GAGD = 0 ⇐⇒ { ξ1 = 0\nξ2(θ1 − αξ1, θ2) = 0 ⇐⇒\n{ ξ1 = 0\nξ2 = 0 ⇐⇒ ξ = 0 ⇐⇒ θ = 0 .\nNow\nξ2(x− αξ1(x, y), y) = −(x− α(y − x+ x3))− y + y3\n= x(−1− α) + y(−1 + α) + αx3 + y3\nso the Jacobian at the origin is\nJAGD = ( −1 1 −1− α −1 + α ) with symmetric part\nSAGD = ( −1 −α/2 −α/2 −1 + α ) which has negative trace for all α < 2 and positive determinant\n−α2/2− α+ 1 = −(α+ 1)2/2 + 3/2 > −9/8 + 3/2 > 0\nfor all α < 1/2, which together imply negative eigenvalues and hence SAGD ≺ 0. Recall that a matrix is negative-definite iff its symmetric part is, hence JAGD ≺ 0 for all α < 1/2.\nEG. We have\nGEG = ξ ◦ (id− αξ) = 0 ⇐⇒ id− αξ = 0 ⇐⇒\n{ x− α(y − x+ x3) = 0\ny − α(−x− y + y3) = 0 .\nWe have shown that any bounded initialisation results in bounded iterates for EG for α sufficiently small. Let U be this bounded region and assume for contradiction that id − αξ = 0 with x, y 6= 0 (noting that x = 0 implies y = 0 by the first equation and vice-versa). 
We can assume x, y > 0 since any other solution can be obtained by a quadrant change of variable (†). We first prove that x, y < 1 for 0 < α < 1/ supθ∈U{y − x+ x3}. Indeed we have\n0 = ξ1 > x− α sup θ∈U > x− 1\nhence x < 1. A similar derivation holds for y, hence 0 < x, y < 1. But now x ≥ y implies\n0 = ξ1 ≥ x− α(y − y + x3) = x(1− αx2) ≥ x(1− α) > 0\nfor α < 1 while x < y implies\n0 = ξ2 ≥ y − α(−x− x+ y3) = y(1− αy2) ≥ y(1− α) > 0\nand the contradiction is complete, hence θ = 0 is the only fixed point of EG. Now JEG = H(I − αH) = ( −1 1 −1 −1 )( 1 + α −α α 1 + α ) = ( −1 1 + 2α −1− 2α −1 ) with SEG = −I ≺ 0, hence JEG ≺ 0 for all α.\nOMD. By Daskalakis & Panageas (2018, Remark 1.5), fixed points of OMD must satisfy ξ = 0 by viewing OMD as mapping pairs (θk, θk−1) to pairs (θk+1, θk), hence θ = 0. Now\nJOMD = 2H −H(I − αH)−1 = 2 ( −1 1 −1 −1 ) − 1\n1 + 2α+ 2α2\n( −1− 2α 1 −1 −1− 2α ) .\nNow notice that 1 + 2α\n1 + 2α+ 2α2 ≤ 1\nand so\nSOMD =\n( −2 + 1+2α1+2α+2α2 0\n0 −2 + 1+2α1+2α+2α2\n) ≺ 0\nfor all α.\nCO. We have GCO = (I + γH T )ξ = 0 ⇐⇒ ξ = 0 ⇐⇒ θ = 0\nfor all γ since the matrix\n(I + γHT ) = ( 1− γ −γ γ 1− γ ) is always invertible with determinant (1− γ)2 + γ2 > 0. Now\nJCO = (I + γH T )H = ( 1− γ −γ γ 1− γ )( −1 1 −1 −1 ) = ( −1 + 2γ 1 −1 −1 + 2γ ) ≺ 0\nfor all γ < 1/2.\nSGA. We have GSGA = (I + λA\nT )ξ = 0 ⇐⇒ ξ = 0 ⇐⇒ θ = 0 since antisymmetric A with eigenvalues ia, a ∈ R implies that I + λAT is always invertible with eigenvalues 1 + iλa 6= 0. Now recall that λ is given by\nλ = sign ( 〈ξ,HT ξ〉〈AT , HT ξ〉 ) = sign ( ξTHT ξ · ξTAHT ξ ) .\nWe have\nHT =\n( −1 + 3x2 −1 1 −1 + 3y2 ) ≺ 0\nand\nAHT =\n( 1 −1 + 3y2\n1− 3x2 1\n) 0\nfor all ‖θ‖ sufficiently small, hence ξTHT ξ ≤ 0 and ξTAHT ξ ≥ 0 and thus λ = sign ( 〈ξ,HT ξ〉〈AT , HT ξ〉 ) = sign ( ξTHT ξ · ξTAHT ξ ) ≤ 0\naround the origin. Now\nJSGA = (I + λA T )H = ( 1 −λ λ 1 )( −1 1 −1 −1 ) = ( −1 + λ 1 + λ −1− λ −1 + λ ) ≺ 0\nfor all λ < 1, which holds in particular for λ ≤ 0.\nCGD. Note that Ho = ( 0 1 −1 0 ) = A\nis antisymmetric, hence I + αHo is always invertible as for SGA and\nGCGD = (I + αHo) −1ξ = 0 ⇐⇒ ξ = 0 ⇐⇒ θ = 0 .\nNow\nJCGD = (I + αHo) −1H =\n1\n1 + α2\n( 1 −α α 1 )( −1 1 −1 −1 ) =\n1\n1 + α2\n( −1 + α 1 + α −1− α −1 + α ) ≺ 0\nfor all α < 1.\nLA. As above, GLA = (I − αHo)ξ = 0 ⇐⇒ ξ = 0 ⇐⇒ θ = 0\nsince (I − αHo) is always invertible. Now JLA = (I − αHo)H = (I − αA)H = ( −1 + α 1 + α −1− α −1 + α ) ≺ 0\nfor all α < 1.\nLOLA. Notice that\ndiag ( HTo ∇L ) = diag (( 0 −1 1 0 )( y − x+ x3 −y + x− x3 x+ y − y3 −x− y + y3 )) = ( −x− y + y3 −y + x− x3 ) = Hoξ\nand so GLOLA = (I − αHo)ξ − α diag ( HTo ∇L ) = (I − 2αHo)ξ ⇐⇒ ξ = 0 ⇐⇒ θ = 0\nas for LA. Similarly, substituting 2α for α in the derivation for LA yields\nJLOLA = (I − 2αHo)H ≺ 0\nfor all α < 1/2.\nSOS. As for LOLA we have GSOS = (I − αHo)ξ − pα diag ( HTo ∇L ) = (I − α(1 + p)Ho)ξ ⇐⇒ ξ = 0 ⇐⇒ θ = 0\nfor any α, p. Now p(θ̄) = 0 for fixed points θ̄ by Letcher et al. (2019b, Lemma D.7), hence\nJSOS = JLA = ( −1 + α 1 + α −1− α −1 + α ) ≺ 0\nfor all α < 1.\nNon-convergence: conclusion. We conclude that all algorithms in A have the origin as unique fixed points, with negative-definite Jacobian, for α, γ sufficiently small. If a method converges, it must therefore converge to the origin. We show that this occurs with zero probability. One may invoke the Stable Manifold Theorem from dynamical systems, but there is a more direct proof.\nTake any algorithm F in A and let U be the initialisation region. 
We prove that the stable set\nZ = {θ0 ∈ U | lim k F k(θ0) = 0}\nhas Lebesgue measure zero for α sufficiently small. First assume for contradiction that θk → 0 with θk 6= 0 for all k. Then\nG(θk) = G(0) +∇G(0)θk +O(‖θk‖2) = ∇G(θ̄)(θk) +O(‖θk‖2)\nsince G(0) = 0, and we obtain\n‖θk+1‖2 = ‖θk − αG(θk)‖2\n= ‖θk‖2 − 2αθTkG(θk) + α2 ‖G(θk)‖ 2\n≥ ‖θk‖2 − 2αθTk∇G(0)θk +O(‖θk‖ 3 ) > ‖θk‖2\nfor all k sufficiently large, since ∇G(0) ≺ 0. This is a contradiction to θk → 0, so θk → 0 implies θk = 0 for some k and so, writing FU : U → Rd for the restriction of F to U ,\nZ ⊂ ∪∞k=0F−kU ({0}) .\nWe claim that FU is a C1 local diffeomorphism, and a diffeomorphism onto its image. Now GU is C1 with bounded domain, hence L-Lipschitz for some finite L. By Lemma 0, the eigenvalues of∇G in U satisfy |λ| ≤ ‖∇G‖ ≤ L, hence ∇FU = I − α∇GU has eigenvalues 1 − αλ ≥ 1 − α|λ| ≥ 1 − αL > 0. It follows that ∇FU is invertible everywhere, so FU is a local diffeomorphism by the Inverse Function Theorem (Spivak, 1971, Th. 2.11). To prove that FU : U → F (U) is a diffeomorphism, it is sufficient to show injectivity of FU . Assume for contradiction that FU (θ) = FU (θ ′) with θ 6= θ′. Then by definition,\nθ − θ′ = α(GU (θ′)−GU (θ))\nand so ‖θ − θ′‖ = α ‖GU (θ′)−GU (θ)‖ ≤ αL ‖θ − θ′‖ < ‖θ − θ′‖ ,\na contradiction. We conclude that FU is a diffeomorphism onto its image with continuously differentiable inverse F−1U , hence F −1 U is locally Lipschitz and preserves measure zero sets. It follows by induction that µ(F−kU ({0})) = 0 for all k, and so\nµ(Z) ≤ µ ( ∪∞k=0F−kU ({0}) ) = 0\nsince countable unions of measure zero sets have zero measure. Since θ0 follows a continuous distribution ν, we conclude\nPν ( lim k F k(θ0) = 0 ) = 0\nas required. Since all algorithms were also shown to produce bounded iterates, they almost surely have bounded non-convergent iterates for α, γ sufficiently small. The proof is complete; iterates are plotted for a single run of each algorithm in Figure 3 with α = γ = 0.01." }, { "heading": "G PROOF OF COROLLARY 1", "text": "Corollary 1. There are no measures of progress for reasonable algorithms which produce bounded iterates inM or N .\nProof. Assume for contradiction that a measure of progressM exists for some reasonable algorithm F and consider the iterates θk produced in the gameM orN . We prove that the set of accumulation points of θk is a subset of critical points, following Lange (2013, Prop. 12.4.2). Consider any accumulation point θ̄ = limm→∞ θkm . The sequence M(θk) is monotonically decreasing and bounded below, hence convergent. In particular,\nlim m M(F (θkm)) = lim m M(θkm+1) = lim m M(θkm) .\nBy continuity of M and F , we obtain\nM(F (θ̄)) = M(lim m F (θkm)) = lim m M(F (θkm)) = lim m M(θkm) = M(θ̄)\nand hence F (θ̄) = θ̄. Since F is reasonable, θ̄ must be a critical point. Now the only critical point of M orN is the strict maximum θ̄ = 0, so any accumulation point of θk must be θ̄. The sequence θk is assumed to be bounded, so it must have at least one accumulation point by Bolzano-Weierstrass. A sequence with exactly one accumulation point is convergent, hence θk → θ̄. This is in contradiction with the algorithm being reasonable." } ]
2021
On the Impossibility of Global Convergence in Multi-Loss Optimization
SP:bac034cc8f02b43a03e24f0a8d327c4b68afed09
[ "The authors propose a new neural architecture search algorithm combining Bayesian optimization with the expressive and popular Weisfeiler-Lehman (WL) Graph Kernel. One advantage of using WL is the interpretable results that stem from the nature of how the kernel is computed, namely a propagation scheme through the graph. Combined the derivative of Eq. 3.2, one can extract subgraphs that are directly responsible for increased performance. In a variety of experiments, the authors show not only increased performance of detected architectures but also find subgraphs that are found by other algorithms as well. " ]
Current neural architecture search (NAS) strategies focus only on finding a single, good, architecture. They offer little insight into why a specific network is performing well, or how we should modify the architecture if we want further improvements. We propose a Bayesian optimisation (BO) approach for NAS that combines the Weisfeiler-Lehman graph kernel with a Gaussian process surrogate. Our method optimises the architecture in a highly data-efficient manner: it is capable of capturing the topological structures of the architectures and is scalable to large graphs, thus making the high-dimensional and graph-like search spaces amenable to BO. More importantly, our method affords interpretability by discovering useful network features and their corresponding impact on the network performance. Indeed, we demonstrate empirically that our surrogate model is capable of identifying useful motifs which can guide the generation of new architectures. We finally show that our method outperforms existing NAS approaches to achieve the state of the art on both closedand open-domain search spaces.
[ { "affiliations": [], "name": "Binxin Ru" }, { "affiliations": [], "name": "Xingchen Wan" }, { "affiliations": [], "name": "Xiaowen Dong" }, { "affiliations": [], "name": "Michael A. Osborne" } ]
[ { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc V Le", "Alexey Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": null, "year": 2018 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "arXiv preprint arXiv:1812.09926,", "year": 2018 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "NASBench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "arXiv preprint arXiv:2001.00326,", "year": 2020 }, { "authors": [ "Colin White", "Willie Neiswanger", "Yash Savani" ], "title": "Bananas: Bayesian optimization with neural architectures for neural architecture search", "venue": "arXiv preprint arXiv:1910.11858,", "year": 2019 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric P Xing" ], "title": "Neural architecture search with Bayesian optimisation and optimal transport", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Lizheng Ma", "Jiaxu Cui", "Bo Yang" ], "title": "Deep neural architecture search with deep graph Bayesian optimization", "venue": "In Web Intelligence (WI),", "year": 2019 }, { "authors": [ "Chris Zhang", "Mengye Ren", "Raquel Urtasun" ], "title": "Graph hypernetworks for 
neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Han Shi", "Renjie Pi", "Hang Xu", "Zhenguo Li", "James T Kwok", "Tong Zhang" ], "title": "Multi-objective neural architecture search via predictive network performance optimization, 2019", "venue": null, "year": 2019 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": null, "year": 2018 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Eric Brochu", "Vlad M Cora", "Nando De Freitas" ], "title": "A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1012.2599,", "year": 2010 }, { "authors": [ "Christopher KI Williams", "Carl Edward Rasmussen" ], "title": "Gaussian processes for machine learning, volume 2. MIT press", "venue": null, "year": 2006 }, { "authors": [ "Jonas Mockus", "Vytautas Tiesis", "Antanas Zilinskas" ], "title": "The application of bayesian methods for seeking the extremum", "venue": "Towards global optimization,", "year": 1978 }, { "authors": [ "Nils M Kriege", "Fredrik D Johansson", "Christopher Morris" ], "title": "A survey on graph kernels", "venue": "Applied Network Science,", "year": 2020 }, { "authors": [ "Giannis Nikolentzos", "Giannis Siglidis", "Michalis Vazirgiannis" ], "title": "Graph kernels: A survey", "venue": "arXiv preprint arXiv:1904.12218,", "year": 2019 }, { "authors": [ "Swarnendu Ghosh", "Nibaran Das", "Teresa Gonçalves", "Paulo Quaresma", "Mahantapas Kundu" ], "title": "The journey of graph kernels through two decades", "venue": "Computer Science Review,", "year": 2018 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan Van Leeuwen", "Kurt Mehlhorn", "Karsten M Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Frank Höppner", "Maximilian Jahnke" ], "title": "Enriched weisfeiler-lehman kernel for improved graph clustering of source code", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2020 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Haifeng Jin", "Qingquan Song", "Xia Hu" ], "title": "Auto-keras: An efficient neural architecture search system", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Zhiping Zeng", "Anthony KH Tung", "Jianyong Wang", "Jianhua Feng", "Lizhu Zhou" ], "title": "Comparing stars: On approximating graph edit distance", "venue": "Proceedings of the VLDB Endowment,", "year": 2009 }, { "authors": 
[ "Andries Petrus Engelbrecht", "Ian Cloete", "Jacek M Zurada" ], "title": "Determining the significance of input parameters using sensitivity analysis", "venue": "In International Workshop on Artificial Neural Networks,", "year": 1995 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "\"Why should I trust you?\": Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "arXiv preprint arXiv:1703.01365,", "year": 2017 }, { "authors": [ "Marco Ancona", "Enea Ceolini", "Cengiz Öztireli", "Markus Gross" ], "title": "Towards better understanding of gradient-based attribution methods for deep neural networks", "venue": "arXiv preprint arXiv:1711.06104,", "year": 2017 }, { "authors": [ "Linnan Wang", "Saining Xie", "Teng Li", "Rodrigo Fonseca", "Yuandong Tian" ], "title": "Sample-efficient neural architecture search by learning action space", "venue": null, "year": 2019 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yao Shu", "Wei Wang", "Shaofeng Cai" ], "title": "Understanding architectures learnt by cell-based neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dhanesh Ramachandram", "Michal Lisicki", "Timothy J Shields", "Mohamed R Amer", "Graham W Taylor" ], "title": "Bayesian optimization on graph-structured search spaces: Optimizing deep multimodal fusion architectures", "venue": "Neurocomputing,", "year": 2018 }, { "authors": [ "Jiaxuan You", "J. 
Leskovec", "Kaiming He", "Saining Xie" ], "title": "Graph structure of neural networks", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ilija Radosavovic", "Raj Prateek Kosaraju", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Designing network design spaces", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "Sixth Indian Conference on Computer Vision, Graphics and Image Processing,", "year": 2008 }, { "authors": [ "Changyong Oh", "Jakub Tomczak", "Efstratios Gavves", "Max Welling" ], "title": "Combinatorial bayesian optimization using the graph cartesian product", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2011 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Frank Hutter", "Holger H Hoos", "Kevin Leyton-Brown" ], "title": "Sequential model-based optimization for general algorithm configuration", "venue": "In International conference on learning and intelligent optimization,", "year": 2011 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Hanwen Liang", "Shifeng Zhang", "Jiacheng Sun", "Xingqiu He", "Weiran Huang", "Kechen Zhuang", "Zhenguo Li" ], "title": "Darts+: Improved differentiable architecture search with early stopping", "venue": null, "year": 1909 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Zhihang Li", "Teng Xi", "Jiankang Deng", "Gang Zhang", "Shengzhao Wen", "Ran He" ], "title": "Gp-nas: Gaussian process based neural architecture search", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": null, "year": 2019 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Risi Kondor", "Horace Pan" ], "title": "The multiscale laplacian graph kernel", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nathan de Lara", "Edouard Pineau" ], "title": "A simple baseline algorithm for graph classification", "venue": "arXiv preprint arXiv:1810.09155,", "year": 2018 }, { "authors": [ "Hisashi Kashima", "Koji Tsuda", "Akihiro Inokuchi" ], "title": "Marginalized kernels between labeled graphs", "venue": "In Proceedings of the 20th international conference on machine 
learning", "year": 2003 }, { "authors": [ "Boris Weisfeiler", "Andrei A Lehman" ], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno-Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Thomas Gärtner", "Peter Flach", "Stefan Wrobel" ], "title": "On graph kernels: Hardness results and efficient alternatives", "venue": "In Learning theory and kernel machines,", "year": 2003 }, { "authors": [ "Karsten M Borgwardt", "Hans-Peter Kriegel" ], "title": "Shortest-path kernels on graphs", "venue": "In Fifth IEEE International Conference on Data Mining (ICDM’05), pages 8–pp. IEEE,", "year": 2005 }, { "authors": [ "Chih-Long Lin" ], "title": "Hardness of approximating graph transformation problem", "venue": "In International Symposium on Algorithms and Computation,", "year": 1994 }, { "authors": [ "Carl Edward Rasmussen" ], "title": "Gaussian processes in machine learning", "venue": "In Summer School on Machine Learning,", "year": 2003 }, { "authors": [ "Mehmet Gönen", "Ethem Alpaydin" ], "title": "Multiple kernel learning algorithms", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Algorithms for learning kernels based on centered alignment", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Nils M Kriege", "Pierre-Louis Giscard", "Richard Wilson" ], "title": "On valid optimal assignment kernels and applications to graph classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ahsan Alvi", "Binxin Ru", "Jan-Peter Calliess", "Stephen Roberts", "Michael A Osborne" ], "title": "Asynchronous batch bayesian optimisation with improved local penalisation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of machine learning research,", "year": 2012 }, { "authors": [ "Niranjan Srinivas", "Andreas Krause", "Sham M Kakade", "Matthias Seeger" ], "title": "Gaussian process optimization in the bandit setting: No regret and experimental design", "venue": "arXiv preprint arXiv:0912.3995,", "year": 2009 }, { "authors": [ "Kondor", "Pan", "de Lara", "Pineau" ], "title": "Converting architectures into undirected graphs", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "This results in our dataset of 552 randomly wired neural", "venue": null, "year": 2019 }, { "authors": [ "Liu" ], "title": "2018a), the validation performance on CIFAR-10 is very volatile, and to ameliorate", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural architecture search (NAS) aims to automate the design of good neural network architectures for a given task and dataset. Although different NAS strategies have led to state-of-the-art neural architectures, outperforming human experts' designs on a variety of tasks (Real et al., 2017; Zoph and Le, 2017; Cai et al., 2018; Liu et al., 2018a;b; Luo et al., 2018; Pham et al., 2018; Real et al., 2018; Zoph et al., 2018a; Xie et al., 2018), these strategies behave in a black-box fashion, returning little design insight beyond the final architecture for deployment. In this paper, we introduce the idea of interpretable NAS, extending the learning scope from simply the optimal architecture to interpretable features. These features can help explain the performance of the networks searched and guide future architecture design. We make the first attempt at interpretable NAS by proposing a new NAS method, NAS-BOWL; our method combines a Gaussian process (GP) surrogate with the Weisfeiler-Lehman (WL) subtree graph kernel (we term this surrogate GPWL) and applies it within the Bayesian Optimisation (BO) framework to efficiently query the search space. During search, we harness the interpretable architecture features extracted by the WL kernel and learn their corresponding effects on the network performance based on the surrogate gradient information.\n\n∗Equal contribution. Codes are available at https://github.com/xingchenwan/nasbowl\n\nBesides offering a new perspective on interpretability, our method also improves over the existing BO-based NAS approaches. To accommodate the popular cell-based search spaces, which are non-continuous and graph-like (Zoph et al., 2018a; Ying et al., 2019; Dong and Yang, 2020), current approaches either rely on encoding schemes (Ying et al., 2019; White et al., 2019) or manually designed similarity metrics (Kandasamy et al., 2018), both of which are not scalable to large architectures and ignore the important topological structure of architectures. Another line of work employs graph neural networks (GNNs) to construct the BO surrogate (Ma et al., 2019; Zhang et al., 2019; Shi et al., 2019); however, the GNN design introduces additional hyperparameter tuning, and the training of the GNN also requires a large amount of architecture data, which is particularly expensive to obtain in NAS. Our method, instead, uses the WL graph kernel to naturally handle the graph-like search spaces and capture the topological structure of architectures. Meanwhile, our surrogate preserves the merits of GPs in data-efficiency, uncertainty computation and automated hyperparameter treatment. In summary, our main contributions are as follows:\n\n• We introduce a GP-based BO strategy for NAS, NAS-BOWL, which is highly query-efficient and amenable to the graph-like NAS search spaces. Our proposed surrogate model combines a GP with the WL graph kernel (GPWL) to exploit the implicit topological structure of architectures. It is scalable to large architecture cells (e.g. 32 nodes) and achieves better prediction performance than competing methods. • We propose the idea of interpretable NAS based on the graph features extracted by the WL kernel and their corresponding surrogate derivatives. We show that interpretability helps in explaining the performance of the searched neural architectures. As a singular example of a concrete application, we propose a simple yet effective motif-based transfer learning baseline to warm-start search on new image tasks. 
• We demonstrate that our surrogate model achieves superior performance with far fewer observations in search spaces of different sizes, and that our strategy achieves state-of-the-art performance on both NAS-Bench datasets and open-domain experiments while being much more efficient than comparable methods." }, { "heading": "2 PRELIMINARIES", "text": "Graph Representation of Neural Networks Architectures in popular NAS search spaces can be represented as acyclic directed graphs (Elsken et al., 2018; Zoph et al., 2018b; Ying et al., 2019; Dong and Yang, 2020; Xie et al., 2019), where each graph node represents an operation unit or layer (e.g. a conv3×3-bn-relu in Ying et al. (2019)) and each edge defines the information flow from one layer to another. With this representation, NAS can be formulated as an optimisation problem to find the directed graph and its corresponding node operations (i.e. the directed attributed graph G) that give the best architecture validation performance y(G): G* = argmax_G y(G).\n\nBayesian Optimisation and Gaussian Processes To solve the above optimisation, we adopt BO, which is a query-efficient technique for optimising a black-box, expensive-to-evaluate objective (Brochu et al., 2010). BO uses a statistical surrogate to model the objective and builds an acquisition function based on the surrogate. The next query location is recommended by optimising the acquisition function, which balances exploitation and exploration. We use a GP as the surrogate model in this work, as it can achieve competitive modelling performance with a small amount of query data (Williams and Rasmussen, 2006) and gives an analytic predictive posterior mean µ(G_t | D_{t-1}) and variance k(G_t, G′_t | D_{t-1}) on a heretofore unseen graph G_t given t − 1 observations:\nµ(G_t | D_{t-1}) = k(G_t, G_{1:t-1}) K_{1:t-1}^{-1} y_{1:t-1} and k(G_t, G′_t | D_{t-1}) = k(G_t, G′_t) − k(G_t, G_{1:t-1}) K_{1:t-1}^{-1} k(G_{1:t-1}, G′_t),\nwhere G_{1:t-1} = {G_1, ..., G_{t-1}} and y_{1:t-1} = [y_1, ..., y_{t-1}]^T are the t − 1 observed graphs and objective function values, respectively, and D_{t-1} = {G_{1:t-1}, y_{1:t-1}}. [K_{1:t-1}]_{i,j} = k(G_i, G_j) is the (i, j)-th element of the Gram matrix induced on the training samples by k(·, ·), the graph kernel function. We use Expected Improvement (Mockus et al., 1978) in this work, though our approach is compatible with alternative choices.\n\nGraph Kernels Graph kernels are kernel functions defined over graphs to compute their level of similarity. A generic graph kernel may be represented by the function k(·, ·) over a pair of graphs G and G′ (Kriege et al., 2020):\nk(G, G′) = ⟨φ(G), φ(G′)⟩_H (2.1)\nwhere φ(·) is some feature representation of the graph extracted by the graph kernel and ⟨·, ·⟩_H denotes the inner product in the associated reproducing kernel Hilbert space (RKHS) (Nikolentzos et al., 2019; Kriege et al., 2020). For more detailed reviews of graph kernels, the readers are referred to Nikolentzos et al. (2019), Ghosh et al. (2018) and Kriege et al. (2020).
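To make the surrogate computations above concrete, the following is a minimal sketch of the GP posterior and the Expected Improvement acquisition over graphs. It assumes only that some graph kernel function `kernel(g, g')` is available (e.g. the WL kernel of Sec. 3.1); all names here (`kernel`, `train_graphs`, `noise`) are illustrative assumptions rather than the paper's released code.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(kernel, train_graphs, y_train, test_graphs, noise=1e-6):
    """GP posterior mean/variance over graphs, given a black-box graph kernel."""
    K = np.array([[kernel(g, h) for h in train_graphs] for g in train_graphs])
    K_inv = np.linalg.inv(K + noise * np.eye(len(train_graphs)))
    k_star = np.array([[kernel(g, h) for h in train_graphs] for g in test_graphs])
    mu = k_star @ K_inv @ np.asarray(y_train)                # predictive mean
    k_ss = np.array([kernel(g, g) for g in test_graphs])
    var = k_ss - np.sum((k_star @ K_inv) * k_star, axis=1)   # predictive variance
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, y_best):
    """EI for maximising validation accuracy, given the posterior above."""
    sigma = np.sqrt(var)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)
```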
Algorithm 1 NAS-BOWL Algorithm. Optional steps of the exemplary use of motif-based warm starting (Sec 3.2) are marked in gray italics.\n1: Input: Maximum BO iterations T, BO batch size b, acquisition function α(·), initial observed data on the target task D_0. Optional: past-task query data D_past and surrogate S_past\n2: Output: The best architecture G*_T\n3: Initialise the GPWL surrogate S with D_0\n4: for t = 1, ..., T do\n5: if Pruning based on the past-task motifs then\n6: Compute the motif importance scores (equation 3.4) with S_past/S on D_past/D_t\n7: while |G_t| < B do\n8: Generate a batch of candidate architectures and reject those which contain none of the top 25% good motifs (a procedure similar to that of Fig. 2(a))\n9: end while\n10: else\n11: Generate B candidate architectures G_t\n12: end if\n13: {G_{t,i}}_{i=1}^b = argmax_{G∈G_t} α_t(G | D_{t-1})\n14: Evaluate their validation accuracy {y_{t,i}}_{i=1}^b\n15: D_t ← D_{t-1} ∪ ({G_{t,i}}_{i=1}^b, {y_{t,i}}_{i=1}^b)\n16: Update the surrogate S with D_t\n17: end for\n18: Return the best architecture seen so far, G*_T" }, { "heading": "3 PROPOSED METHOD", "text": "We begin by presenting our proposed algorithm, NAS-BOWL, in Algorithm 1, which has a few key design features, namely the design of a GP surrogate suitable for architecture search (we term this surrogate GPWL) and the method for generating candidate architectures at each BO iteration. We discuss the first in Section 3.1. For architecture generation, we either generate the new candidates by randomly sampling adjacency matrices, or use a mutation algorithm similar to those used in a number of previous works (Kandasamy et al., 2018; Ma et al., 2019; White et al., 2019; Shi et al., 2019): at each iteration, we generate the architectures by mutating a number of the best-performing architectures queried so far. Generating candidate architectures in this way enables us to exploit prior information about the best architectures observed so far to explore the large search space more efficiently. We report NAS-BOWL with both strategies in our experiments. Finally, to demonstrate the new possibilities opened by our work, we give an exemplary practical use of interpretable motifs for transfer learning in Algorithm 1, which is elaborated in Sec 3.2." }, { "heading": "3.1 SURROGATE AND GRAPH KERNEL DESIGN", "text": "To enable the GP to work effectively on the graph-like architecture search space, selecting a suitable kernel function is arguably the most important design decision. We propose to use the Weisfeiler-Lehman (WL) graph kernel (Shervashidze et al., 2011) to enable the direct definition of a GP surrogate on the graph-like search space. The WL kernel compares two directed graphs based on both local and global structures. It starts by comparing the node labels of both graphs via a base kernel k_base(φ_0(G), φ_0(G′)), where φ_0(G) denotes the histogram of features at level h = 0 (i.e. node features) in the graph, and h is both the index of the WL iteration and the depth of the subtree features extracted. For the WL kernel with h > 0, as shown in Fig. 1, it then proceeds to collect features at h = 1 by aggregating neighbourhood labels, and compares the two graphs with k_base(φ_1(G), φ_1(G′)) based on the subtree structures of depth 1 (Shervashidze et al., 2011; Höppner and Jahnke, 2020).\n\nThe procedure then repeats until the highest specified iteration level h = H, and the resulting WL kernel is given by:\nk_WL^H(G, G′) = Σ_{h=0}^{H} k_base(φ_h(G), φ_h(G′)). (3.1)\nIn the above equation, k_base is a base kernel (such as a dot product) over the vector feature embedding. As h increases, the WL kernel captures higher-order features which correspond to increasingly larger neighbourhoods, and the features at each h are concatenated to form the final feature vector φ(G) = [φ_0(G), ..., φ_H(G)]. The readers are referred to App. A for a more detailed algorithmic description of the WL kernel.
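As a minimal illustration of equation (3.1), the sketch below computes the WL kernel from per-level feature histograms, using a plain dot product as k_base. It assumes each graph has already been summarised as a dictionary {h: {feature: count}}; producing these histograms is itself sketched alongside Algorithm 2 in App. A, and this representation is an illustrative assumption rather than the paper's implementation.

```python
def wl_kernel(hist1, hist2, H):
    """Equation (3.1): sum of dot-product base kernels over WL levels h = 0..H."""
    k = 0.0
    for h in range(H + 1):
        f1, f2 = hist1.get(h, {}), hist2.get(h, {})
        # k_base(phi_h(G), phi_h(G')): dot product of the two level-h histograms
        k += sum(count * f2[feat] for feat, count in f1.items() if feat in f2)
    return k

# Toy usage: two cells sharing conv3x3 nodes at level h = 0.
print(wl_kernel({0: {"conv3x3": 2, "maxpool3x3": 1}}, {0: {"conv3x3": 1}}, H=0))  # 2.0
```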
We argue that WL is desirable for three reasons. First, in contrast to many ad hoc approaches, WL is established with proven successes on labelled and directed graphs, by which networks are represented. Second, the WL representation of graphs is expressive, topology-preserving yet interpretable: Morris et al. (2019) show that WL is as powerful as standard GNNs in terms of discrimination power. However, GNNs require a relatively large amount of training data and are thus more data-inefficient (we validate this in Sec. 5). Also, the features extracted by GNNs are harder to interpret compared to those extracted by WL. Note that the WL kernel by itself only measures the similarity between graphs and does not aim to select useful substructures explicitly. It is our novel deployment of the WL procedure (App. A) for the NAS application that leads to the extraction of interpretable features while comparing different architectures. We further make smart use of these network features to help explain the architecture performance in Sec 3.2. Finally, WL is efficient and scalable: denoting {n, m} as the number of nodes and edges respectively, computing the Gram matrix on N training graphs scales as O(NHm + N^2 Hn) (Shervashidze et al., 2011). As we show in App. E.3, in typical cell-based spaces H ≤ 3 suffices, suggesting that the kernel computation cost is likely eclipsed by the O(N^3) scaling of the GP that we incur nonetheless. This is to be contrasted with approaches such as the path encoding in White et al. (2019), which scales exponentially with n without truncation, and the edit distance kernel in Jin et al. (2019), whose exact solution is NP-complete (Zeng et al., 2009).\n\nWith the above-mentioned merits, the incorporation of the WL kernel permits the usage of GP-based BO on various NAS search spaces. This enables practitioners to harness the rich literature of GP-based BO methods for hyperparameter optimisation and redeploy them on NAS problems. Most prominently, the use of a GP surrogate frees us from hand-picking the WL hyperparameter H, as we can automatically learn its optimal value by maximising the Bayesian marginal likelihood. As we will justify in Sec. 5 and App. E.3, this process is extremely effective. This is a further major advantage of our method: it has no inherent hyperparameters that require manual tuning. This reaffirms our belief that a practical NAS method should itself require minimal tuning, as it is almost impossible to run a traditional hyperparameter search given the vast resources required. Other enhancements, such as improving the expressiveness of the surrogate by combining multiple types of kernels, are briefly investigated in App. C. We find the amount of performance gain depends on the NAS search space, and a WL kernel alone suffices for common cell-based spaces." }, { "heading": "3.2 INTERPRETABLE NAS", "text": "The unique advantage of the WL kernel is that it extracts interpretable features, i.e. network motifs, from the original graphs. This, in combination with our GP surrogate, enables us to predict the effect of the extracted features on the architecture performance directly by examining the derivatives of the GP predictive mean w.r.t. the features. Derivatives as tools to interpret ML models have been used previously (Engelbrecht et al., 1995; Koh and Liang, 2017; Ribeiro et al., 2016) but, given the GP, we can compute these derivatives analytically.
Following the notation in Sec. 2, the derivative with respect to φ_j(G_t), the j-th element of φ(G_t) (the feature vector of a graph G_t), is Gaussian with expected value:\nE_{p(y|G_t, D_{t-1})}[∂y/∂φ_j(G_t)] = ∂µ/∂φ_j(G_t) = (∂⟨φ(G_t), Φ_{1:t-1}⟩/∂φ_j(G_t)) K_{1:t-1}^{-1} y_{1:t-1} (3.2)\nwhere Φ_{1:t-1} = [φ(G_1), ..., φ(G_{t-1})]^T is the feature matrix stacked from the feature vectors of the previous observations. Intuitively, since each φ_j(G_t) denotes the count of a WL feature in G_t, its derivative naturally encodes the direction and sensitivity of the objective (in this case the predicted validation accuracy) with respect to that feature. Computationally, since the costly term, K_{1:t-1}^{-1} y_{1:t-1}, is already computed in the posterior mean, the derivatives can be obtained at minimal additional cost.\n\n(a) Best and worst motifs identified on the N201 (CIFAR-10) dataset using 300 training samples (left) and on the DARTS search space after 3 GPU days of search by NAS-BOWL (right). For the DARTS space, the motif boxed in pink is featured in all optimal cells found by the various NAS methods in Fig. 3.\n\n(b) Validation accuracy distributions of the validation architectures on different tasks of N201 (left 3) and DARTS (right). all denotes the entire validation set, while good/bad denote the distributions of the architectures with at least 1 best/worst motif, respectively; dashed lines denote the distribution medians. Note that in all cases, the good subset includes the population max, and in N201, the patterns trained solely on the CIFAR-10 task also transfer well to CIFAR-100/ImageNet16.\n\nFigure 2: Motif discovery on N201 (CIFAR-10) and DARTS spaces.\n\nBy evaluating the aforementioned derivative at some graph G, we obtain the local sensitivities of the objective function around φ(G). To achieve the global attribution of network performance w.r.t. interpretable features that we are ultimately interested in, we take inspiration from the principled averaging approach featured in many gradient-based attribution methods (Sundararajan et al., 2017; Ancona et al., 2017), computing and integrating the aforementioned derivatives over all training samples to obtain the averaged gradient (AG). The AG of the j-th feature φ_j is given by:\nAG(φ_j) = E_G[∂µ/∂φ_j(G)] = ∫_{φ_j(G)>0} (∂µ/∂φ_j(G)) p(φ_j(G)) dφ_j(G). (3.3)\nFortunately, in the WL kernel φ_j(·) ∈ Z_{≥0} for all j, so p(φ_j(·)) is discrete and the expectation integral reduces to a weighted summation over the "prior" distribution p(φ_j(·)). To approximate p(φ_j(·)), we count the number of occurrences of each feature value φ_j(G_n) in all the training graphs {G_1, ..., G_{t-1}} where φ_j(·) is present, and assign weights according to its frequency of occurrence. Formally, denoting G as the subset of the training graphs in each element of which φ_j > 0, we have\nAG(φ_j) ≈ [Σ_{n=1}^{|G|} w_n(φ_j) ∂µ/∂φ_j(G_n)] / [Σ_{n=1}^{|G|} w_n(φ_j)], where w_n(φ_j) = (1/|G|) Σ_{n′=1}^{|G|} δ(φ_j(G_n), φ_j(G_{n′})) (3.4)\nwhere δ(·, ·) is the Kronecker delta function and |·| the cardinality of a set. Finally, we additionally incorporate the uncertainty of the derivative estimates by normalising AG with the square root of the empirical variance (EV), to push high-variance (hence less trustworthy as a whole) gradient estimates closer to 0. EV may be computed straightforwardly:\nEV(φ_j) = V_G[∂µ/∂φ_j(G)] = E_G[(∂µ/∂φ_j(G))^2] − (E_G[∂µ/∂φ_j(G)])^2. (3.5)\nThe resultant derivatives w.r.t. the interpretable features, AG(φ_j)/√EV(φ_j), allow us to directly identify the motifs most influential on network performance.
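A minimal sketch of the feature scoring in equations (3.3)-(3.5) follows. It assumes `Phi[n, j]` holds the WL feature counts of the n-th training graph and `grads[n, j]` the analytic derivative of the GP posterior mean w.r.t. feature j evaluated at that graph (equation 3.2); both arrays and their names are illustrative. EV is computed here as the unweighted empirical variance over the graphs that contain the feature.

```python
import numpy as np

def motif_scores(Phi, grads):
    """Return AG(phi_j) / sqrt(EV(phi_j)) for every WL feature j."""
    n_graphs, n_feats = Phi.shape
    scores = np.zeros(n_feats)
    for j in range(n_feats):
        idx = np.where(Phi[:, j] > 0)[0]        # the subset G: graphs with phi_j > 0
        if len(idx) == 0:
            continue
        vals, counts = np.unique(Phi[idx, j], return_counts=True)
        # w_n: frequency of each observed feature count among the graphs in G (eq. 3.4)
        w = counts[np.searchsorted(vals, Phi[idx, j])] / len(idx)
        g = grads[idx, j]
        ag = np.sum(w * g) / np.sum(w)          # averaged gradient, AG (eq. 3.4)
        ev = np.mean(g ** 2) - np.mean(g) ** 2  # empirical variance, EV (eq. 3.5)
        scores[j] = ag / np.sqrt(ev + 1e-12)
    return scores
```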
By considering the presence or absence of such motifs, we may explain the competitiveness of an architecture or the lack of it, provided the surrogate is accurate, which we show to be the case in Sec. 5. More importantly, beyond passive explanation, we can also actively use these features as building blocks to facilitate the manual construction of promising networks, or as priors to prune the massive NAS search space, which we believe would be of interest to both human designers and NAS practitioners. To validate this, we train our GPWL on architectures drawn from various search spaces, rank all the features based on their computed derivatives, and show the motifs with the most positive and negative derivatives (hence the most and least desirable features)1.\n\n1For reproducibility, we include detailed procedures and selection criteria in App. D.1.\n\nWe present the extracted network motifs on the CIFAR-10 task of NAS-Bench-201 (N201) (Dong and Yang, 2020) and the DARTS search space in Fig. 2(a). The motifs extracted on the other N201 image tasks (CIFAR-100/ImageNet16) and on NAS-Bench-101 (N101) (Ying et al., 2019) are shown in App. D.2. The results reveal some interesting insights into network performance: for example, almost every good motif in N201 contains conv_3×3, and all but one of the good motifs in the DARTS results contain at least one separable conv_3×3. In fact, this preference for (separable) convolutions is almost universally observed in many popular NAS methods (Liu et al., 2018a; Shi et al., 2019; Wang et al., 2019; Pham et al., 2018) and ours: besides skip links, the operations in their best cells are dominated by separable convolutions (Fig. 3). Moving from node operation labels to higher-order topological features, in both search spaces our GPWL consistently finds a series of high-performing motifs that entail parallel connections from the input to multiple convs, often of different filter sizes – this corresponds to the grouped convolution unit critical to the success of, e.g., ResNeXt (Xie et al., 2017). A specific example is the boxed motif in Fig. 2(a), which combines parallel convs with a skip link. This motif or other highly similar ones are consistently present in the optimal cells found by many NAS methods including ours (as shown in Fig. 3), despite the disparity in their search strategies. This suggests a correlation between these motifs and good architecture performance. Another observation is that a majority of the important motifs for both search spaces in Fig. 2(a) involve the input. From this and our previous remarks on the consensus amongst NAS methods in favouring certain operations and connections, we hypothesise that, at least for a cell-based search space, the network performance might be determined more by the connections in the vicinity of the inputs (on which the optimal cells produced by different NAS methods are surprisingly consistent) than by other parts of the network (on which they differ). This phenomenon is partly observed in Shu et al. (2019), where the authors found that NAS algorithms tend to favour architecture cells in which most intermediate nodes have direct connections with the input nodes. The verification of this hypothesis is beyond the scope of this paper, but this shows the potential of our GPWL in discovering novel yet interpretable network features, with potential implications for both NAS and manual network design.\n\nGoing beyond the qualitative arguments above, we now quantitatively validate the informativeness of the motifs discovered. 
After identifying the motifs, in N201 we randomly draw another 1,000 validation architectures unseen by the surrogate. Given the motifs identified in Fig. 2(a), an architecture is labelled either "good" (≥ 1 good motif), "bad" (≥ 1 bad motif) or neither. Note that if an architecture contains both good and bad motifs, it is both "good" and "bad". As demonstrated in Fig. 2(b), we indeed find that the presence of the important motifs is predictive of network performance. A similar conclusion holds for the DARTS space. However, due to the extreme cost of sampling the open-domain space, we make two modifications to our strategy. Firstly, the training samples are taken from a BO run instead of being randomly sampled. Secondly, we reuse the training samples for Fig. 2(b) instead of sampling and evaluating hold-out sets. The key takeaway here is that motifs are effective in identifying promising and unpromising candidates, and thus can be used to help NAS agents partition the vast combinatorial search space, which is often considered a key challenge of NAS, and focus on the most promising sub-regions. More importantly, the motifs are also transferable: while the patterns in Fig. 2(a) are trained solely on CIFAR-10, they generalise well to the CIFAR-100/ImageNet16 tasks – this is unsurprising, as one key motivation of cell-based search spaces is precisely to improve the transferability of the learnt structure across related tasks (Zoph et al., 2018a). Given that motifs are the building blocks of the cells, we expect them to transfer well, too.\n\nWith this, we propose a simple transfer learning baseline as a singular demonstration of how motifs can be practically useful for NAS. Specifically, we exploit the motifs identified on one task to warm-start the search on a related new task. With reference to Algorithm 1, under the transfer learning setup, we use a GPWL surrogate trained on the query data of a past related task, S_past, as well as the surrogate on the new target task, S, to compute the AG of the motifs present in the queried architectures (equation 3.4) and identify the most positively influential motifs, similarly to Fig. 2(a) (Line 6). We then use these motifs to generate a set of candidate architectures G_t for optimising the acquisition function at every BO iteration on the new task. Specifically, we only accept a candidate if it contains at least one of the top 25% good motifs (i.e. the pruning rule); a minimal sketch of this rejection step is given below. Finally, with more query data obtained on the target task, we dynamically update the surrogate S and the motif scores, to mitigate the risk of discarding motifs purely based on past-task data. Through this, we force the BO agent to select from a smaller subset of architectures deemed more promising on a previous task, thereby "warm starting" the new task. We briefly validate this proposal in the N201 experiments of Sec. 5.
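The following sketch illustrates the rejection step just described (lines 5-9 of Algorithm 1). The helpers `sample_architecture` and `contains_motif` are assumed to exist and are purely illustrative.

```python
def generate_pruned_candidates(sample_architecture, contains_motif,
                               good_motifs, pool_size):
    """Rejection-sample candidates until `pool_size` of them carry a good motif."""
    pool = []
    while len(pool) < pool_size:
        cand = sample_architecture()   # e.g. mutate a top observed architecture
        # pruning rule: keep only candidates with at least one good past-task motif
        if any(contains_motif(cand, m) for m in good_motifs):
            pool.append(cand)
    return pool
```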
" }, { "heading": "4 RELATED WORK", "text": "In terms of NAS strategies, there have been several recent attempts at using BO (Kandasamy et al., 2018; Ying et al., 2019; Ma et al., 2019; Shi et al., 2019; White et al., 2019). To overcome the limitations of conventional BO for discrete and graph-like NAS search spaces, Kandasamy et al. (2018) use optimal transport to design a similarity measure among neural architectures, while Ying et al. (2019) and White et al. (2019) suggest encoding schemes to characterise neural architectures with discrete and categorical variables. Yet, these methods are either computationally inefficient or not scalable to large architectures/cells (Shi et al., 2019; White et al., 2019). Alternatively, several works use graph neural networks (GNNs) as the surrogate model (Ma et al., 2019; Zhang et al., 2019; Shi et al., 2019) to capture the graph structure of neural networks. However, the design of the GNN introduces many additional hyperparameters to be tuned, and the GNN requires a relatively large amount of training data to achieve decent prediction performance, as shown in Sec. 5. Another related work (Ramachandram et al., 2018) applies GP-based BO with diffusion kernels to design multimodal fusion networks; however, it assigns each possible architecture to a node in an undirected super-graph, and the need to construct and compute on such super-graphs limits the method to relatively small search spaces. In terms of interpretability, Shu et al. (2019) study the connection patterns of the network cells found by popular NAS methods and find a shared tendency towards choosing wide and shallow cells, which enjoy faster convergence. You et al. (2020), by representing neural networks as relational graphs, observe that network performance depends on the clustering coefficient and average path length of the graph representation. Radosavovic et al. (2020) propose a series of manual design principles, derived from extensive empirical comparison, to refine a ResNet-based search space. Nevertheless, all these works do not offer a NAS strategy, and rely purely on human experts to derive insights about NAS architectures from extensive empirical studies. In contrast, our method learns interpretable feature information without human input while searching for the optimal architecture." }, { "heading": "5 EXPERIMENTS", "text": "Surrogate Regression Performance We examine the regression performance of GPWL on several NAS datasets: NAS-Bench-101 (N101) on CIFAR-10 (Ying et al., 2019), and N201 on CIFAR-10, CIFAR-100 and ImageNet16. As both datasets only contain CIFAR-sized images and relatively small architecture cells2, to further demonstrate the scalability of our proposed method to much larger architectures, we also construct a dataset with 547 architectures sampled from the randomly wired graph generator described in Xie et al. (2019); each architecture cell has 32 operation nodes, and all the architectures are trained on the Flowers102 dataset (Nilsback and Zisserman, 2008). Similarly to Ying et al. (2019), Dong and Yang (2020) and Shi et al. (2019), we use the Spearman rank correlation between the predicted and the true validation accuracy as the performance metric, as what matters for comparing architectures is their relative performance ranking.\n\nWe compare the regression performance against various competitive baselines, including NASBOT (Kandasamy et al., 2018), GPs with path encodings (PathEncode) (White et al., 2019), GNN (Shi et al., 2019), which uses a combination of a graph convolutional network and a final Bayesian linear regression layer as the surrogate, and COMBO (Oh et al., 2019)3, which uses a GP with a diffusion kernel on a graph representation of the combinatorial search space. We report the results in Fig. 4: our GPWL surrogate clearly outperforms all competing methods on all the NAS datasets with much
less training data: specifically, GPWL requires at least 3 times less data than GNN and PathEncode and 10 times less than COMBO on the N201 datasets. It is also able to achieve high rank correlation on datasets with larger search spaces, such as N101 and Flowers102, while requiring 20 times less data than GNN on Flowers102 and 30 times less data on N101. Moreover, in BO, uncertainty estimates are as important as the prediction accuracy; we show that GPWL produces sound uncertainty estimates in App. E.1. Finally, in addition to these surrogates previously used in NAS, we also demonstrate that our surrogate compares favourably against other popular graph kernels, as discussed in App. E.2.\n\n2In N101 and N201, each cell is a graph of 7 and 4 nodes, respectively.\n3We choose COMBO as methodologically it is very close to the most related work (Ramachandram et al., 2018), whose implementation is not publicly available.\n\nArchitecture Search on NAS-Bench Datasets We benchmark our proposed method, NAS-BOWL, against a range of existing methods, including random search, TPE (Bergstra et al., 2011), reinforcement learning (rl) (Zoph and Le, 2016), BO with SMAC (smacbo) (Hutter et al., 2011), regularised evolution (Real et al., 2019) and BO with a GNN surrogate (gcnbo) (Shi et al., 2019). On N101, we also include BANANAS (White et al., 2019), which claims state-of-the-art performance. In both NAS-Bench datasets, validation errors over different random seeds are provided, thereby creating noisy objective functions. We perform experiments using the deterministic setup described in White et al. (2019), where the validation errors over multiple seeds are averaged to eliminate stochasticity, and also report results with noisy objective functions. We show the test results for both setups in Fig. 5 and the validation results in App. F.3. In these figures, we use NASBOWLm and NASBOWLr to denote NAS-BOWL with architectures generated by mutating good observed candidates and by random sampling, respectively. Similarly, BANANASm/BANANASr denote BANANAS with mutation/random sampling (White et al., 2019). On the CIFAR-100/ImageNet tasks of N201, we also include NASBOWLm(TL), which is NASBOWLm with additional knowledge of motifs transferred from a previous run on the CIFAR-10 task of N201, used to prune the candidate architectures as described in Sec. 3.2. The readers are referred to App. F.2 for the detailed setups.\n\nIt is evident that NAS-BOWL outperforms all baselines on all NAS-Bench tasks, achieving both the lowest validation and test errors. The experiments with noisy observations further show that even in the more realistic setup with noisy objective function observations, NAS-BOWL still performs very well, as it inherits the GP's robustness against noise. The preliminary experiments on transfer learning also show that motifs contain extremely useful prior knowledge that may be transferred to warm-start a related task: notice that even the architectures at the very start, without any search, already perform well – this is particularly appealing, as in a realistic setting, searching directly on large-scale datasets like ImageNet from scratch is extremely expensive. While further experimental validation on a wider range of search spaces, or on tasks of varying degrees of similarity, is required to fully verify the effectiveness of this particular method, we feel that, as an exemplary use of motifs, the promising preliminary results here already demonstrate its usefulness. Finally, we perform ablation studies in App. F.3.
Open-domain Search We finally test NAS-BOWL on the open-domain search space from DARTS (Liu et al., 2018a). We allow a maximum budget of 150 queries, and we follow the DARTS setup (see App. G for details): during the search phase, instead of training the final 20-cell architectures, we train small 8-cell architectures for 50 epochs. Whereas this results in significant computational savings, it also degrades the rank correlation between performance at the search and evaluation stages. This makes our setup more challenging than those of most other sample-based methods, which train for longer epochs and/or search on the final 20-cell architectures directly. Beyond this, we also search for a single cell structure and use it for both cell types (normal and reduction) defined in the DARTS search space. We also use a default maximum of 4 operation blocks to fit the training on a single GPU; this is in contrast to, e.g., ENAS and LaNet, which allow up to 5 and 7 blocks, respectively.\n\nWe compare NAS-BOWL with other methods in Table 1; the best cell found by NAS-BOWL is already shown in Fig. 3 in Sec. 3.2. To ensure fairness of comparison, we only include previous methods with comparable search spaces and training techniques, and exclude methods that train much longer and/or use additional tricks (Liang et al., 2019; Cai et al., 2019). It is evident that NAS-BOWL finds very promising architectures despite operating in a more restricted setup. Consuming 3 GPU-days, NAS-BOWL is comparable in computing cost to the one-shot methods, but performs on par with or better than methods that consume orders of magnitude more resources, such as LaNet, which is 50× more costly. Furthermore, it is worth noting that, if desired, NAS-BOWL may benefit from any higher computing budget by relaxing the aforementioned restrictions (e.g. training longer on larger architectures during search). Finally, while we use a single GPU, NAS-BOWL can easily be deployed on parallel computing resources to further reduce wall-clock time." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a novel BO-based NAS strategy, NAS-BOWL, which uses a GP surrogate with the WL graph kernel. We show that our method performs competitively on both closed- and open-domain experiments with high sample efficiency. More importantly, our method represents a first step towards interpretable NAS, where we propose to learn interpretable network features to help explain the architectures found as well as guide the search on new tasks. The potential for further work is ample: we may extend the afforded interpretability by discovering more use-cases, such as multi-objective settings and broader search spaces. Moreover, while the current work deals primarily with practical NAS, we feel a thorough theoretical analysis of, e.g., convergence guarantees would also be beneficial both for this work and for the broader NAS community in general." }, { "heading": "A ALGORITHMS", "text": "Description of the WL kernel Complementary to Fig. 1 in the main text, in this section we include a formal, algorithmic description of the WL procedure in Algorithm 2.\n\nAlgorithm 2 Weisfeiler-Lehman subtree kernel computation between two graphs (Shervashidze et al., 2011)\n1: Input: Graphs {G_1, G_2}, maximum WL iterations H\n2: Output: The kernel function value k between the graphs\n3: Initialise the feature vectors {φ(G_1), φ(G_2)} with the respective counts of the original node labels, i.e. the h = 0 WL features (e.g. φ_i(G_1) is the count of the i-th node label in graph G_1)\n4: for h = 1, ..., H do\n5: Assign a multiset label M_h(v) to each node v in G, consisting of the multiset {l_{h-1}(u) | u ∈ N(v)}, where l_{h-1}(v) is the node label of node v at the (h-1)-th WL iteration and N(v) denotes the neighbour nodes of node v\n6: Sort the elements in M_h(v) in ascending order and concatenate them into a string s_h(v)\n7: Add l_{h-1}(v) as a prefix to s_h(v)\n8: Compress each string s_h(v) using a hash function f such that f(s_h(v)) = f(s_h(w)) iff s_h(v) = s_h(w) for two nodes {v, w}\n9: Set l_h(v) := f(s_h(v)) for all v ∈ G\n10: Concatenate φ(G_1), φ(G_2) with the respective counts of the new labels\n11: end for\n12: Compute the inner product between the feature vectors in the RKHS: k = ⟨φ(G_1), φ(G_2)⟩_H
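A minimal Python rendering of Algorithm 2's relabelling loop is sketched below, assuming a graph is given as a dict of string node labels plus a dict of neighbour lists (an illustrative layout, not the paper's implementation). The hash compression of step 8 is omitted: the concatenated string itself serves as the new label, which is equivalent up to efficiency and keeps labels consistent across graphs.

```python
from collections import Counter

def wl_histograms(labels, nbrs, H):
    """Return {h: Counter of node labels after h WL iterations} for h = 0..H."""
    hist = {0: Counter(labels.values())}     # step 3: h = 0 label counts
    cur = dict(labels)
    for h in range(1, H + 1):
        new = {}
        for v in cur:
            # steps 5-7: own label prefixed to the sorted multiset of neighbour labels
            new[v] = cur[v] + "|" + ",".join(sorted(cur[u] for u in nbrs.get(v, [])))
        cur = new
        hist[h] = Counter(cur.values())      # step 10: counts of the new labels
    return hist
```

These histograms can be fed directly to the `wl_kernel` sketch given after equation (3.1) to realise step 12.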
" }, { "heading": "B DETAILED REASONS FOR USING THE WL KERNEL", "text": "We argue that the WL kernel is a desirable choice for the NAS application for the following reasons.\n\n1. The WL kernel is able to compare labelled and directed graphs of different sizes. As discussed in Section 2, architectures in almost all popular NAS search spaces (Ying et al., 2019; Dong and Yang, 2020; Zoph et al., 2018b; Xie et al., 2019) can be represented as directed graphs with node/edge attributes. Thus, the WL kernel can be applied to them directly. On the other hand, many graph kernels either do not handle node labels (Shervashidze et al., 2009) or are incompatible with directed graphs (Kondor and Pan, 2016; de Lara and Pineau, 2018). Converting architectures into undirected graphs can result in the loss of valuable information, such as the direction of data flow in the architecture (we show this in Section 5).\n\n2. The WL kernel is expressive yet highly interpretable. The WL kernel is able to capture substructures that go from local to global scale with increasing h values. Such multi-scale comparison is similar to that enabled by a Multiscale Laplacian kernel (Kondor and Pan, 2016) and is desirable for architecture comparison. This is in contrast to graph kernels such as those of Kashima et al. (2003) and Shervashidze et al. (2009), which only focus on local substructures, or those based on graph spectra (de Lara and Pineau, 2018), which only look at global connectivity. Furthermore, the WL kernel is derived directly from the Weisfeiler-Lehman graph isomorphism test (Weisfeiler and Lehman, 1968), which is shown to be as powerful as a GNN in distinguishing non-isomorphic graphs (Morris et al., 2019; Xu et al., 2018). However, the higher-order graph features extracted by GNNs are hard for humans to interpret, whereas the subtree features learnt by the WL kernel (e.g. the h = 0 and h = 1 features in Fig. 1) are easily interpretable.\n\n3. The WL kernel is relatively efficient and scalable. Other expressive graph kernels are often prohibitive to compute: for example, defining {n, m} to be the number of nodes and edges in a graph, the random walk (Gärtner et al., 2003), shortest path (Borgwardt and Kriegel, 2005) and graphlet kernels (Shervashidze et al., 2009) incur complexities of O(n^3), O(n^4) and O(n^k) respectively, where k is the maximum graphlet size. Another approach based on computing the architecture edit-distance (Jin et al., 2019) is also expensive: its exact solution is NP-complete (Zeng et al., 2009) and is provably difficult to approximate (Lin, 1994). 
On the other hand, the WL kernel only entails a complexity4 of O(Hm). This is in contrast to, e.g., the path encoding in White et al. (2019), which without truncation scales exponentially with n.\n\n4Consequently, naively computing the Gram matrix consisting of pairwise kernels between all pairs of N graphs is O(N^2 Hm), but this can be further improved to O(NHm + N^2 Hn). See Morris et al. (2019)." }, { "heading": "C COMBINING DIFFERENT KERNELS", "text": "In general, the sum or product of valid kernels gives another valid kernel; as such, combining different kernels to yield a better-performing kernel is commonly used in the GP and Multiple Kernel Learning (MKL) literature (Rasmussen, 2003; Gönen and Alpaydin, 2011). In this section, we conduct a preliminary discussion of its usefulness to GPWL. As a singular example, we consider an additive kernel that is a linear combination of the WL kernel and the MLP kernel:\nk_add(G_1, G_2) = α k_WL(G_1, G_2) + β k_MLP(G_1, G_2) s.t. α + β = 1, α, β ≥ 0 (C.1)\nwhere α, β are the kernel weights. We choose WL and MLP because we expect them to extract diverse information: whereas WL processes the graph node information directly, MLP considers the spectrum of the graph Laplacian matrix, which often reflects global properties such as the topology and the connectivity of the graph. We expect the more diverse features captured by the constituent kernels to lead to a more effective additive kernel. While it is possible to determine the weights in a more principled way, such as jointly optimising them against the GP log-marginal likelihood, in this example we simply set α = 0.7 and β = 0.3. We then perform regression on the NAS-Bench-101 and Flowers102 datasets following the setup in Sec. 5. We repeat each experiment 20 times and report the mean and standard deviation in Table 2, and we show the uncertainty estimates of the additive kernel in Fig. 6. In both search spaces the additive kernel outperforms the constituent kernels, but the gain over the WL kernel is marginal. Interestingly, while MLP performs poorly on its own, the complementary spectral information it extracts can be helpful when used alongside our WL kernel. Generally, we hypothesise that the benefit of combining different kernels will increase as the search space grows in complexity (e.g., larger graphs, more permitted edge connections), and we defer a more comprehensive discussion to future work. As a starting point, one concrete proposal would be to apply an MKL method such as ALIGNF (Cortes et al., 2012) in our context directly.
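A minimal sketch of equation (C.1) with the fixed weights used here follows; `k_wl` and `k_mlp` stand for the WL and Multiscale Laplacian kernel functions and are assumed to be supplied.

```python
def additive_kernel(k_wl, k_mlp, g1, g2, alpha=0.7, beta=0.3):
    """Equation (C.1): convex combination of the WL and MLP kernels."""
    assert abs(alpha + beta - 1.0) < 1e-9 and alpha >= 0 and beta >= 0
    return alpha * k_wl(g1, g2) + beta * k_mlp(g1, g2)
```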
" }, { "heading": "D FURTHER DETAILS ON INTERPRETABILITY", "text": "" }, { "heading": "D.1 SELECTION PROCEDURE FOR MOTIF DISCOVERY", "text": "NAS-Bench datasets In the closed-domain NAS-Bench datasets (including both NAS-Bench-101 and NAS-Bench-201), we randomly sample 300 architectures from their respective search spaces, fit the GPWL surrogate, and compute the derivatives of all the features that appeared in the training set.\n\n(a) Best and worst motifs identified on the N101 dataset (left), and the validation accuracy distributions of the validation architectures (right).\n\n(b) Best and worst motifs identified on the CIFAR-100 task of the N201 dataset (left), and the validation accuracy distributions transferred to CIFAR-10, CIFAR-100 and ImageNet (right 3).\n\n(c) Best and worst motifs identified on the ImageNet task of the N201 dataset (left), and the validation accuracy distributions transferred to CIFAR-10, CIFAR-100 and ImageNet (right 3).\n\nFigure 7: Motif discovery on N101 and the CIFAR-100 and ImageNet tasks of N201. Note that since N101 is trained on CIFAR-10 only, it is not possible to show results transferred to another task. All symbols and legends have the same meaning as in Fig. 2 in the main text.\n\nAs a regularity constraint, we then filter the motifs to retain only those that appear more than 10 times, to ensure that the derivative estimates produced by the GPWL surrogate are accurate enough and not swayed by noise/outliers. We finally rank the features by the numerical values of their derivatives, and present the top and bottom quantiles of the features as "best motifs" and "worst motifs" respectively, in Fig. 2 in the main text and Fig. 7 in Sec. D.2.\n\nDARTS Search Space In the open-domain search space, it is impossible to sample efficiently, since each sample drawn requires us to evaluate the architecture in full, which is computationally prohibitive. Instead, we simply reuse the GPWL surrogate trained in one run of NAS-BOWL in the open-domain experiments described in Sec. 5, which contains 120 architecture-validation accuracy pairs5 evaluated over 3 GPU days. Due to the smaller number of available samples, here we only require each feature to appear at least twice as a prerequisite, and we select the top and bottom 15% of the features to be presented in the graphs. All other treatments are identical to the descriptions above.\n\n5The architecture here refers to the small architecture evaluated during the search stage, instead of the final architecture at the evaluation stage; refer to Sec. 5 and App. G.
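The filtering-and-ranking step described above can be sketched as follows, assuming `scores[j]` comes from the motif-scoring sketch in Sec. 3.2 and `counts[j]` is the number of training graphs containing feature j; the names and the array layout are illustrative.

```python
import numpy as np

def select_motifs(scores, counts, min_count=10, quantile=0.25):
    """Filter rare features, then take the top/bottom quantiles by score."""
    keep = np.where(counts >= min_count)[0]   # regularity constraint
    order = keep[np.argsort(scores[keep])]    # ascending by derivative score
    n = max(1, int(quantile * len(order)))
    return order[-n:][::-1], order[:n]        # (best motifs, worst motifs)
```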
To give further concrete evidence of the workings and advantage of the proposed method on N201, in Fig. 8 we show the top-4 motifs in terms of the derivatives computed from one experiment on CIFAR-10 only, according to Sec. 3.2, and the ground-truth best architectures in each of the three tasks included. In this case, while the optimal cells for the different tasks are similar (but not identical) and reflective of a high-level transferability of the cells, transferring the optimal cell of one task directly to another will be sub-optimal. However, using our method as described in Algorithm 1 by transferring the motifs in Fig. 8(a) to the CIFAR-100 and ImageNet tasks, we reduce the search space, and consequently the search time, drastically (as any cell to be evaluated now needs to contain one of the motifs in Fig. 8(a)), yet we do not preemptively rule out the optimal cell (as all optimal cells contain at least one \"good\" motif). As such, our method strikes a balance between performance and efficiency." }, { "heading": "E FURTHER REGRESSION RESULTS", "text": "" }, { "heading": "E.1 PREDICTIVE MEAN ± 1 STANDARD DEVIATION OF GPWL SURROGATE ON NAS DATASETS", "text": "In this section, we show the GPWL predictions on the various NAS datasets when trained with 50 samples each. Not only does GPWL produce a satisfactory predictive mean, in terms of both the rank correlation and the agreement with the ground truth, but it also provides sound uncertainty estimates: in most cases the ground truths lie within the error bar representing one standard deviation of the GP predictive distribution. For the training of GPWL, we always transform the validation errors (the targets of the regression) into log-scale, normalise the data, and transform it back at prediction time, as empirically we find this leads to better uncertainty estimates." }, { "heading": "E.2 COMPARISON WITH OTHER GRAPH KERNELS", "text": "We further compare the performance of the WL kernel against other popular graph kernels, such as the (fast) Random Walk (RW) (Kashima et al., 2003; Gärtner et al., 2003), Shortest-Path (SP) (Borgwardt and Kriegel, 2005) and Multiscale Laplacian (MLP) (Kondor and Pan, 2016) kernels, when combined with GPs. These competing graph kernels are chosen because they represent distinct graph kernel classes and are suitable for NAS search spaces with small or no modifications. In each NAS dataset, we randomly sample 50 architectures to train the GP surrogate and use another 400 architectures as the validation set to evaluate the rank correlation between the predicted and the ground-truth validation accuracy.\nWe repeat each trial 20 times, and report the mean and standard error of all the kernel choices on all NAS datasets in Table 3. We also include the worst-case complexity of the kernel computation between a pair of graphs in the table. The results in this section justify our reasoning in App. B; combined with the interpretability benefits we discussed, WL consistently outperforms other kernels across search spaces while retaining modest computational costs. RW often comes close as a competitor, but its computational complexity is worse and it does not always converge. MLP, which requires us to convert directed graphs to undirected graphs, performs poorly, thereby validating that directional information is highly important.
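For concreteness, the evaluation protocol above (50 training / 400 validation architectures, Spearman rank correlation as the metric) can be sketched as follows; `surrogate` is a hypothetical stand-in with fit/predict methods, not the actual GPWL implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation(surrogate, graphs, val_errors, n_train=50, n_val=400, seed=0):
    """Fit on a random subset and report Spearman rank correlation on held-out graphs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(graphs))
    train, val = idx[:n_train], idx[n_train:n_train + n_val]
    # As described in E.1: log-transform and normalise the regression targets.
    y = np.log(np.asarray(val_errors))
    mu, sigma = y[train].mean(), y[train].std()
    surrogate.fit([graphs[i] for i in train], (y[train] - mu) / sigma)
    pred = surrogate.predict([graphs[i] for i in val])  # predictive mean
    # Rank correlation is invariant to the monotone log/normalise transforms.
    rho, _ = spearmanr(pred, y[val])
    return rho
```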
E.3 VALUE OF H (MAXIMUM NUMBER OF WL ITERATIONS)\nAs discussed, the Weisfeiler-Lehman kernel is singly parameterised by $H$, the maximum number of WL iterations. The expressive power of the kernel generally increases with $H$, as the kernel is capable of covering increasingly global features, but at the same time we might overfit to the training set, posing a classical bias-variance trade-off. In this work, by combining WL with GP, we optimise $H$ against the negative log-marginal likelihood of the GP. In this section, we show on different datasets that this approach satisfactorily balances data fitting with model complexity.\nTo verify, on both the N101 and Flowers102 datasets we described, we train GPWL surrogates on 50 random training samples. On N101, we draw another 400 testing samples, and on Flowers102, we use the rest of the dataset as the validation set. We use the Spearman correlation between the predictions and the ground truths of the validation set as the performance metric. We summarise our results in Fig. 10: on both datasets, we observe a large jump in performance from $H = 0$ to $1$ (measured by the improvements in both validation and training Spearman correlation), and a slight dip in validation correlation from $H = 2$ to $3$, suggesting an increasing amount of overfitting if we increase $H$ further. In both cases, the automatic selection described above succeeded in finding the “sweet spot” of $H = 1$ or $2$, demonstrating the effectiveness of the approach." }, { "heading": "F CLOSED-DOMAIN EXPERIMENTAL DETAILS", "text": "All experiments were conducted on a 36-core 2.3GHz Intel Xeon processor with 512 GB RAM." }, { "heading": "F.1 DATASETS", "text": "We experiment on the following datasets:\n• NAS-Bench-101 (Ying et al., 2019): The search space is a directed acyclic graph with 7 nodes and a maximum of 9 edges. Besides the input node and output node, the remaining 5 operation nodes can each choose one of three possible operations: conv3×3-bn-relu, conv1×1-bn-relu and maxpool3×3. The dataset contains all 423,624 unique neural architectures in the search space. Each architecture is trained for 108 epochs and evaluated on CIFAR10 image data. The evaluation is repeated over 3 random initialisation seeds. We can access the final training/validation/test accuracy, the number of parameters as well as the training time of each architecture from the dataset. The dataset and its API can be downloaded from https://github.com/google-research/nasbench/.\n• NAS-Bench-201 (Dong and Yang, 2020): The search space is a directed acyclic graph with 4 nodes and 6 edges. Each edge corresponds to an operation selected from the set of 5 possible options: conv1×1, conv3×3, avgpool3×3, skip-connect and zeroize. This search space is applicable to almost all up-to-date NAS algorithms. Note that although the search space of NAS-Bench-201 is more general, it is smaller than that of NAS-Bench-101. The dataset contains all 15,625 unique neural architectures in the search space. Each architecture is trained for 200 epochs and evaluated on 3 image datasets: CIFAR10, CIFAR100 and ImageNet16-120. The evaluation is repeated over 3 random initialisation seeds. We can access the training accuracy/loss and validation accuracy/loss after every training epoch, the final test accuracy/loss, the number of parameters as well as FLOPs from the dataset.
The dataset and its API can be downloaded from https://github.com/D-X-Y/NAS-Bench-201.\n• Flowers102: We generate this dataset based on the random graph generators proposed in Xie et al. (2019). The search space is a directed acyclic graph with 32 nodes and a varying number of edges. Each node can take one of three possible options: input, output, relu-conv3×3-bn. Thus, the graph can have multiple inputs and outputs. This search space is very different from those of NAS-Bench-101 and NAS-Bench-201 and is used to test the scalability of our surrogate model to a large-scale search space (in terms of the number of nodes in the graph). The edges/wiring/connections in the graph are created by one of three classic random graph models: Erdos-Renyi (ER), Barabasi-Albert (BA) and Watts-Strogatz (WS). Different random graph models result in graphs of different topological structures and connectivity patterns and are defined by one or two hyperparameters. We investigate a total of 69 different sets of hyperparameters: 8 values for the hyperparameter of the ER model, 6 values for the hyperparameter of the BA model and 55 different value combinations for the two hyperparameters of the WS model. For each hyperparameter set, we generate 8 different architectures using the random graph model and train each architecture for 250 epochs before evaluating on the Flowers102 dataset. The training set-ups follow Liu et al. (2019). This results in our dataset of 552 randomly wired neural architectures." }, { "heading": "F.2 EXPERIMENTAL SETUP", "text": "NAS-BOWL We use a batch size B = 5 (i.e., at each BO iteration, the architectures yielding the top 5 acquisition function values are selected to be evaluated in parallel). When the mutation algorithm described in Sec. 3.2 is used, we use a pool size of P = 200, half of which is generated by mutating the top-10 best performing architectures already queried and the other half of which is generated by random sampling to encourage more exploration in NAS-Bench-101. In NAS-Bench-201, accounting for the much smaller search space and consequently the lesser need for exploration, we simply generate all architectures from mutation. For experiments with random acquisition, we also use P = 200 throughout, and we also study the effect of varying P later in this section. We use WL with optimal assignment (OA) (Kriege et al., 2016) for all datasets apart from NAS-Bench-201. Denoting the feature vectors of two graphs $G_1$ and $G_2$ as $\phi(G_1)$ and $\phi(G_2)$ respectively, the OA inner product in the WL case is given by the histogram intersection $\langle \phi(G_1), \phi(G_2) \rangle = \sum_j \min(\phi_j(G_1), \phi_j(G_2))$, where $\phi_j(\cdot)$ is the $j$-th element of the vector. On NAS-Bench-201, which features a much smaller search space, we find a simple dot product of the feature vectors $\phi(G_1)^\top \phi(G_2)$ to perform empirically better. We always use 10 random samples to initialise NAS-BOWL.\nOn the NAS-Bench-101 dataset, we always apply pruning (which is available in the NAS-Bench-101 API) to remove the invalid nodes and edges from the graphs. On the NAS-Bench-201 dataset, since the architectures are defined over a DARTS-like, edge-labelled search space, we first convert the edge-labelled graphs to node-labelled graphs as a pre-processing step. It is worth noting that it is possible to use a WL kernel defined over edge-labelled graphs directly (e.g. the WL-edge kernel proposed by Shervashidze et al. (2011)), although in this paper we find the WL kernels over node-labelled graphs to perform empirically better.
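A minimal sketch of the two inner products mentioned above, operating on precomputed WL feature (histogram) vectors:

```python
import numpy as np

def oa_inner_product(phi1: np.ndarray, phi2: np.ndarray) -> float:
    """Optimal-assignment (histogram intersection) inner product for WL features."""
    return float(np.minimum(phi1, phi2).sum())

def dot_inner_product(phi1: np.ndarray, phi2: np.ndarray) -> float:
    """Plain dot product, found to work better on the smaller NAS-Bench-201 space."""
    return float(phi1 @ phi2)

# Example with toy WL feature counts for two graphs.
phi_g1 = np.array([3.0, 1.0, 0.0, 2.0])
phi_g2 = np.array([2.0, 2.0, 1.0, 0.0])
print(oa_inner_product(phi_g1, phi_g2))   # 3.0
print(dot_inner_product(phi_g1, phi_g2))  # 8.0
```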
On the transfer learning setup of N201, we first run a standard optimisation task on CIFAR-10 (we term this the base task), but we allow for an expanded budget of up to 250 architecture evaluations to build up the confidence of the GPWL derivative estimates. We then extract the “good motifs” identified by GPWL (i.e. those features with derivatives in the top quantile) for the subsequent CIFAR-100/ImageNet16 optimisations (we term these the transferred tasks). On the transferred tasks, with everything else unmodified from the standard runs (e.g. budget, pool size, batch size, acquisition function choice, etc.), we additionally enforce the pruning rule such that only candidates in the pool with at least 1 match to the previously identified “good motifs” are allowed for evaluation, and the rest are removed. The key difference is that under standard runs, the pool of size P is generated once per BO iteration via the random sampling/mutation algorithm, since all candidates are accepted; here, this procedure is executed as many times as required until we have a pool of P architectures where each meets the pruning criteria.
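A sketch of this pool-construction rule for the transferred tasks; `sample_candidate` and `wl_features` are hypothetical stand-ins for the sampling/mutation routine and the WL feature extractor, and `good_motifs` is assumed to be a set of feature identifiers.

```python
def build_pruned_pool(sample_candidate, wl_features, good_motifs, pool_size=200):
    """Resample until the pool holds `pool_size` candidates, each containing
    at least one of the "good" WL motifs identified on the base task."""
    pool = []
    while len(pool) < pool_size:
        candidate = sample_candidate()
        if good_motifs & set(wl_features(candidate)):  # at least 1 motif match
            pool.append(candidate)
    return pool
```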
BANANAS We use the code made public by the authors (White et al., 2019) (https://github.com/naszilla/bananas), and use the default settings contained in the code with the exception of the number of architectures queried at each BO iteration (i.e. the BO batch size): the default is 10, but to conform to our test settings we use 5 instead. While we do not change the default pool size of P = 200 at each BO iteration, instead of filling the pool entirely from mutation of the best architectures, we only mutate 100 architectures from the top-10 best architectures and generate the other 100 randomly, to enable a fair comparison with our method. It is worth noting that neither change led to a significant deterioration in the performance of BANANAS: under the deterministic validation error setup, the results we report are largely consistent with the results reported in White et al. (2019); under the stochastic validation error setup, our BANANAS results actually slightly outperform the results in the original paper. It is finally worth noting that a public implementation of BANANAS on NAS-Bench-201 was not released by the authors.\nGCNBO for NAS We implemented the GNN surrogate in Sec. 5.1 ourselves following the description in the most recent work (Shi et al., 2019), which uses a graph convolutional neural network in combination with a Bayesian linear regression layer to predict architecture performance in its BO-based NAS6. To ensure fair comparison with our NAS-BOWL, we then define a normal Expected Improvement (EI) acquisition function based on the predictive distribution of the GNN surrogate to obtain another BO-based NAS baseline in Sec. 5.2, GCNBO. Similar to all the other baselines, including our NASBOWLr and BANANASr, we use random sampling to generate candidate architectures for acquisition function optimisation. However, different from NAS-BOWL and BANANAS, GCNBO uses a batch size B = 1, i.e. at each BO iteration, NAS-BOWL and BANANAS select 5 new architectures to evaluate next but GCNBO selects 1 new architecture to evaluate next. This setup should favour GCNBO if we measure the optimisation performance against the number of architecture evaluations, which is the metric used in Figs. 4 and 5, because at each BO iteration, GCNBO selects the next architecture $G_t$ based on the most up-to-date information $\alpha_t(G|D_{t-1})$, whereas NAS-BOWL and BANANAS only select one architecture $G_{t,1}$ in such a fully informed way but select the other four architectures $\{G_{t,i}\}_{i=2}^{5}$ with outdated information. Specifically, in the sequential case (B = 1), $G_{t,2}$ is selected only after we have evaluated $G_{t,1}$, by maximising $\alpha_t(G|\{D_{t-1}, (G_{t,1}, y_{t,1})\})$; the same procedure applies for $G_{t,3}$, $G_{t,4}$ and $G_{t,5}$. However, in the batch case (B = 5), where $G_{t,i}$ for $2 \leq i \leq 5$ need to be selected before $G_{t,i-1}$ is evaluated, $\{G_{t,i}\}_{i=2}^{5}$ are all decided based on $\alpha_t(G|D_{t-1})$, like $G_{t,1}$. For a more detailed discussion of sequential (B = 1) and batch (B > 1) BO, the reader is referred to Alvi et al. (2019).\nOther Baselines For all the other baselines: random search (Bergstra and Bengio, 2012), TPE (Bergstra et al., 2011), Reinforcement Learning (Zoph and Le, 2016), BO with SMAC (Hutter et al., 2011) and regularised evolution (Real et al., 2019), we follow the implementation available at https://github.com/automl/nas_benchmarks for NAS-Bench-101 (Ying et al., 2019). We modify them to be applicable on NAS-Bench-201 (Dong and Yang, 2020). Note that like GCNBO, all these methods are sequential (B = 1), and thus should enjoy the same advantage mentioned above when measured against the number of architectures evaluated.\n6Shi et al. (2019) did not publicly release their code." }, { "heading": "F.3 ADDITIONAL NAS-BENCH RESULTS", "text": "Validation Errors Against Number of Evaluations We show the validation errors against the number of evaluations using both stochastic and deterministic validation errors of the NAS-Bench datasets in Fig. 11. It is worth noting that regardless of whether the validation errors are stochastic or not, the test errors are always averaged to deterministic values for fair comparison. NAS-BOWL still outperforms the other methods under this metric, achieving lower test error, faster convergence, or both under most circumstances. This corresponds well with the results on the test error in Fig. 5 and further confirms the superior performance of our proposed NAS-BOWL in searching for optimal architectures.\nEffect of Varying Pool Size As discussed in the main text, NAS-BOWL introduces no inherent hyperparameters that require manual tuning, as it relies on a non-parametric surrogate. Nonetheless, besides the surrogate, the choice of how to generate the candidate architectures requires us to specify a number of parameters, such as the pool size (P, the number of candidate architectures to generate at each BO iteration) and the batch size B. In our main experiments, we have set P = 200 and B = 5 throughout; in this section, we consider the effect of varying P to investigate whether the performance of NAS-BOWL is sensitive to this parameter.\nWe keep B = 5 but adjust P ∈ {50, 100, 200, 400}, keep all other settings consistent with the other experiments using the deterministic validation errors on NAS-Bench-101 (N101) (i.e. averaging the validation errors over seeds to remove stochasticity), and report our results in Fig. 12, where the median result is computed from 20 experiment repeats.
While the convergence speed varies slightly between the different choices of P, for all choices apart from P = 50, which performs slightly worse, NAS-BOWL converges to similar validation and test errors at the end of 150 architecture evaluations; this suggests that the performance of NAS-BOWL is rather robust to the value of P, and that our recommendation of P = 200 performs well in terms of both the final solution returned and the convergence speed.\nAblation Studies In this section we perform ablation studies on the NAS-BOWL performance on both N101 and N201 (with deterministic validation errors). We repeat each experiment 20 times, and we present the median and standard error in terms of both validation and test performances in Fig. 13 (N101 in (a)(b) and N201 in (c)(d)). We now explain each legend as follows:\n1. mutate: Full NAS-BOWL with the mutation described in Sec. 3.2 (identical to NASBOWLm in Figs. 11 and 5);\n2. rand: NAS-BOWL with random candidate generation. This is identical to NASBOWLr in Figs. 11 and 5;\n3. UCB: NAS-BOWL with random candidate generation, but with the acquisition function changed from Expected Improvement (EI) to the Upper Confidence Bound (UCB) of Srinivas et al. (2009), $\alpha_{\mathrm{acq}} = \mu + \beta_n \sigma$, where $\mu, \sigma$ are the predictive mean and standard deviation of the GPWL surrogate, respectively, and $\beta_n$ is a coefficient that changes as a function of $n$, the number of BO iterations. We set $\beta$ at initialisation ($\beta_0$) to 3, but decay it according to $\beta_n = \beta_0 \sqrt{\frac{1}{2}\log(2(n+1))}$, as suggested by Srinivas et al. (2009) (see the sketch after this section);\n4. VH: NAS-BOWL with random candidate generation, but instead of leaving the value of $h$ (number of WL iterations) to be automatically determined by the optimisation of the GP log marginal likelihood, we set $h = 0$, i.e. no WL iteration takes place and the only features we use are the counts of each type of original node operation (e.g. conv3×3-bn-relu). This essentially reduces the WL kernel to a Vertex Histogram (VH) kernel.\nWe find that topological information and using an appropriate $h$ are highly crucial: in both N101 and N201, VH significantly underperforms the other variants, although the extent of underperformance is smaller in N201, likely due to its smaller search space. This suggests that how the nodes are connected, which is extracted as higher-order WL features, is very important, and the multi-scale feature extraction in the WL kernel is crucial to the success of NAS-BOWL. On the other hand, the choice of the acquisition function seems not to matter as much, as there is little difference between the UCB and the default EI (rand) runs in both N101 and N201. Finally, using the mutation algorithm leads to a significant improvement in the performance of NAS-BOWL, as we have already seen in the main text."
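Below is a minimal sketch of the UCB acquisition with the decaying coefficient used in ablation item 3 above; it is a schematic of the schedule, not the exact code.

```python
import numpy as np

def ucb_acquisition(mu: np.ndarray, sigma: np.ndarray, n: int,
                    beta0: float = 3.0) -> np.ndarray:
    """alpha_acq = mu + beta_n * sigma, with beta_n = beta0 * sqrt(0.5 * log(2(n + 1)))."""
    beta_n = beta0 * np.sqrt(0.5 * np.log(2.0 * (n + 1)))
    return mu + beta_n * sigma
```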
}, { "heading": "G.1 SEARCH SPACE", "text": "Our search space is identical to that of DARTS (Liu et al., 2018a): it is in the popular NASNet search space, and we limit the maximum number of operation nodes to be 4 (in addition to 2 input nodes and 1 input node in each cell), and the possible node operations are 3 × 3 and 5 × 5 separable convolutions (sep-conv-3×3 and sep-conv-5×5), 3 × 3 and 5 × 5 dilated convolutions (dil-conv-3×3 and dil-conv-5×5), 3 × 3 max pooling and average pooling (max-pool-3×3 and avg-pool-3×3), identity skip connection (skip-connect) and zeroise (none).To enable the application of the GPWL surrogate without modification, we use the ENASstyle node-attributed DAG representation of the cells (this representation can be easily converted to the DARTS-style edge-attributed DAG without any loss of information). We show the best cell identified by NAS-BOWL in the DARTS search space in both edge- and node-attributed DAG representations in Fig, 14 as an example." }, { "heading": "G.2 EXPERIMENTAL SETUP", "text": "We mostly follow the setup and the code base (https://github.com/quark0/darts) from DARTS (Liu et al., 2018a), and we detail the setup in full below:\nArchitecture Search During the architecture search, we use half of the CIFAR-10 training data and leave the other half as the validation set. We use stack the search cell 8 times to produced a small network, and use batch size of 64 and initial number of channels of 16. As discussed, we only search one cell, and use this for both normal and reduction cells (in DARTS the two cells are searched separately). We use SGD optimiser with momentum of 0.9, weight decay of 3× 10−4 and an initial learning rate of 0.025 which is cosine annealed to zero over 50 epochs. As known by previous works (Liu et al., 2018a), the validation performance on CIFAR-10 is very volatile, and to ameliorate this we feed the average of the validation accuracy of the final 5 epochs as the observed accuracy to the GPWL surrogate. We use the identical setup for GPWL surrogate as the NAS-Bench experiments and we use the standard mutation algorithm described to generate the candidates every BO iteration.\nArchitecture Evaluation After the search budget (set to 150 architectures) is exhausted, we evaluate the neural network stacked from the best architecture found, based on the validation accuracy during the search stage. During evaluation, we construct a larger network of 20 cells and is trained for 600 epochs with batch size 96 and initial number of channels of 36. Additional enhancements that are almost universally used in previous works, such as path dropout of probability 0.2, cutout, and auxiliary towers with weight 0.4 are also applied in this stage (these techniques are all identical to those used in Liu et al. (2018a)). Any other enhancements not used in DARTS such as mixup, AutoAugment and test-time data augmentation are not applied. The optimiser setting is identical to that during architecture search, with the exception that the cosine annealing is over the full 600 epochs instead of 50 during search. During this stage, we use the entire CIFAR-10 training set for training, and report best accuracy encountered during evaluation on the validation set in Table 1. We finally train the final architecture for 4 random repeats on CIFAR-10 dataset." } ]
2021
null
SP:90e01288266255a58201a01f06dd8fcc4cac4034
[ "This paper improves upon existing approaches for learning to fill forms on the web automatically. The main idea is to train an adversary to generate a curriculum of environments to train an agent to learn to fill forms on the web. Training such an adversary can be challenging since the adversary may prove to be too strong for the main agent to learn anything from. Thus, the paper proposes few techniques to control or shape this adversary such that the main agent is able to learn quickly as compared to similar existing approach. " ]
Learning to autonomously navigate the web is a difficult sequential decision-making task. The state and action spaces are large and combinatorial in nature, and successful navigation may require traversing several partially-observed pages. One of the bottlenecks of training web navigation agents is providing a learnable curriculum of training environments that can cover the large variety of real-world websites. Therefore, we propose using Adversarial Environment Generation (AEG) to generate challenging web environments in which to train reinforcement learning (RL) agents. We introduce a new benchmarking environment, gMiniWoB, which enables an RL adversary to use compositional primitives to learn to generate complex websites. To train the adversary, we present a new decoder-like architecture that can directly control the difficulty of the environment, and a new training technique, Flexible b-PAIRED. Flexible b-PAIRED jointly trains the adversary and a population of navigator agents and incentivizes the adversary to generate "just-the-right-challenge" environments by simultaneously learning two policies encoded in the adversary's architecture. First, for its environment complexity choice (difficulty budget), the adversary is rewarded with the performance of the best-performing agent in the population. Second, for selecting the design elements, the adversary learns to maximize the regret using the difference in capabilities of the navigator agents in the population (flexible regret). The results show that the navigator agent trained with Flexible b-PAIRED generalizes to new environments and significantly outperforms competitive automatic curriculum generation baselines, including a state-of-the-art RL web navigation approach and prior methods for minimax regret AEG, on a set of challenging unseen test environments that are an order of magnitude more complex than the previous benchmarks. The navigator agent achieves more than a 75% success rate on all tasks, yielding a 4x higher success rate than the strongest baseline.
[]
[ { "authors": [ "Andres Campero", "Roberta Raileanu", "Heinrich Küttler", "Joshua B Tenenbaum", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "Learning with amigo: Adversarially motivated intrinsic goals", "venue": "arXiv preprint arXiv:2006.12122,", "year": 2020 }, { "authors": [ "Michael Dennis", "Natasha Jaques", "Eugene Vinitsky", "Alexandre Bayen", "Stuart Russell", "Andrew Critch", "Sergey Levine" ], "title": "Emergent complexity and zero-shot transfer via unsupervised environment design", "venue": "Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Alex Graves", "Marc G Bellemare", "Jacob Menick", "Remi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "arXiv preprint arXiv:1704.03003,", "year": 2017 }, { "authors": [ "Izzeddin Gur", "Uli Rueckert", "Aleksandra Faust", "Dilek Hakkani-Tur" ], "title": "Learning to navigate the web", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Nick Jakobi" ], "title": "Evolutionary robotics and the radical envelope-of-noise hypothesis", "venue": "Adaptive behavior,", "year": 1997 }, { "authors": [ "Y. Keneshloo", "T. Shi", "N. Ramakrishnan", "C.K. Reddy" ], "title": "Deep reinforcement learning for sequence-to-sequence models", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Joel Z Leibo", "Edward Hughes", "Marc Lanctot", "Thore Graepel" ], "title": "Autocurricula and the emergence of innovation from social interaction: A manifesto for multi-agent intelligence research", "venue": null, "year": 1903 }, { "authors": [ "Evan Zheran Liu", "Kelvin Guu", "Panupong Pasupat", "Tianlin Shi", "Percy Liang" ], "title": "Reinforcement learning on web interfaces using workflow-guided exploration", "venue": "arXiv preprint arXiv:1802.08802,", "year": 2018 }, { "authors": [ "Tambet Matiisen", "Avital Oliver", "Taco Cohen", "John Schulman" ], "title": "Teacher-student curriculum learning", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "Eric Mazumdar", "Lillian J Ratliff", "Michael I Jordan", "S Shankar Sastry" ], "title": "Policy-gradient algorithms have no guarantees of convergence in continuous action and state multi-agent settings", "venue": "arXiv preprint arXiv:1907.03712,", "year": 2019 }, { "authors": [ "Eric V Mazumdar", "Michael I Jordan", "S Shankar Sastry" ], "title": "On finding local nash equilibria (and only local nash equilibria) in zero-sum games", "venue": "arXiv preprint arXiv:1901.00838,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "Proceedings of Machine Learning Research. 
PMLR,", "year": 2016 }, { "authors": [ "Rémy Portelas", "Cédric Colas", "Katja Hofmann", "Pierre-Yves Oudeyer" ], "title": "Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Fereshteh Sadeghi", "Sergey Levine" ], "title": "Cad2rl: Real single-image flight without a single real image", "venue": "arXiv preprint arXiv:1611.04201,", "year": 2016 }, { "authors": [ "Tianlin Shi", "Andrej Karpathy", "Linxi Fan", "Jonathan Hernandez", "Percy Liang" ], "title": "World of bits: An open-domain platform for web-based agents", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ international conference on intelligent robots and systems (IROS),", "year": 2017 }, { "authors": [ "Rui Wang", "Joel Lehman", "Jeff Clune", "Kenneth O Stanley" ], "title": "Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions", "venue": null, "year": 1901 }, { "authors": [ "Rui Wang", "Joel Lehman", "Aditya Rawal", "Jiale Zhi", "Yulun Li", "Jeff Clune", "Kenneth O Stanley" ], "title": "Enhanced poet: Open-ended reinforcement learning through unbounded invention of learning challenges and their solutions", "venue": "arXiv preprint arXiv:2003.08536,", "year": 2020 }, { "authors": [ "As Dennis" ], "title": "show, if the adversary and antagonist coordinate and reach a Nash equilibrium with the protagonist, then the protagonist will have learned to minimize the regret. However, in practice gradient-based multi-agent RL has no convergence guarantees, is highly non-stationary, and will often fail to converge (Mazumdar et al., 2019a;b). If the antagonist and adversary in PAIRED fail to coordinate, then PAIRED minimizes regret with respect to the antagonist’s policy", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Autonomous web navigation agents that complete tedious, digital tasks, such a booking a flight or filling out forms, have a potential to significantly improve user experience and systems’ accessibility. The agents could enable a user to issue requests such as, “Buy me a plane ticket to Los Angeles leaving on Friday”, and have the agent automatically handle the details of completing these tasks. However, the complexity and diversity of real-world websites make this a formidable challenge.\nGeneral web navigation form-filling tasks such as these require an agent to navigate through a set of web pages, matching user’s information to the appropriate elements on a web page. This is a highly challenging decision-making problem for several reasons. First, the observation space is large, and partially-observable, consisting of a single web page in the flow of several web pages (e.g. the payment information page is only one part of a shopping task). Web pages are represented using the Document Object Model (DOM), a tree of web elements with hundreds of nodes. Second, actions are all possible combination of the web elements (fill-in boxes, drop-downs, click on the buttons) and their possible values. For example, the drop-down selection actions are only appropriate if there there is a drop-down menu present. Even if the agent is able to navigate the site to arrive at the correct page, and eventually select the correct element (e.g. the ‘departure’ field for booking a flight), there are many possible values it can insert (e.g. all user input). Therefore, the action space is discrete and prohibitively large, with only a valid set of actions changing with the context. Finally, the same task, such as booking a flight, results in a very different experience and workflow depending on the\nwebsite. The agent must be able to adapt and operate in the new environment to complete the task. Therefore, the reinforcement learning (RL) agents should be capable of zero-shot generalization to new environments.\nPrior work made significant strides toward learning web navigation on a single website, yet the existing methods do not scale. Behavior cloning from expert demonstrations (Shi et al., 2017; Liu et al., 2018) shows promising results, however, it requires a number of demonstrations for every single website. RL agent trained using synthetic demonstrations created with a generative model Gur et al. (2019) improves the performance. Yet, the method still requires training a separate policy for every single website requiring tens of thousands of interactions with every website. Lastly, the existing benchmarks (Shi et al., 2017; Liu et al., 2018) have limited complexity. Their DOM trees are fixed and considerably smaller than real websites.\nWe aim to train RL agents to solve web navigation form-filling tasks; by correctly entering relevant information into unknown websites. Successful generalization to new websites requires training an agent on a large distribution of possible tasks and environments. The question is how to create a distribution that will not only cover most realistic tasks, but can be presented in a curriculum that is learnable by the agent. Manually designing a pre-defined curriculum of hand-built websites is tedious, and intractable. Another option would be to apply domain randomization (DR) (as in e.g. Jakobi (1997); Sadeghi & Levine (2016); Tobin et al. 
(2017)) to randomize parameters of websites, or to automatically increase some parameter controlling the difficulty over time (as in Gur et al. (2019)). However, all these approaches are likely to fail to cover important test cases, and cannot tailor the difficulty of the parameter configuration to the current ability of the agent.\nAdversarial Environment Generation (AEG) trains a learning adversary to automatically generate a curriculum of training environments, enabling both increased complexity of training environments and generalization to new, unforeseen test environments. However, if we naively train a minimax adversary (i.e. an adversary that seeks to minimize the performance of the learning agent), the adversary is motivated to create the hardest possible website, preventing learning. Instead, PAIRED (Protagonist Antagonist Induced Regret Environment Design) (Dennis et al., 2020) trains the adversary to maximize the regret, estimated as the difference between two navigation agents (protagonist and antagonist). While PAIRED shows exciting results, without explicit feedback on how skillful the antagonist is, or a mechanism to control the difficulty of the environment, the method is susceptible to local minima and has a hard time learning in complex environments where the regret is zero.\nWe present Flexible b-PAIRED, which builds on the PAIRED framework and jointly trains the adversarial RL agent (adversary) and a population of navigator agents. The Flexible b-PAIRED adversary learns to present "just-the-right-challenge" to the navigation agents. We enable the Flexible b-PAIRED adversary to tailor the environment difficulty to the ability of the best-performing agent by introducing an explicit difficulty budgeting mechanism and a novel multi-objective loss function. The budgeting mechanism gives the adversary direct control over the difficulty of the generated environment. The adversary training simultaneously optimizes for an objective that ties the adversary's difficulty budget to the navigator agents' performance (observed expected return), and for a population-based regret similar to PAIRED. Lastly, to enable AEG web design, we present a new benchmarking environment, gMiniWoB, and a web-design adversary architecture. gMiniWoB enables an adversary to construct websites of increasing complexity out of common design primitives such as navigation bars, product carousels, item decks, web forms, and item carts. The evaluation environments in gMiniWoB are an order of magnitude more complex than MiniWoB (Shi et al., 2017). The adversary architecture is an LSTM-based decoder, seeded with a random seed. It first selects the number of web pages. Then, at each step of an open loop, the adversary either emits a design element and its placement, or opts to skip an element and save design budget.
The adversary's consumed difficulty budget is measured via the log-likelihood of the joint probability of not adding design elements.\nThis paper makes the following contributions: i) a new benchmarking environment, gMiniWoB, which empowers the use of AEG for web navigation by enabling the construction of websites out of compositional design primitives; ii) the Flexible b-PAIRED algorithm, which computes a more stable estimate of regret and directly incentivizes the adversary to tailor the complexity of the generated environment to the performance of the best-performing agent; iii) a web navigation adversary decoder architecture; and iv) empirical results demonstrating that Flexible b-PAIRED generates a curriculum of increasingly challenging websites, and produces agents that can successfully generalize to navigating complex, unseen sites at test time. The Flexible b-PAIRED approach significantly outperforms prior work on minimax regret AEG (Dennis et al., 2020), as well as a state-of-the-art approach for using RL to train web navigation agents (Gur et al., 2019), resulting in agents that complete the most difficult tasks with more than 75% success rate, a 4x improvement over the strongest baseline. We are releasing gMiniWoB in open source in the hope of enabling further progress on this problem. We hope that this work will provide a meaningful way to make progress on the exceptionally challenging problem of learning to navigate the web, and will be of interest to the wider RL research community for auto-curriculum design in complex and compositional environments." }, { "heading": "2 RELATED WORK", "text": "Web navigation benchmarks and tasks: Prior work on training agents to navigate the web introduced the MiniWoB (Shi et al., 2017) and MiniWoB++ (Liu et al., 2018) environments, a fixed set of manually curated toy websites, but relied on obtaining expert demonstrations for each website, which cannot scale effectively to cover the large variety of real-world websites and cannot adapt to changing websites. Further, these methods failed to solve complex web navigation tasks such as flight booking or social media interaction (Gur et al., 2019). Gur et al. (2019) take a step further by training an RL agent to solve complex web navigation tasks using a scheduled curriculum. The curriculum linearly increases a parameter p, in which 1 − p controls the number of web elements that are solved by querying an oracle policy, which is obtained via expert data. This work differs in several ways. First, we introduce a new framework, gMiniWoB, that allows generating complex websites on-the-fly with tunable difficulty levels. Additionally, we do not rely on any expert demonstrations to augment sparse rewards. We use AEG to automatically learn to generate a curriculum of web navigation tasks that are tailored to the current skill level of the agent. Next, we make no assumption on the availability of any website, while they assume websites are given a priori. Lastly, our web navigation agents generalize to unseen environments.\nGoal Generation: Florensa et al. (2018) train a Generative Adversarial Network (GAN) for generating a curriculum of goals with fixed environment dynamics. A generator is trained to output new goals and a discriminator is trained to predict if a goal is achievable. The generator is bootstrapped from sample goals that the initial agent is able to reach in the environment. It is tested on simple navigation tasks in the same environments. 
In contrast, we train an adversary that generates a curriculum of environments, including goals, starting with an empty environment, in which bootstrapping a generator network from sample episodes is not possible. We test on unseen environments with more complicated and high-dimensional state and action spaces.\nAdversarial Environment Generation: Multi-agent training can be an effective method for automatically generating a curriculum of RL tasks (e.g. Leibo et al. (2019); Matiisen et al. (2019); Graves et al. (2017); Portelas et al. (2020)). For example, Asymmetric Self Play (ASP) (Sukhbaatar et al., 2017) trains two agents, in which the second agent must learn to repeat the actions taken by the first (demonstrator) agent. Both agents play in the same, fixed environment. In contrast, we use a third agent to learn to generate challenging new environments. POET (Wang et al., 2019; 2020) is
}, { "heading": "3 BACKGROUND ON WEB NAVIGATION PROBLEM", "text": "Following previous work (Shi et al., 2017; Gur et al., 2019; Liu et al., 2018), we formulate web navigation as a sequential decision making problem where we train an agent, parameterized by a network π(at|st; Θi), that maps an input state st to output action distribution to maximize the cumulative discounted reward, .i.e., O = ∑T t=0 γ\ntrt where rt is the reward at time step t, γ is a discount factor, and T is the length of an episode. We use the web page and user instruction as the input state. The web page is dynamically updated at each time step, while the instruction is fixed at the beginning of an episode. We represent web pages using Document Object Model (DOM), a tree of elements in a page, where each element is denoted by a set of (attribute, value) pairs and an array of features (such as spatial coordinates). Instructions are given as a set of fields where each field is a (key, value) pair. Keys are fixed for each task and values dynamically change based on user input.\nEach action is represented as a tuple (element, field) that denotes acting on the element using the field as an input; i.e. typing the value of the field into the element. Agents receive a task success reward (1.0 or -1.0) at the end of each episode, a potential-based reward when the value of an element in the page is updated, and a small penalty each timestep to encourage efficient navigation. As an example, consider a flight booking task where the agent is given an instruction {\"Departure Date\": \"Friday\", Destination Airport: \"Los Angeles (LAX)\"}. The agent first picks a field (e.g. destination airport) and finds the corresponding text box in the page; then the corresponding value (“Los Angeles (LAX)”) typed in to the text box. If this value is correct, the agent receives a positive reward of 1/2 where 2 is the number of fields in the instruction." }, { "heading": "4 METHODS FOR LEARNING TO DESIGN WEB ENVIRONMENTS", "text": "This Section presents the generative MiniWob environment (Section 4.1), the adversary neural network architecture (Section 4.2) and the adversary training procedure in Section 4.3." }, { "heading": "4.1 GENERATIVE MINIWOB (GMINIWOB) ENVIRONMENT", "text": "Generative MiniWoB (gMiniWoB) creates website environments for form-filling tasks, which consist of a linked-list of webpages, Ew = [W1, · · · ,WK ]. Each webpage, Wi, is a DOM tree that contains a number of elements, such as fill-in boxes, drop downs, and buttons. To create a new environment Ew, gMiniWoB starts with an empty website that is gradually populated by new pages, Wi. A subset of elements are also augmented with events that enable page transitions. For example, an ”on-click” event on a Submit button on a pageWi, will link to a pageWi+1. We formulate the website design as combining a set of primitive DOM sub-trees that are general enough to create complex websites but also facilitate controllable generation. The order in which the primitives are combined also defines how the web page will be rendered as well. Let’s assume that we are given a list of DOM tree primitives T and an empty web pageW = (S, C) where S is a single root node of the DOM tree and C is an ordered list of subtrees rooted at S which is initially empty. By repetitively sampling new primitives from T and appending them to C, we create a new page,W, which follows the order of primitives in C when rendered (see Figure 3). 
We first create a set of underspecified DOM tree templates, i.e. sub-trees in which certain elements and attributes are replaced with variables. Assigning values to the variables in a template fully specifies a DOM tree primitive that is placed in the subtree list $C$ to create a new web page, $W$ (Algorithm 2). For example, an input template (Figure 2a) has a variable label and a text box with a common parent. There are two ways to assign values: either by picking the label element and assigning a value to its text attribute (Figure 2b), or by assigning a value to the inner text of the text box and ignoring the label element (Figure 2c).\nWebsite Design Primitives: gMiniWoB implements 40 different design primitives from 11 different underspecified DOM templates. The primitives are widely used across the web and include ‘navigation bars’, ‘product carousels’, ‘item decks’, ‘web forms’, ‘item carts’, ‘dropdowns’, etc. Every primitive includes at least one actionable element that changes the DOM structure when the agent interacts with it. Each primitive belongs to one of two categories based on its use in the reward computation: (i) active primitives (used), and (ii) passive primitives (not used). 26 of the 40 primitives are active, while the rest are passive. When an active primitive is added to a web page, the instruction automatically grows as well. For example, adding the ‘First Name’ text box in Figure 2c also adds a new “firstname” field to the user instruction. Expanding the instruction set makes active primitives more complicated to learn, while the passive primitives mostly serve as noise. However, real websites contain many distracting elements (passive primitives), so it is important for agents to learn to ignore them. Appendix A.9 details all the design primitives used, and Appendix A.10 shows the websites in the test set.\n1For simplicity of illustration, we show an example generation process where primitives are generated in increasing order of the page indices; however, in our formulation (see Section 4.2 for details), the page indices corresponding to consecutive LSTM timesteps do not necessarily increase monotonically." }, { "heading": "4.2 ADVERSARIAL ENVIRONMENT DECODER ARCHITECTURE", "text": "We present an adversary decoder policy for the compositional environment generation problem, where the goal is to place a set of design primitives into a set of locations. The adversary generates a (website) environment, $E_w = [W_1, \cdots, W_K]$. We assume a fixed maximum number of pages $K$, although to control complexity, we allow pages and subtrees to be empty.\nWe parametrize the adversary with a policy $\pi_E(a^A|o^A)$ such that\n$\pi_E(a^A|o^A) = \pi_{E_w}(k|K) \prod_{i=0}^{N} \pi(a_i, b_i \,|\, a_{0 \cdots i-1}, b_{0 \cdots i-1}, k)$ (1)\nwhere $N$ is an upper limit on the number of outputs (the total number of primitives in the environment $E_w$), $K$ is an upper limit on the number of web pages, and $a_i$ is a design primitive. We augment the primitive design actions described in Section 4.1 with a special SKIP action that does nothing when executed by the renderer. This allows the adversary to control the number of primitives added. $b_i$ is the index of the web page where the primitive $a_i$ should be placed. $o^A$ is an initial observation. The adversary first samples the number of locations $k$ from a parametrized Categorical distribution $\mathrm{Cat}(0, K)$. Conditioned on $o^A$, it executes an autoregressive model to generate a set of primitives and their corresponding locations within $[0, \cdots, k]$. $a^A$ is the resulting environment, totalling at most $N$ primitives placed over $k$ pages.
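The following is a condensed PyTorch sketch of a decoder realising Eq. (1); the layer sizes, names and feedback encoding are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class AdversaryDecoder(nn.Module):
    """Samples k pages, then N (primitive, location) pairs; index 0 of the
    primitive head is reserved here for the SKIP action (an assumption)."""
    def __init__(self, n_primitives: int, max_pages: int, hidden: int = 64):
        super().__init__()
        self.f0 = nn.Linear(8, hidden)                 # encodes o^A ~ N(0, I)
        self.fK = nn.Linear(hidden, max_pages + 1)     # distribution over k
        self.cell = nn.LSTMCell(hidden, hidden)
        self.fP = nn.Linear(hidden, n_primitives + 1)  # primitives + SKIP
        self.fL = nn.Linear(hidden, max_pages)         # location head
        self.fI = nn.Linear(n_primitives + 1 + max_pages, hidden)

    def forward(self, n_steps: int = 10):
        h0 = torch.tanh(self.f0(torch.randn(1, 8)))
        k = int(Categorical(logits=self.fK(h0)).sample())    # number of pages
        h, c, inp, actions = h0, torch.zeros_like(h0), h0, []
        for _ in range(n_steps):
            h, c = self.cell(inp, (h, c))
            a = int(Categorical(logits=self.fP(h)).sample())  # primitive or SKIP
            b = int(Categorical(logits=self.fL(h)[:, : max(k, 1)]).sample())
            actions.append((a, b))
            onehot = torch.zeros(1, self.fI.in_features)      # feed back (a, b)
            onehot[0, a] = 1.0
            onehot[0, self.fP.out_features + b] = 1.0
            inp = torch.tanh(self.fI(onehot))
        return k, actions  # sent to the renderer to build the website
```

During training, the log-probabilities of the sampled actions (including SKIP) are what the RL and budget losses in Sec. 4.3.1 operate on.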
The initial observation $o^A$ is sampled from the standard normal distribution, to allow the adversary to diversify its design distribution. This observation is encoded with a feed-forward network, $h_0 = f_0(o^A)$, and $h_0$ is passed to another network $f_K$ that outputs a distribution over the number of empty pages. The same hidden state $h_0$ is passed to an LSTM network as the initial input vector, and the output of the LSTM is used by two independent networks $f_P$ and $f_L$ to (i) learn a distribution over design primitives and (ii) learn a distribution over locations, respectively. We sample an action (a joint primitive and location pair) from these distributions, and it is encoded by another network $f_I$ into a hidden state which is used as the input to the LSTM at the next step. After $N$ steps, the sampled design actions are sent to a renderer module which generates the environment (Figure 3).\nNote that the adversary is domain independent, and creates a generic compositional task environment with $K$ linked sub-environments, each containing sub-tasks sampled from the design primitives. Since the renderer interprets the design decisions and builds the environments for the navigator agents to use, the adversary architecture can be used in other domains without modification." }, { "heading": "4.3 ADVERSARY TRAINING", "text": "We train both the adversary and the navigation agents with reinforcement learning. At every step $t$ of the training, the adversary generates an environment $E_w(t)$. The web navigation training initializes a population of navigation agents $\mathcal{A} = \{A_i, i = 1, \cdots, n_a\}$. The agents $A_i \in \mathcal{A}$ collect $M$ trajectories $\tau_{i,j}$ with returns $R_{i,j}$, where $R_{i,j}$ is the discounted cumulative reward that agent $A_i$ observes by navigating the environment $E_w$, resulting in trajectory $\tau_{i,j} \sim \pi_{A_i}$. The navigation agents are trained with the standard task-related reward described in Section 3 and out-of-the-box A2C with entropy (Mnih et al., 2016). To train the adversary, we augment the A2C-with-entropy objective with a custom loss function that encourages the adversary to control the complexity of the environment by presenting "just-the-right" challenge for the agents in $\mathcal{A}$ (see Algorithm 1)." }, { "heading": "4.3.1 LOSS FUNCTIONS", "text": "Let $R_{i,j}(t)$ be the observed returns (cumulative discounted reward) of an agent $A_i(t) \in \mathcal{A}$ while sampling the $j$-th trajectory during training iteration $t$. Let $R_A$ and $R_P$ be the maximum and average expected returns at iteration $t$:\n$R_A = \max_i \mathbb{E}[R_i], \quad R_P = \mathbb{E}[\mathbb{E}[R_i]], \quad \mathbb{E}[R_i] = \frac{1}{M} \sum_{j=1}^{M} R_{i,j}, \quad i = 1, \cdots, n_a$ (2)\nThen, the adversary loss function consists of two terms:\n$J(\theta|\mathcal{A}, E_w) = J_{rl}(\theta \,|\, \mathrm{REGRET}(\mathcal{A}|E_w)) + \alpha \cdot J_{budget}(\theta \,|\, \mathcal{A}, E_w)$ (3)
15: end for\nwhere α is a balancing factor between the two losses. Jrl is standard A2C loss with cross-entropy regularizer added to encourage adversary’s exploration. The reward function for the Jrl is regret, estimated as the difference between expected performance of the best and average agents:\nREGRET(A|Ew) = RA −RP . (4)\nThe second loss term is budget loss,\nJbudget(θ | A, Ew) = RA ∗ N∑ i=1 log πθ(ai = SKIP|a0···i−1, b0,··· ,i−1). (5)\nWe use an environment difficulty objective to bind the adversary’s design budget to the performance of the best agent. We approximate the effective budget of the adversary as the expected number of non-SKIP actions over N time steps and update this budget according to whether the agents are learning. This objective encourages the adversary to use less budget (more SKIP actions) when the agents are not yet learning (i.e., RA is negative or low); it encourages the adversary to use more budget (less SKIP actions) when the navigator agents are performing well and collecting positive rewards in the environment. On the flip side, when the agents are collecting negative reward, the adversary is encouraged to decrease the budget and sample less design elements (and more SKIP actions)." }, { "heading": "4.3.2 LOSS FUNCTIONS DISCUSSION", "text": "Budget loss provides training signal for the adversary when the regret reward is sparse, which happens when all agents are performing very similarly, and it acts to encourage the adversary to decrease the difficulty of the environment. Consider the scenario where agents are placed on the home page of a shopping website where there are many possible elements, but only a single button that takes them to their account page. During exploration, agents mostly collect negative rewards for taking incorrect actions, bounded to a very narrow interval (as there is only a single optimal action). In this case, the regret is very small and sparse, which hinders the adversary’s ability to design environments at an appropriate difficulty for agents to learn. We approximate the environment difficulty and the effective budget of the adversary as the expected number of non-SKIP actions over N created elements, and update this budget according to whether the agents are learning.\nThe regret presented in Equation 4 contributes two subtle, but important changes over the prior work. First, the presented regret does not make a distinction between antagonist and protagonist agents, and instead annotates the best performing agent as the antagonist. As long as any agent has a higher performance than the other agent, the objective will continue to improve the weakest agent. During that time, the other agents continue learning, and therefore provide a stronger maximum performance against which we measure the regret. This is why we call it flexible. Second, the estimates for the protagonist returns are smoothed and computed as a best mean return, instead of\nmaximum of a pre-selected agent. While this might be further underestimating the regret, it provides a more stable estimate.\nNote that when the adversary creates non-trivial environments, REGRET is close to zero and the budget loss in Equation 5 dominates the RL loss. However, when the adversary creates trivial websites, the RL loss and regret encourage the adversary to explore towards environments that promote variation in performance. Those are less trivial, hence it pushes the adversary towards increasing the difficulty." 
}, { "heading": "5 EVALUATIONS", "text": "" }, { "heading": "5.1 EVALUATION SETUP", "text": "We evaluate our models on a variety of web environments implemented in gMiniWoB as well as MiniWoB frameworks (Shi et al., 2017; Liu et al., 2018). We implemented several challenging websites with varying difficulty levels using the same set of design primitives in gMiniWoB. These environments include ‘Login’, ‘Enter Address’, ‘Flight Booking’, ‘Enter Payment’, and ‘Shopping’ websites, where the agents need to enter text or select information in the website while navigating between pages. Each environment comes with 4 different difficulty levels by gradually adding more primitives to websites. These environments are never explicitly presented to agents during training, so performance in them measures how well agents can generalize to unseen websites at test time.\nAgent architecture: Following Gur et al. (2019), we utilize an LSTM based DOM tree encoder and a feed forward network to encode profile fields. The navigator agent policy outputs a joint distribution over elements and fields by measuring pairwise similarities between element encodings and profile fields. We compute the state-value by using the marginal distribution of elements as attention weights over element encodings and passing the context vector through a FF network. Web navigation agents are trained with an actor-critic algorithm (Liu et al., 2018). We train the LSTMbased adversary network using Flexible PAIRED and Flexible b-PAIRED with policy gradient (See Appendix A.12 for more details on adversary policy network).\nBaselines: We benchmark PAIRED, Flexible PAIRED, and Flexible b-PAIRED against two additional baselines. First, a Domain Randomization (DR) agent, which we implement using a similar approach as Dennis et al. (2020). We first sample the number of empty pages k from a uniform distribution U [0,K]. Next, we randomly sample a primitive (including SKIP), and a page from U [0, k] for N steps. Second, a Curriculum Learning (CL) approach, which adapts the scheduled curriculum idea of Gur et al. (2019) to zero-shot environment generation where we are not given a specific website but a set of design primitives. We randomly sample each primitive w.r.t. a probability p where p is initialized with a small number and scheduled to reach 1.0 during training." }, { "heading": "5.2 RESULTS", "text": "We first compare the original PAIRED algorithm (which used separate antagonist and protagonist agents) to the proposed Flexible PAIRED algorithm that annotates the best performing agent as the antagonist. Flexible PAIRED considerably improves upon PAIRED, which fails to learn in this environment (Figure 4). One reason is that when agents are separate and have very similar rewards, especially early during training, the regret becomes very small. This uninformative signal makes it difficult for the adversary to learn. On the other hand, Flexible PAIRED computes a consistently positive regret signal, which more clearly indicates to the adversary which environments are challenging, but still feasible. The further ablation studies show that adding budget improves performance for both flexible, and original PAIRED method (see Appendix A.7 for MiniWoB studies and Appendix A.8 for budget weight ablation).\nComparison on test environments: We evaluate the performance of the proposed models and baselines on task success rate computed across test environments with different difficulty levels. 
Flexible b-PAIRED outperforms Flexible PAIRED, indicating that the budget objective significantly improves performance (Figure 5). Further, both techniques significantly outperform the baseline models on all tasks, with Flexible b-PAIRED effectively reaching more than 75% task success across all difficulty levels. Even as the complexity of the environments continues to increase (see Section 5.2), Flexible b-PAIRED agents still perform consistently well without degrading performance. While CL outperforms Flexible PAIRED early in training, its performance drops significantly because it ignores the agents' skill level and creates environments that are too challenging for the agents to complete. We also observe that Flexible b-PAIRED learns faster than Flexible PAIRED on all environments, as Flexible b-PAIRED reacts to agents' performance faster (see Appendix A.6).\nEnvironment complexity: While agent performance improves over time, we would like to know whether agents are presented with more challenging environments over training. We estimate the number of active and passive primitives generated as a measure of environment complexity. Learning a web page with more passive primitives is a relatively easier task than a page with more active primitives, because passive primitives either add noise and should be ignored by the agents, or are used by agents only to navigate to another page. On the other hand, if there are more active primitives, not only will the size of the DOM tree increase, but the number of profile fields will also increase, making the matching between elements and the profile more challenging. Flexible b-PAIRED starts around 60% random selection of primitives, and gradually generates more active primitives early on (Figure 4f). Although presented with more active primitives by Flexible b-PAIRED, agents are still able to improve thanks to Flexible b-PAIRED's ability to accurately tune the difficulty of the environments according to the agents' skill. During training, more and more passive primitives are also introduced, while the number of active primitives keeps increasing (see Appendix A.4). We also observe that the distribution of the primitives shifts later in training towards more complex and relevant primitives (see Appendix A.3)." }, { "heading": "6 CONCLUSION", "text": "This work presents a novel technique for Adversarial Environment Generation (AEG), which we show improves significantly over prior work. In addition, we apply AEG to the problem of web navigation, and provide an open-source environment that enables learning to design complex websites out of a set of compositional primitives. Our Flexible b-PAIRED method is able to generate a curriculum of increasingly complicated websites, and successfully trains agents which can navigate challenging, high-dimensional websites." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROTAGONIST ANTAGONIST INDUCED REGRET ENVIRONMENT DESIGN (PAIRED)", "text": "Adversarial Environment Generation (AEG) trains an adversary policy π_E to design environments that minimize the performance of an agent's policy, π_P. Let R^P_i = Σ_{t=1}^{T} γ^t r^P_t be the total reward received by the agent for trajectory i. In minimax AEG, the objective for the adversary is simply −R^P. Thus, minimax adversaries are incentivized to create excessively difficult or impossible environments, which may not enable the agent to learn.
Instead, PAIRED (Dennis et al., 2020) trains the adversary to maximize the agent's regret, which is defined as the difference between the agent's return and the return of the optimal policy, R* − R^P. When the reward function includes an incentive to complete the task more efficiently (which is true in our case), the regret will be highest for easy tasks which could be completed in a few steps by the optimal policy, but which the current policy fails to complete. Therefore, an adversary that maximizes the regret will continue to propose easier tasks until the agent begins to solve them, making regret a desirable objective for AEG.\nTo estimate the regret, PAIRED introduces a third agent, the antagonist (with policy π_A), and constrains the adversary to only generate feasible environments which the antagonist can complete. When the adversary generates an environment E, both the protagonist and antagonist collect M trajectories with returns R^P_1, ..., R^P_M and R^A_1, ..., R^A_M in E. The regret is then estimated as:\nREGRET = max_i R^A_i − (1/M) Σ_{m=1}^{M} R^P_m. (6)\nAs Dennis et al. (2020) show, if the adversary and antagonist coordinate and reach a Nash equilibrium with the protagonist, then the protagonist will have learned to minimize the regret. However, in practice gradient-based multi-agent RL has no convergence guarantees, is highly non-stationary, and will often fail to converge (Mazumdar et al., 2019a;b). If the antagonist and adversary in PAIRED fail to coordinate, then PAIRED minimizes regret with respect to the antagonist's policy. In that case, the objective in Equation 6 only forces the protagonist to learn to be as good as the antagonist. If the antagonist fails to improve, or reaches a local optimum, then the adversary cannot continue to train the protagonist. In Section 4.3.1 we propose an improved objective which addresses this problem." }, { "heading": "A.2 TRAINING FLOW", "text": "In Figure 6, we illustrate the high-level workflow of AEG with the budget mechanism." }, { "heading": "A.3 DISTRIBUTION OF PRIMITIVES DURING TRAINING", "text": "During training, the distribution of primitives becomes more skewed towards active primitives early on (as shown in Figure 4f), but as the environments get more challenging, more passive primitives are introduced as well (Figure 7). What we observe from the histograms in Figure 7 is that new primitives are slowly introduced while the ranking of the primitives also changes slightly." }, { "heading": "A.4 ACTIVE AND PASSIVE PRIMITIVE FREQUENCIES", "text": "In Figure 8, we present the frequencies of active and passive primitives during training. With Flexible b-PAIRED, the number of both active and passive primitives increases, resulting in more complex websites." }, { "heading": "A.5 CREATING FULLY-SPECIFIED PRIMITIVES FROM UNDERSPECIFIED TEMPLATES", "text": "In Algorithm 2, we outline the process for generating a new fully-specified primitive from a given underspecified DOM template.\nAlgorithm 2 Generating a new fully-specified primitive from an underspecified primitive.
1: Input: D = (Dn, De): an underspecified DOM template, a sub-tree with elements Dn and edges De
2: Input: V ⊂ Dn: a list of elements that correspond to variables in Dn
3: Input: Av,i: a list of variable attributes Av,i for an element v ∈ Dn
4: for v ∈ V do   ▷ Iterate over variable elements.
5:   Flip a coin. If it is heads, Dn ← Dn \ {v}.   ▷ Add/remove a variable element.
6: end for
7: for v ∈ Dn do   ▷ Iterate over non-variable elements.
8:   for a ∈ Av,i do   ▷
Iterate over variable attributes for element v.
9:   Flip a coin. If it is heads, sample and assign a value for a.   ▷ Add/remove an attribute.
10:  end for
11:  If there is at least one variable attribute remaining for element v, Dn ← Dn \ {v}.
12: end for
(A Python sketch of this procedure is given after Section A.8.)" }, { "heading": "A.6 DETAILED RESULTS ON TEST ENVIRONMENTS", "text": "We detail the aggregated results of Figure 5 and present the performance of agents across tasks and difficulty levels (Figure 1). On the easiest level of tasks, CL achieves slightly lower performance than Flexible b-PAIRED early in training, but as the task difficulty increases, the gap becomes more apparent. We observe that the primitive distribution in Figure 7c and the task success rate results are consistent: late in training, the adversary focuses more on the 'Flight Booking'-related primitives, and its performance still strongly increases." }, { "heading": "A.7 RESULTS ON MINIWOB ENVIRONMENTS", "text": "In Table 2, we present results on MiniWoB form-filling tasks and compare them to gMiniWoB test tasks. MiniWoB tasks are independent of gMiniWoB and have completely unobserved DOM structures and labels. We load only the trained embedding layers from the final checkpoint, as there is no element dependency on these DOMs. We show that navigation agents trained with Flexible b-PAIRED are able to solve all MiniWoB tasks.\nCompared to gMiniWoB tasks, MiniWoB tasks are much simpler: their DOM and instruction sizes are up to 10 times smaller. They also have only a few input elements that the agent can interact with, while in gMiniWoB there are tens of input elements, making gMiniWoB a formidable benchmark. As an example, in the Shopping task, the sizes of the state and action spaces reach 5550 (the number of tokens for all attributes in all elements in a DOM) and 240 (the total number of element and instruction pairs), respectively.\nA.8 COMPARISON OF α IN BUDGET WEIGHTING\nIn Figure 9, we plot results where Flexible b-PAIRED is trained with different α weights. For α = 0.25, the performance drops substantially as the model gives more weight to the RL loss overall. In this work, we used α = 1.25 for our main results." },
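As promised after Algorithm 2 above, here is a minimal Python sketch of the primitive-generation procedure; the template representation (sets of elements plus a per-element attribute list) and the `sample_value` helper are illustrative assumptions rather than the authors' implementation.

```python
import random

def instantiate_primitive(elements, variables, var_attributes, sample_value):
    """Sketch of Algorithm 2: turn an underspecified DOM template into a
    fully-specified primitive. `elements` plays the role of Dn, `variables` of V,
    and var_attributes[v] lists the variable attributes of element v."""
    nodes = set(elements)
    for v in variables:                       # lines 4-6: keep/drop variable elements
        if random.random() < 0.5:
            nodes.discard(v)
    assigned = {}
    for v in sorted(nodes - set(variables)):  # lines 7-12: non-variable elements
        remaining = []
        for attr in var_attributes.get(v, []):
            if random.random() < 0.5:         # line 9: sample and assign a value
                assigned[(v, attr)] = sample_value(v, attr)
            else:
                remaining.append(attr)
        if remaining:                         # line 11: drop still-underspecified nodes
            nodes.discard(v)
    return nodes, assigned
```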
}, { "heading": "A.9 WEB ENVIRONMENT DESIGN PRIMITIVES", "text": "Design Primitives and Their Descriptions Design Primitive Design Template Active/Passive Description\naddressline1 input active Main address information addressline2 input active Secondary address information\ncabin multi-selection active Multiple cabin options captcha input active Captcha information\ncarousel carousel passive Items with images in a carousel with previous and next buttons\ncart cart passive Items in a product cart with promo code information\ncc multi-selection active Multiple credit card type options cccvv input active Credit card CVV information\nccexpdate input active Credit card expiration date information ccnumber input active Credit card number information city input active City address information dealmedia media passive Product media with image, label, and link\ndeck deck passive Multiple product decks with image, label, and link\ndepartureairport input active Departure airport information departuredate input active Departure date information\ndestinationairport input active Destination airport information destinationdate input active Destination date information\nfirstname input active First name information flighttype multi-selection active Multiple flight type options\nfooter1 footer passive Footer with links and information forgotpassword link passive Link with forgot password context forgotusername link passive Link with forgot username context\nfullname input active First and last name information header label passive Generic header\nheader login label passive Header for login form header select items label passive Header for item selection\ninpgroup1 input passive Generic input with default search context\nlastname input active Last name information navbar navigation bar passive A navigation bar with a menu\nnext checkout button passive Next button with checkout context next login button passive Next button with login context next login page button passive Next button with login context numberofpeople multi-selection active Multiple number of people options\npassword input active Password information rememberme selection active Checkbox with remember me con-\ntext state input active State information\nstayloggedin selection active Checkbox with stay logged in context\nsubmit button passive Submit button username input active Username information\nzipcode input active Zipcode information\nIn Table A.9, we present the list of design primitives, corresponding templates, types, and descriptions." }, { "heading": "A.10 LIST OF TEST ENVIRONMENTS", "text": "In Figure 11, we present screenshots of the testing environments with the hardest difficulty levels. While “Login”, “Enter Address”, “Enter Payment”, and “Flight Booking” are single page environments, “Shopping” is a multi-page environment where an agent needs to first navigate the home page and then solve “Login” and “Enter Address” tasks." }, { "heading": "A.11 EXAMPLE WEB PAGE DESIGNS", "text": "In Figure 12, we present more screenshots of generated pages by the adversary from including multipage websites. They cover a very broad spectrum of complexities and DOM tree structures. As an example, two web pages on the top right both have ”City”, ”CVV”, and ”Address” elements but with different orders. This allows the web navigation agents to observe a website in multiple different ways for better generalization.\nA.12 IMPLEMENTATION DETAILS ON WEB NAVIGATION AND ADVERSARY NETWORKS\nFollowing Gur et al. 
(2019), we design web navigation agent networks as DOM and profile encoders with pairwise similarity scoring. Each web navigation agent policy network has 104,501 parameters.\nIn Figure 13, we detail the adversary network architecture for a single design action with the parameters used in this work. We use 100-dimensional hidden vectors for all dense layers as well as for the LSTM network. Every dense layer is stacked twice, and the tanh activation function is applied to the output of all non-final dense layers. The total number of parameters of the adversary policy network is 152,461. (A sketch of this autoregressive design policy is given below.)" } ]
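To complement Section 4.2 and Appendix A.12, here is a minimal PyTorch sketch of the adversary's autoregressive design policy; the module names mirror f0, fK, fP, fL, and fI from the text, while the dimensions, the one-hot action encoding, and the sampling details are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Categorical

class Adversary(torch.nn.Module):
    """Sketch of the design policy of Section 4.2 (SKIP is one of the primitives)."""
    def __init__(self, n_primitives, max_pages, hidden=100):
        super().__init__()
        self.f0 = torch.nn.Linear(hidden, hidden)        # encodes the observation o^A
        self.fK = torch.nn.Linear(hidden, max_pages)     # distribution over empty pages
        self.lstm = torch.nn.LSTMCell(hidden, hidden)
        self.fP = torch.nn.Linear(hidden, n_primitives)  # primitive head
        self.fL = torch.nn.Linear(hidden, max_pages)     # location head
        self.fI = torch.nn.Linear(n_primitives + max_pages, hidden)  # action encoder

    def design(self, n_steps):
        h0 = torch.tanh(self.f0(torch.randn(1, self.f0.in_features)))  # o^A ~ N(0, I)
        k = Categorical(logits=self.fK(h0)).sample()     # number of empty pages
        h, c, inp, actions = h0, torch.zeros_like(h0), h0, []
        for _ in range(n_steps):                         # N autoregressive design steps
            h, c = self.lstm(inp, (h, c))
            prim = Categorical(logits=self.fP(h)).sample()
            loc = Categorical(logits=self.fL(h)).sample()
            actions.append((prim.item(), loc.item()))
            enc = torch.cat([F.one_hot(prim, self.fP.out_features).float(),
                             F.one_hot(loc, self.fL.out_features).float()], dim=-1)
            inp = torch.tanh(self.fI(enc))               # input to the next LSTM step
        return int(k.item()), actions                    # sent to the renderer module
```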
2020
null
SP:deb175b73241e3a04c2d2887934889508db4e39e
[ "This paper proposes to apply uncertainty-based measures to guide the collection of training samples for reading comprehension. The paper describes a relatively simple metric to estimate model uncertainty of unlabeled examples, and develops an algorithm to sample examples that exhibit least model certainty. They describe a learning and regularization model for this scenario and evaluate their proposal on SQuAD and NewsQA datasets." ]
Recent years have witnessed a surge of successful applications of machine reading comprehension. Of central importance to these tasks is the availability of a massive amount of labeled data, which facilitates the training of large-scale neural networks. However, in many real-world problems, annotated data are expensive to gather, not only because of time cost and budget, but also because of certain domain-specific restrictions, such as privacy for healthcare data. In this regard, we propose an uncertainty-based adaptive learning algorithm for reading comprehension, which interleaves data annotation and model updating to mitigate the demand for labeling. Our key techniques are two-fold: 1) an unsupervised uncertainty-based sampling scheme that queries the labels of the most informative instances with respect to the currently learned model; and 2) an adaptive loss minimization paradigm that simultaneously fits the data and controls the degree of model updating. We demonstrate on benchmark datasets that 25% fewer labeled samples suffice to guarantee similar, or even improved, performance. Our results provide strong evidence that, for label-demanding scenarios, the proposed approach offers a practical guide for data collection and model training.
[]
[ { "authors": [ "Naoki Abe", "Philip M Long" ], "title": "Associative reinforcement learning using linear probabilistic concepts", "venue": "In Proceedings of the Sixteenth International Conference on Machine Learning,", "year": 1999 }, { "authors": [ "David Arthur", "Sergei Vassilvitskii" ], "title": "k-means++: the advantages of careful seeding", "venue": "Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2007 }, { "authors": [ "Jordan T. Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Pranjal Awasthi", "Maria-Florina Balcan", "Philip M. Long" ], "title": "The power of localization for efficiently learning linear separators with noise", "venue": "In Symposium on Theory of Computing,", "year": 2017 }, { "authors": [ "Maria-Florina Balcan", "Philip M. Long" ], "title": "Active and passive learning of linear separators under log-concave distributions", "venue": "The 26th Annual Conference on Learning Theory,", "year": 2013 }, { "authors": [ "Maria-Florina Balcan", "Andrei Broder", "Tong Zhang" ], "title": "Margin based active learning", "venue": "In International Conference on Computational Learning Theory,", "year": 2007 }, { "authors": [ "Maria-Florina Balcan", "Alina Beygelzimer", "John Langford" ], "title": "Agnostic active learning", "venue": "J. Comput. Syst. Sci.,", "year": 2009 }, { "authors": [ "Aron Culotta", "Andrew McCallum" ], "title": "Reducing labeling effort for structured prediction tasks", "venue": "Proceedings of the 20th AAAI Conference on Artificial Intelligence,", "year": 2005 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime G. 
Carbonell", "Quoc Viet Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "In Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Adam Fisch", "Alon Talmor", "Robin Jia", "Minjoon Seo", "Eunsol Choi", "Danqi Chen" ], "title": "MRQA 2019 shared task: Evaluating generalization in reading comprehension", "venue": "In Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA),", "year": 2019 }, { "authors": [ "Snehal Gaikwad", "Durim Morina", "Rohit Nistala", "Megha Agarwal", "Alison Cossette", "Radhika Bhanu", "Saiph Savage", "Vishwajeet Narwal", "Karan Rajpal", "Jeff Regino" ], "title": "Daemo: A self-governed crowdsourcing marketplace", "venue": "In Adjunct proceedings of the 28th annual ACM symposium on user interface software & technology,", "year": 2015 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Deep active learning with a neural architecture search", "venue": "In Annual Conference on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Gissin", "Shai Shalev-Shwartz" ], "title": "Discriminative active learning", "venue": "arXiv preprint arXiv:1907.06347,", "year": 2019 }, { "authors": [ "Steve Hanneke" ], "title": "Theory of disagreement-based active learning", "venue": "Found. Trends Mach. Learn.,", "year": 2014 }, { "authors": [ "Lynette Hirschman", "Marc Light", "Eric Breck", "John D. Burger" ], "title": "Deep read: A reading comprehension system", "venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics,", "year": 1999 }, { "authors": [ "Ajay J. Joshi", "Fatih Porikli", "Nikolaos Papanikolopoulos" ], "title": "Multi-class active learning for image classification", "venue": "In Proceedings of the 2009th IEEE Computer Society Conference on Computer Vision and Pattern,", "year": 2009 }, { "authors": [ "Hao Karen" ], "title": "10 breakthrough technologies 2019", "venue": "MIT Technology Review,", "year": 2019 }, { "authors": [ "Tom Kwiatkowski", "Jennimaria Palomaki", "Olivia Redfield", "Michael Collins", "Ankur Parikh", "Chris Alberti", "Danielle Epstein", "Illia Polosukhin", "Matthew Kelcey", "Jacob Devlin", "Kenton Lee", "Kristina N. 
Toutanova", "Llion Jones", "Ming-Wei Chang", "Andrew Dai", "Jakob Uszkoreit", "Quoc Le", "Slav Petrov" ], "title": "Natural questions: a benchmark for question answering research", "venue": "Transactions of the Association of Computational Linguistics,", "year": 2019 }, { "authors": [ "Xiao Lin", "Devi Parikh" ], "title": "Active learning for visual question answering: An empirical study", "venue": "CoRR, abs/1711.01732,", "year": 2017 }, { "authors": [ "Tri Nguyen", "Mir Rosenberg", "Xia Song", "Jianfeng Gao", "Saurabh Tiwary", "Rangan Majumder", "Li Deng" ], "title": "MS MARCO: A human generated machine reading comprehension dataset", "venue": "In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches,", "year": 2016 }, { "authors": [ "Matthew E. Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding with unsupervised learning", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100, 000+ questions for machine comprehension of text", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Robin Jia", "Percy Liang" ], "title": "Know what you don’t know: Unanswerable questions for squad", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Siva Reddy", "Danqi Chen", "Christopher D. Manning" ], "title": "Coqa: A conversational question answering challenge", "venue": "Trans. Assoc. Comput. Linguistics,", "year": 2019 }, { "authors": [ "Roi Reichart", "Katrin Tomanek", "Udo Hahn", "Ari Rappoport" ], "title": "Multi-task active learning for linguistic annotations", "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics,", "year": 2008 }, { "authors": [ "Dan Roth", "Kevin Small" ], "title": "Margin-based active learning for structured output spaces", "venue": "In Proceedings of the 17th European Conference on Machine Learning,", "year": 2006 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Dan Shen", "Jie Zhang", "Jian Su", "Guodong Zhou", "Chew-Lim Tan" ], "title": "Multi-criteria-based active learning for named entity recognition", "venue": "In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics,", "year": 2004 }, { "authors": [ "Yanyao Shen", "Hyokun Yun", "Zachary C. 
Lipton", "Yakov Kronrod", "Animashree Anandkumar" ], "title": "Deep active learning for named entity recognition", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman" ], "title": "Newsqa: A machine comprehension dataset", "venue": "In Proceedings of the 2nd Workshop on Representation Learning for NLP,", "year": 2017 }, { "authors": [ "Gökhan Tür", "Dilek Hakkani-Tür", "Robert E. Schapire" ], "title": "Combining active and semi-supervised learning for spoken language understanding", "venue": "Speech Commun.,", "year": 2005 }, { "authors": [ "Leslie G. Valiant" ], "title": "A theory of the learnable", "venue": "Proceedings of the 16th Annual ACM Symposium on Theory of Computing,", "year": 1984 }, { "authors": [ "Dan Wang", "Yi Shang" ], "title": "A new active labeling method for deep learning", "venue": "In Proceedings of the 2014 International Joint Conference on Neural Networks,", "year": 2014 }, { "authors": [ "Keze Wang", "Dongyu Zhang", "Ya Li", "Ruimao Zhang", "Liang Lin" ], "title": "Cost-effective active learning for deep image classification", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2016 }, { "authors": [ "Zhiguo Wang", "Patrick Ng", "Xiaofei Ma", "Ramesh Nallapati", "Bing Xiang" ], "title": "Multi-passage BERT: A globally normalized BERT model for open-domain question answering", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime G. Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Xiang Yue", "Bernal Jimenez Gutierrez", "Huan Sun" ], "title": "Clinical reading comprehension: A thorough analysis of the emrqa dataset", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Chicheng Zhang" ], "title": "Efficient active learning of sparse halfspaces", "venue": "In The 31th Annual Conference on Learning Theory,", "year": 2018 }, { "authors": [ "Chicheng Zhang", "Jie Shen", "Pranjal Awasthi" ], "title": "Efficient active learning of sparse halfspaces with arbitrary bounded noise", "venue": null, "year": 2002 }, { "authors": [ "Ye Zhang", "Matthew Lease", "Byron C Wallace" ], "title": "Active discriminative text representation learning", "venue": "In Proceedings of the 31st AAAI Conference on Artificial Intelligence,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The goal of machine reading comprehension (MRC) is to train an AI model which is able to understand natural language text (e.g. a passage), and answer questions related to it (Hirschman et al., 1999); see Figure 1 for an example. MRC has been one of the most important problems in natural language processing thanks to its various successful applications, such as smooth-talking AI speaker assistants – a technology that was highlighted as among 10 breakthrough technologies by MIT Technology Review very recently (Karen, 2019).\nOf central importance to MRC is the availability of benchmarking question-answering datasets, where a larger dataset often enables training of a more informative neural networks. In this regard, there have been a number of benchmark datasets proposed in recent years, with the efforts of pushing forward the development of MRC. A partial list includes SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017), MSMARCO (Nguyen et al., 2016), and Natural Questions (Kwiatkowski et al., 2019). While the emergence of these high-quality datasets have stimulated a surge of research and have witness a large volume of deployments of MRC, it is often challenging to go beyond the scale of the current architectures of neural networks, in that it is extremely expensive to obtain massive amount of labeled data. The barrier of data collection can be seen from SQuAD: the research group at Standford University spent 1,547 working hours for the annotation of SQuAD dataset, with the cost over $14,000. This issue was also set out and addressed by AI companies. However, even equipped with machine learning assisted labeling tools (e.g. Amazon SageMaker Ground Truth), it is still expensive to hire and educate expert workers for annotation. What makes the issue more serious is that there is a rise of security and privacy concerns in various problems, which prevents researchers from scaling their projects to diverse domains efficiently. For example, all annotators are advised to get a series of training about privacy rules, such as Health Insurance Portability & Accountability Act, before they can work on the medical records.\nIn this work, we tackle the challenge by proposing a computationally efficient learning algorithm that is amenable for label-demanding problems. Unlike prior MRC methods that separate data annotation and model training, our algorithm interleaves these two phases. Our algorithm, in spirit,\nresembles the theme of active learning (Balcan et al., 2007), where the promise of active learning is that we can always concentrate on fitting only the most informative examples without suffering a degraded performance. While there have been a considerable number of works showing that active learning often guarantees exponential savings of labels, the analysis holds typically for linear classification models Awasthi et al. (2017); Zhang (2018); Zhang et al. (2020). In stark contrast, less is explored for the more practical neural network based models since it is nontrivial to extend important concepts such as large margin of linear classifiers to neural networks. As a remedy, we consider an unsupervised sampling scheme based on the uncertainty of the instances (Settles, 2009). Our sampling scheme is adaptive (i.e. active) in the sense that it chooses instances that the currently learned model is most uncertain on. 
To this end, we recall that the purpose of MRC is to take as input a passage and a question, and find the most accurate answer from the passage. Roughly speaking, this can be thought of as a weight assignment problem, where we need to calculate how likely each word span in the passage is to be the correct answer. Ideally, we would hope that the algorithm assigns 1 to the correct answer and 0 to the rest, leading to a large separation between the correct answer and the incorrect ones. Alternatively, if the algorithm assigns, say, 0.5 to two different answers and 0 to the others, then it is very uncertain about its response – this is a strong signal that we should query an expert for the correct answer, i.e. perform active labeling. Our uncertainty-based sampling scheme is essentially motivated by this observation: the uncertainty of an instance (i.e. a pair of passage and question) is defined as the gap between the weight of the best candidate answer and the second best. We will present a more formal description in Section 2.\nAfter identifying these most uncertain, and hence most informative, instances, we query their labels and use them to update the model. In this phase, in addition to minimizing the widely used entropy-based loss function, we consider an adaptive regularizer which has two important properties. First, it enforces that the new model does not deviate far from the current model, since 1) with reasonable initialization we would expect the initial model to perform not too badly; and 2) we do not want to overfit the data even if they are recognized as informative. Second, the regularizer has a coefficient that increases with iterations. Namely, as the algorithm proceeds, the stability of model updating outweighs loss minimization. In Section 2 we elaborate on the concrete form of our objective function. It is also worth mentioning that since in each iteration the algorithm only fits the uncertain instances, model updating is faster than in traditional methods.\nThe pipeline is illustrated in Figure 2. Given abundant unlabeled instances, our algorithm first evaluates their uncertainty and detects the most informative ones, marked in red. Then we send these instances to an expert to obtain the groundtruth answers, marked in yellow. With the newly added labeled samples, it is possible to perform incremental updating of the MRC model.\nRoadmap. We summarize our main technical contributions below, and discuss more related works in Section 5. In Section 2 we present a detailed description of the core components of our algorithm, and in Section 3 we provide an end-to-end learning paradigm for MRC with implementation details. In Section 4, we demonstrate the efficacy of our algorithm in terms of exact match, F1 score, and the savings of labels. Finally, we conclude this paper in Section 6." }, { "heading": "1.1 SUMMARY OF CONTRIBUTIONS", "text": "We consider the problem of learning an MRC model in the label-demanding context, and we propose a novel algorithm that interleaves data annotation and model updating. In particular, there are two core components to this end: an unsupervised uncertainty-based sampling scheme that only queries labels of the most informative instances with respect to the currently learned model, and
Moreover, our approach is modular in nature, meaning that the community would benefit from this work by leveraging our techniques into more real-world problems (e.g. image classification) where the availability of labels is a major concern." }, { "heading": "2 ALGORITHM", "text": "In this section, we formally introduce the problem setup and our main algorithm ALBUS (Algorithm 1). We use x := (p, q) to represent a pair of passage p and question q, which is also called an unlabeled instance, or simply an instance. If there are multiple questions, say q1, q2, to a same passage p, we will use two instances x1 := (p, q1) and x2 := (p, q2). Given an instance x, our goal is to predict an answer. We use a zero-one vector a to indicate the correct answer, and (x,a) is called a labeled instance. The prediction made by the learner is denoted by â. We will always assume that all the coordinates of â are non-negative, and their sum equals one, which can be easily satisfied if the last layer of the neural network is softmax." }, { "heading": "2.1 UNSUPERVISED UNCERTAINTY-BASED RANDOM SAMPLING", "text": "Since data annotation is expensive, we treat the problem as such that all the instances are unlabeled before running the algorithm, and as the algorithm proceeds, it may adaptively detects the most informative instances to be labeled by experts or crowd workers. Thus, the central questions to learning are: 1) how to measure the informativeness of the unlabeled instances in a computationally efficient manner; and 2) how to select a manageable number of instances for annotation (since the algorithm might identify bunch of useful instances). We address both questions in the following." }, { "heading": "2.1.1 METRIC OF INFORMATIVENESS", "text": "Intuition. We first address the first question, i.e. design a metric to evaluate the informativeness. To ease the discussion, suppose that for a given instance x, there are only two answers to choose from, i.e. a is a two-dimensional vector, and that the algorithm has been initialized, e.g. via pre-training. If the current model takes as input x, and predicts â = (1, 0), then we think of this instance as less informative, in that the algorithm has an extremely high confidence on its prediction.1 On the other spectrum, if the prediction â = (0.5, 0.5), then it indicates that the current model is not able to distinguish the two answers. Thus, sending the correct answer a together with the instance to the algorithm will lead to significant progress.\nWe observe that underlying the intuition is a notion of separation between the answer with highest confidence and that with second highest, denoted by ∆w(x), where w denotes the current model\n1The algorithm may of course make a mistake, but this will be treated by future model updating. Here we are just giving an intuitive explanation following the idealized scenario.\nAlgorithm 1 ALBUS: Adaptive Learning By Uncertainty-Based Sampling Require: a set of unlabeled instances U = {x1, . . . ,xn}, initial MRC model w0, maximum itera-\ntion number T , thresholds {τ1, . . . , τT }, number of instances to be labeled n0. Ensure: A new MRC model wT .\n1: U1 ← U . 2: for t = 1, · · · , T do 3: Compute ∆wt−1(x) for all x ∈ Ut. 4: Bt ← {x ∈ Ut : ∆wt−1(x) ≤ τt}. 5: Compute the sampling probability Pr(x) for all x ∈ Bt. 6: St ← randomly choose n0 instances from Bt by the distribution {Pr(x)}x∈Bt , and query their labels. 7: Update the model wt ← arg minw L(w;St). 8: Ut+1 ← Ut\\St. 9: end for\nparameters. 
In fact, let our algorithm be a function f_w : x ↦ â. Denote by â^(1) and â^(2) the highest and second highest values in â. Then\n∆_w(x) = â^(1) − â^(2). (1)\nGiven the unlabeled training set {x1, x2, ..., xn} and the currently learned model, we can evaluate the degrees of separation {∆1, ∆2, ..., ∆n}, where we write ∆i := ∆_w(xi) to reduce notation clutter, since most of the time the model w is clear from the context. This answers the first question posed at the beginning of the section, i.e. how to measure the informativeness of the instances." }, { "heading": "2.1.2 UNCERTAINTY-BASED SAMPLING", "text": "It remains to design a mechanism so that we can gather a manageable number of instances to be labeled. A natural approach would be to specify a maximum number n0, so that in each iteration the algorithm chooses at most the n0 instances with the lowest degree of separation. Yet, we observe that in some marginal cases, many instances have very close ∆i, e.g. ∆1 = 0.101 and ∆2 = 0.102. Using the above strategy may annotate x1 while throwing away x2. From a practical perspective, however, we hope both instances have a chance to be selected, to increase diversity. Hence, we consider a "soft" approach based on random sampling in this paper.\nFix an iteration t of the algorithm. First, we define a threshold τt ∈ (0, 1]. Based on the current model w_{t−1}, we calculate ∆1, ..., ∆n. Then we obtain a sampling region\nBt := {xi : ∆i ≤ τt}, (2)\nwhich contains the informative instances (recall that a lower degree of separation implies more informative). Inspired by the probability selection scheme of Abe & Long (1999), we define the sampling probability as\nPr(x) = 1 / (|Bt| + γ(∆_x − ∆_{x*})) for all x ∈ Bt \ {x*}, and Pr(x*) = 1 − Σ_{x′ ≠ x*} 1 / (|Bt| + γ(∆_{x′} − ∆_{x*})). (3)\nIn the above expression, x* is the instance in Bt with the lowest degree of separation, i.e. the most uncertain instance, and γ ≥ 0 is a tunable hyper-parameter. Observe that if γ = 0, this becomes uniform sampling. In addition, in view of the sampling probability in (3), any instance x ≠ x* will be sampled with probability at most 1/|Bt|, and x* is sampled with probability at least 1/|Bt|, since\nPr(x*) ≥ 1 − Σ_{x′ ≠ x*} 1/|Bt| = 1 − (|Bt| − 1)/|Bt| = 1/|Bt|. (4)\nTherefore, the sampling scheme always guarantees that x* will be selected with the highest probability, and if needed, it is possible to make this probability close to 1 by increasing γ. In our algorithm, we set γ = Θ(√|Bt|), which works well in practice." }, { "heading": "2.2 ADAPTIVE LOSS MINIMIZATION", "text": "Another crucial component of ALBUS is loss minimization. Here our novelty is the introduction of an adaptive regularizer that balances the progress of model updating and per-iteration data fitting.\nLet St be the set of labeled instances determined by our random sampling method at the t-th iteration. For any (x, a) ∈ St, since a is an indicator vector, the problem can be thought of as multi-class classification. Therefore, a typical choice of sample-wise loss function is the logistic loss, denoted by ℓ(w; x, a), which can be easily implemented by using a softmax layer in the neural network. On top of the logistic loss, we also consider an adaptive ℓ2-norm regularizer, which gives the following objective function:\nL(w; St) := (1/|St|) Σ_{(x,a) ∈ St} ℓ(w; x, a) + (λt/2) ‖w − w_{t−1}‖². (5)\nDifferent from the broadly utilized ℓ2-norm regularizer ‖w‖², we appeal to a localized form, in the sense that the objective function pushes the updated model to be close to the current model w_{t−1} under the Euclidean distance.
This is motivated by the fact that in many cases, warm-starting the algorithm with pre-training often exhibits favorable performance. Hence, though we want the model to adapt to the new dataset, we carefully control the progress of model updating so that it does not deviate far from the current one.\nRegarding the coefficient λt, we increase it by a constant factor greater than one in each iteration. Therefore, as the algorithm proceeds, the localization property plays a more important role than the logistic loss. Our treatment is inspired by the active learning literature, where a similar localized ℓ2-norm constraint is imposed (Balcan et al., 2007; Zhang et al., 2020). This can be viewed as a stability property of our algorithm, and we find that it works very well on benchmark datasets." }, { "heading": "3 IMPLEMENTATION DETAILS", "text": "Uncertainty-based sampling. We introduce how to select the batch St in each iteration with the current MRC model w_{t−1}. For a given pair (p, q), an answer takes the form of a word span from the i-th position to the j-th position of the passage. Given the span (i, j) and the passage p, we use BERT (Devlin et al., 2019) as our embedding method, which produces a feature description denoted by Ep(i, j). We then construct a probability matrix M̂ whose (i, j)-th entry M̂_{i,j} is given by:\nM̂_{i,j} = exp(w_{t−1} · Ep(i, j)) / Σ_{i′,j′} exp(w_{t−1} · Ep(i′, j′)). (6)\nObserve that the matrix M̂ forms a distribution over all possible word spans, i.e. all possible answers. It is then straightforward to convert M̂ into the vector â, for example by concatenating all the columns. Based on the obtained answer â, we are able to perform uncertainty-based sampling as discussed in Section 2.\nAdaptive loss minimization. We already derived the probability matrix M̂ in (6). During loss minimization, i.e. supervised fine-tuning, we aim to update w_{t−1} by minimizing L(w; St). Since we have clarified the regularizer, it suffices to give the detailed form of the loss ℓ(w; x, a), where we recall that x = (p, q). Note that using the groundtruth answer a, we know the correct span (ia, ja) for question q. Thus, the likelihood of observing St is\nPr(St) = Π_{(p,q,a) ∈ St} exp(w · Ep(ia, ja)) / Σ_{i′,j′} exp(w · Ep(i′, j′)), (7)\nand the loss is simply the negative log-likelihood." }, { "heading": "4 EXPERIMENTS", "text": "Datasets. We focus on span-based datasets, namely the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017). SQuAD consists of over 100k questions posed by crowdworkers on a set of 536 Wikipedia articles. We use the original split: 87,599 questions for training, and we test on the 10,570 questions. NewsQA is a machine comprehension dataset of over 100k human-generated question-answer pairs from over 10k news articles from CNN. The dataset consists of 74,160 questions for training and 4,212 questions for validation [2].\nEvaluation Metrics. We use two metrics: Exact Match (EM) and F1 score. EM measures the percentage of predictions that exactly match any one of the annotated (gold) answers. F1 score measures the average overlap between the prediction and the annotated answer.\nBaselines.
We compare against the following baseline algorithms:\n• Badge (batch-based sampling) (Ash et al., 2020): it learns gradient embeddings of samples and selects a set of samples by k-MEANS++ (Arthur & Vassilvitskii, 2007).\n• Conf (confidence sampling) (Wang & Shang, 2014): an uncertainty-based algorithm that selects samples with the lowest class probability.\n• Entropy (Wang & Shang, 2014): it selects samples based on the entropy of the predicted probability distribution.\n• Marg (margin-based sampling) (Roth & Small, 2006): it also checks the degree of separation as our algorithm does, but selects the n0 lowest rather than performing random sampling as we do.\n• Rand (random sampling): the naive baseline of uniformly randomly selecting samples from the unlabeled set.\nOther Settings. To ensure a comprehensive comparison among state-of-the-art approaches, we simulate the annotation process with human experts in the loop by selecting a fixed number of examples n0 from the training set in each iteration and querying their labels (we set n0 = 2,000 for SQuAD and n0 = 5,000 for NewsQA). The labeled data is used to update the MRC model. We report the exact match and F1 score against the number of iterations. BERT-base is used as the pretrained model and fine-tuned for 2 epochs with a learning rate of 3e−5 and a batch size of 12 [3]. The MRC model is initialized with 1,000 labeled samples for SQuAD and 10,000 for NewsQA. The parameter τ0 is tuned over the range [0.01, 0.1] on the training set and decreases at the rate of 1.1.\nResults. Figure 3 and Figure 4 present EM and F1 score as the number of labeled samples selected by the various active learning algorithms increases. We show the results with all labeled data (Figure 3(a) and Figure 4(a)) and with 20,000 labeled samples (Figure 3(b) and Figure 4(b)). Our algorithm outperforms state-of-the-art active learning algorithms in almost all cases.\nTable 1 lists detailed results for specific numbers of labeled samples. Our algorithm reaches the best performance in all cases, and the advantage is significant especially when only a small subset of labeled samples is available. For example, with 5,000 labeled examples, our algorithm reaches an EM of 64.07% while the best of the compared algorithms reaches 62.71%. Figure 3 and Figure 4 plot the trends of EM and F1 score as the number of labeled examples grows on the SQuAD dataset. We observe that, compared with Rand, all active learning algorithms reach their best performance before accessing all labeled data. This demonstrates that active learning effectively reduces the amount of labeled data required by the learning process. Specifically, our algorithm reaches an EM of 80.44% and an F1 score of 88.53% with 61,000 queries, which is close to the best result but uses 25% fewer labeled samples. We observe the same advantage of our algorithm on the NewsQA dataset, as shown in Figure 5.\n[2] https://github.com/mrqa/MRQA-Shared-Task-2019 [3] https://github.com/huggingface/transformers/tree/master/examples/question-answering" }, { "heading": "5 RELATED WORKS", "text": "Machine Reading Comprehension. MRC is the ability to read text and answer questions about it. It is a challenging task, as it requires the ability to understand both the questions and the context. A data-driven approach to reading comprehension goes back to Hirschman et al. (1999). A number of works have proposed datasets for MRC (Rajpurkar et al., 2016; 2018; Kwiatkowski et al., 2019; Reddy et al., 2019).
For example, the Stanford Question Answering Dataset (SQuAD) consists of 100K questions on a set of Wikipedia articles (Rajpurkar et al., 2016). The Natural Questions dataset (Kwiatkowski et al., 2019) consists of queries issued to the Google search engine, together with the corresponding Wikipedia page and long and short answers.\nRecently, researchers have devoted much effort to unsupervised deep learning frameworks that learn word representations from large amounts of unlabeled data and can be simply fine-tuned for multiple downstream tasks. For example, ELMo (Peters et al., 2018) learned forward and backward language models: the forward one reads the text from left to right, and the other encodes the text from right to left. GPT (Radford et al., 2018) used a left-to-right Transformer to predict a text sequence word-by-word. Devlin et al. (2019) designed BERT to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. Several follow-up works aim to improve the framework of BERT for different language modeling tasks (Yang et al., 2019; Dai et al., 2019; Dong et al., 2019). BERT significantly improves the performance of natural language understanding tasks. Our work is based on a pretrained BERT model. We fine-tuned BERT with one additional output layer to create the model for the reading comprehension task, following Devlin et al. (2019).\nAnother direction in reading comprehension is to explore different real-world settings. For example, in open-domain reading comprehension, the passage that contains the answer is not provided but requires retrieval from a knowledge pool (Wang et al., 2019). Yue et al. (2020) considered language understanding of clinical data. This work considers the general reading comprehension setting in which the question and the related passage are provided.\nActive Learning. Active learning is a machine learning paradigm that mainly aims at reducing label requirements through interacting with an oracle (experts/annotators) (Settles, 2009).\nActive learning has been well studied in both theory and applications. One popular branch of theoretical works follows the PAC active learning setting: researchers aim to reduce the label complexity, i.e. the number of label queries, while guaranteeing, with high probability, an error bound for the produced halfspace (Valiant, 1984; Balcan et al., 2009; Hanneke, 2014). There are some exciting works focusing on active halfspace learning, such as margin-based active learning (Balcan et al., 2007; Balcan & Long, 2013). These approaches learn a classifier from the class of homogeneous linear classifiers to predict labels from instances.\nExisting active learning approaches can be roughly divided into uncertainty-based sampling and representative sampling (Settles, 2009). Representative-sampling-based approaches select samples that are representative of the whole unlabeled dataset. This can be achieved by performing an optimization that minimizes the difference between the selected subset and the global dataset (Sener & Savarese, 2018; Gissin & Shalev-Shwartz, 2019). Uncertainty-sampling-based algorithms select samples that maximally reduce the uncertainty the algorithm has about the target learning model, such as samples lying closest to the current decision boundary (Tür et al., 2005).
The work in this paper belongs to uncertainty-based sampling, but it uses a novel sampling scheme tailored to MRC.\nActive learning has shown outstanding performance in real-world applications, such as computer vision (Joshi et al., 2009) and natural language processing (Culotta & McCallum, 2005; Reichart et al., 2008). For example, Shen et al. (2004) combined multiple criteria of active learning for named entity recognition. Recent studies combining deep neural networks and active learning have also been proposed (Wang et al., 2016; Zhang et al., 2017; Shen et al., 2018; Geifman & El-Yaniv, 2019). However, these approaches do not consider the correlation between the adaptively learned models and the selected samples. There is some related work on active learning for visual question answering (Lin & Parikh, 2017). However, little is known about active learning for machine reading comprehension." }, { "heading": "6 CONCLUSION AND FUTURE WORKS", "text": "In this work, we have proposed a novel adaptive learning algorithm for the reading comprehension task. There are two crucial components in our algorithm: an unsupervised uncertainty-based random sampling scheme and a localized loss minimization paradigm, both of which are adaptive to the currently learned model. We have described the strong motivation for using these techniques, and our empirical study serves as clear evidence that our algorithm drastically mitigates the demand for labels on large-scale datasets. We highlight that our approach is not essentially tied to MRC, and we expect that it can be extended to other label-demanding problems in natural language processing and image classification." } ]
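To connect Sections 2.1.2 and 3 of the paper above, the following sketch computes the separation score of Equation 1 from a span-probability matrix like Equation 6, together with the soft sampling distribution of Equation 3; setting γ = √|Bt| follows the Θ(√|Bt|) choice reported in Section 2.1.2, and everything else is illustrative.

```python
import numpy as np

def separation(span_probs):
    """Eq. (1): gap between the best and second-best answer probabilities,
    where `span_probs` is the (flattened) matrix M-hat of Eq. (6)."""
    top2 = np.sort(span_probs.ravel())[-2:]
    return float(top2[1] - top2[0])

def sampling_probs(deltas, gamma=None):
    """Eq. (3): soft sampling distribution over the region B_t."""
    n = len(deltas)
    if gamma is None:
        gamma = np.sqrt(n)                    # gamma = Theta(sqrt(|B_t|))
    star = int(np.argmin(deltas))             # x*: the most uncertain instance
    p = 1.0 / (n + gamma * (deltas - deltas[star]))
    p[star] = 1.0 - (p.sum() - p[star])       # remaining mass goes to x* (Eq. 3)
    return p
```

As Equation 4 promises, the mass assigned to x* is at least 1/|Bt|, and increasing γ pushes it toward 1.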
2020
null
SP:53fbf29aa3f60001c2fc4f1a9bb797ebc9ceb986
[ "The authors prove several statements about the expressiveness of different classes of graph neural nets (GNNs): conventional message passing networks, linear GNNs (LGNN) and “folklore GNNs” (FGNN). The novel theoretical contributions include analysis of expressiveness of FGNNs that use tensors of arbitrary order in terms of comparison to the Weisfeiler-Lehman tests; a characterization of the functions that these classes of networks can approximate; universality of FGNN as the tensor order goes to infinity. The results are based on a general Stone-Weierstrass-like theorem for equivariant functions. Prior universality results can be recovered as special cases. The authors have a simple experiment that show in a limited setting that a practical implementation agrees with the theory." ]
Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph structured data. In this paper, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. Our theoretical results are proved for invariant GNNs computing a graph embedding (permutation of the nodes of the input graph does not affect the output) and equivariant GNNs computing an embedding of the nodes (permutation of the input permutes the output). We show that Folklore Graph Neural Networks (FGNN), which are tensor-based GNNs augmented with matrix multiplication, are the most expressive architectures proposed so far for a given tensor order. We illustrate our results on the Quadratic Assignment Problem (an NP-hard combinatorial problem) by showing that FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms (based on spectral, SDP, or other GNN architectures). On the practical side, we also implement masked tensors to handle batches of graphs of varying sizes.
[ { "affiliations": [], "name": "Waïss Azizian" } ]
[ { "authors": [ "J.-Y. Cai", "M. Furer", "N. Immerman" ], "title": "An optimal lower bound on the number of variables for graph identification", "venue": "In Proceedings of the 30th Annual Symposium on Foundations of Computer Science,", "year": 1989 }, { "authors": [ "Zhengdao Chen", "Soledad Villar", "Lei Chen", "Joan Bruna" ], "title": "On the equivalence between graph isomorphism testing and function approximation with gnns", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Zhengdao Chen", "Lei Chen", "Soledad Villar", "Joan Bruna" ], "title": "Can graph neural networks count substructures", "venue": null, "year": 2002 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Brendan L Douglas" ], "title": "The weisfeiler-lehman method and graph isomorphism testing", "venue": "arXiv preprint arXiv:1101.5211,", "year": 2011 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks", "venue": "arXiv preprint arXiv:2003.00982,", "year": 2020 }, { "authors": [ "Martin Fürer" ], "title": "On the combinatorial power of the weisfeiler-lehman algorithm", "venue": "In International Conference on Algorithms and Complexity,", "year": 2017 }, { "authors": [ "Martin Fürer" ], "title": "On the power of combinatorial and spectral invariants", "venue": "Linear Algebra and its Applications,", "year": 2010 }, { "authors": [ "Vikas K Garg", "Stefanie Jegelka", "Tommi Jaakkola" ], "title": "Generalization and representational limits of graph neural networks", "venue": "arXiv preprint arXiv:2002.06157,", "year": 2020 }, { "authors": [ "Floris Geerts" ], "title": "The expressive power of kth-order invariant graph networks", "venue": "arXiv preprint arXiv:2007.12035,", "year": 2020 }, { "authors": [ "Floris Geerts" ], "title": "Walk message passing neural networks and second-order graph neural networks", "venue": "arXiv preprint arXiv:2006.09499,", "year": 2020 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Martin Grohe" ], "title": "Descriptive Complexity, Canonisation, and Definable Graph Structure Theory. 
Lecture Notes in Logic", "venue": null, "year": 2017 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Nicolas Keriven", "Gabriel Peyré" ], "title": "Universal invariant and equivariant graph neural networks", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Janek Groß", "Stephan Günnemann" ], "title": "Directional message passing for molecular graphs", "venue": "arXiv preprint arXiv:2003.03123,", "year": 2020 }, { "authors": [ "Andreas Loukas" ], "title": "What graph neural networks cannot learn: depth vs width", "venue": "arXiv preprint arXiv:1907.03199,", "year": 2019 }, { "authors": [ "Takanori Maehara", "NT Hoang" ], "title": "A simple proof of the universality of invariant/equivariant graph neural networks. ArXiv", "venue": null, "year": 1910 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "arXiv preprint arXiv:1812.09902,", "year": 2018 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haggai Maron", "Ethan Fetaya", "Nimrod Segol", "Yaron Lipman" ], "title": "On the universality of invariant networks", "venue": "arXiv preprint arXiv:1901.09342,", "year": 2019 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of symmetric elements", "venue": null, "year": 2002 }, { "authors": [ "J.R. Munkres" ], "title": "Topology. Featured Titles for Topology", "venue": null, "year": 2000 }, { "authors": [ "Ryan Murphy", "Balasubramaniam Srinivasan", "Vinayak Rao", "Bruno Ribeiro" ], "title": "Relational pooling for graph representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ryan L. Murphy", "Balasubramaniam Srinivasan", "Vinayak A. Rao", "Bruno Ribeiro" ], "title": "Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alex Nowak", "Soledad Villar", "Afonso S. Bandeira", "Joan Bruna" ], "title": "Revised note on learning quadratic assignment with graph neural networks", "venue": "IEEE Data Science Workshop (DSW),", "year": 2018 }, { "authors": [ "Jiming Peng", "Hans D. Mittelmann", "Xiaoxue Li" ], "title": "A new relaxation framework for quadratic assignment problems based on matrix splitting", "venue": "Mathematical Programming Computation,", "year": 2010 }, { "authors": [ "Charles Ruizhongtai Qi", "Hao Su", "Kaichun Mo", "Leonidas J. Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Siamak Ravanbakhsh" ], "title": "Universal equivariant multilayer perceptrons", "venue": "arXiv preprint arXiv:2002.02912,", "year": 2020 }, { "authors": [ "W. 
Rudin" ], "title": "Functional Analysis. International series in pure and applied mathematics", "venue": null, "year": 1991 }, { "authors": [ "Ryoma Sato", "Makoto Yamada", "Hisashi Kashima" ], "title": "Approximation ratios of graph neural networks for combinatorial problems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "Computational capabilities of graph neural networks", "venue": "IEEE Trans. Neural Networks,", "year": 2009 }, { "authors": [ "Nimrod Segol", "Yaron Lipman" ], "title": "On universal equivariant set networks", "venue": "ArXiv, abs/1910.02421,", "year": 2020 }, { "authors": [ "Vlad Timofte" ], "title": "Stone–weierstrass theorems revisited", "venue": "Journal of Approximation Theory,", "year": 2005 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Dmitry Yarotsky" ], "title": "Universal approximations of invariant maps by neural networks", "venue": "CoRR, abs/1804.10306,", "year": 2018 }, { "authors": [ "Feizi" ], "title": "For the experiment with Erdős–Rényi random graphs, we consider G1 to be a random Erdős–Rényi graph with edge density pe = 0.2 and n = 50 vertices. The graph G2 is a small perturbation of G1 according to the following error model considered", "venue": null, "year": 2016 }, { "authors": [ "Maron" ], "title": "WEISFEILER-LEHMAN AND FOLKLORE WEISFEILER-LEHMAN TESTS OF ORDER k ≥ 2 We now present the folklore version of the Weisfeiler-Lehman test of order k (k-FWL), for k ≥ 2, along with k-WL for clarity", "venue": "For both,", "year": 2021 }, { "authors": [ "Hornik" ], "title": "Cybenko (1989) for precise conditions for MLP to be universal. C.2 AUGMENTED SEPARATING POWER To factor the proof, we introduce another notion of separating power. Definition 7. For n,m ≥ 1 fixed, let X be a set, F be a some space and F be a set of functions from", "venue": null, "year": 1989 }, { "authors": [ "Xu" ], "title": "A, §B)", "venue": null, "year": 2018 }, { "authors": [ "cWL" ], "title": "The invariant case is exactly", "venue": "Maron et al. (2019a,", "year": 2020 }, { "authors": [ "Qi" ], "title": "Our approximation results for GNNs (Theorems Thm. 5 and Thm. 6) are then obtained from these theoretical results applied to tensors and the symmetric group in Section D.9", "venue": null, "year": 2017 }, { "authors": [ "Qi" ], "title": "CI(X,R) which means that PointNet is universal for approximating invariant functions", "venue": null, "year": 2017 }, { "authors": [ "{f(x", "f ∈ F" ], "title": "The proof of this theorem relies on two main ingredients. First, following the elegant idea of Maehara & Hoang (2019), we augment the input space to transform the vector-valued equivariant functions into scalar maps. Second, we apply the fine-grained approximation result Cor", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Neural Networks (GNN) are designed to deal with graph structured data. Since a graph is not changed by permutation of its nodes, GNNs should be either invariant if they return a result that must not depend on the representation of the input (typically when building a graph embedding) or equivariant if the output must be permuted when the input is permuted (typically when building an embedding of the nodes). More fundamentally, incorporating symmetries in machine learning is a fundamental problem as it allows to reduce the number of degree of freedom to be learned.\nDeep learning on graphs. This paper focuses on learning deep representation of graphs with network architectures, namely GNN, designed to be invariant to permutation or equivariant by permutation. From a practical perspective, various message passing GNNs have been proposed, see Dwivedi et al. (2020) for a recent survey and benchmarking on learning tasks. In this paper, we study 3 architectures: Message passing GNN (MGNN) which is probably the most popular architecture used in practice, order-k Linear GNN (k-LGNN) proposed in Maron et al. (2018) and order-k Folklore GNN (kFGNN) first introduced by Maron et al. (2019a). MGNN layers are local thus highly parallelizable on GPUs which make them scalable for large sparse graphs. k-LGNN and k-FGNN are dealing with representations of graphs as tensors of order k which make them of little practical use for k ≥ 3. In order to compare these architectures, the separating power of these networks has been compared to a hierarchy of graph invariants developed for the graph isomorphism problem. Namely, for k ≥ 2, k-WL(G) are invariants based on the Weisfeiler-Lehman tests (described in Section 4.1). For each k ≥ 2, (k + 1)-WL has strictly more separating power than k-WL (in the sense that there is a pair of non-isomorphic graphs distinguishable by (k + 1)-WL and not by k-WL). GIN (which are invariant MGNN) introduced in Xu et al. (2018) are shown to be as powerful as 2-WL. In Maron et al. (2019a), Geerts (2020b) and Geerts (2020a), k-LGNN are shown to be as powerful as k-WL and 2-FGNN is shown to be as powerful as 3-WL. In this paper, we extend this last result about k-FGNN to general values of k. So in term of separating power, when restricted to tensors of order k, k-FGNN is the\nmost powerful architecture among the ones considered in this work. This means that for a given pair of graphs G and G′, if (k + 1)-WL(G) 6= (k + 1)-WL(G′), then there exists a k-FGNN, say GNNG,G′ such that GNNG,G′(G) 6= GNNG,G′(G′). Approximation results for GNNs. Results on the separating power of GNNs only deal with pairwise comparison of graphs: we need a priori a different GNN for each pair of graphs in order to distinguish them. Such results are of little help in a practical learning scenario. Our main contribution in this paper overcomes this issue and we show that a single GNN can give a meaningful representation for all graphs. More precisely, we characterize the set of functions that can be approximated by MGNNs, k-LGNNs and k-FGNNs respectively. Standard Stone-Weierstrass theorem shows that if an algebra A of real continuous functions separates points, then A is dense in the set of continuous function on a compact set. Here we extend such a theorem to general functions with symmetries and apply it to invariant and equivaraint functions to get our main result for GNNs. 
As a consequence, we show that k-FGNNs have the best approximation power among architectures dealing with tensors of order k.\nUniversality results for GNNs. Universal approximation theorems (similar to Cybenko (1989) for multi-layers perceptron) have been proved for linear GNNs in Maron et al. (2019b); Keriven & Peyré (2019); Chen et al. (2019). They show that some classes of GNNs can approximate any function defined on graphs. To be able to approximate any invariant function, they require the use of very complex networks, namely k-LGNN where k tends to infinity with n the number of nodes. Since we prove that any invariant function less powerful than (k + 1)-WL can be approximated by a k-FGNN, letting k tends to infinity directly implies universality. Universality results for k-FGNN is another contribution of our work.\nEquivariant GNNs. Our second set of results extends previous analysis from invariant functions to equivariant functions. There are much less results about equivariant GNNs: Keriven & Peyré (2019) proves the universality of linear equivariant GNNs, and Maehara & Hoang (2019) shows the universality of a new class of networks they introduced. Here, we consider a natural equivariant extension of k-WL and prove that equivariant (k + 1)-LGNNs and k-FGNN can approximate any equivariant function less powerful than this equivariant (k + 1)-WL for k ≥ 1. At this stage, we should note that all universality results for GNNs by Maron et al. (2019b); Keriven & Peyré (2019); Chen et al. (2019) are easily recovered from our main results. Also our analysis is valid for graphs of varying sizes.\nEmpirical results for the Quadratic Assigment Problem (QAP). To validate our theoretical contributions, we empirically show that 2-FGNN outperforms classical MGNN. Indeed, Maron et al. (2019a) already demonstrate state of the art results for the invariant version of 2-FGNNs (for graph classification or graph regression). Here we consider the graph alignment problem and show that the equivariant 2-FGNN is able to learn a node embedding which beats by a large margin other algorithms (based on spectral method, SDP or GNNs).\nOutline and contribution. After reviewing more previous works and notations in the next section, we define the various classes of GNNs studied in this paper in Section 3 : message passing GNN, linear GNN and folklore GNN. Section 4 contains our main theoretical results for GNNs. First in Section 4.2 we describe the separating power of each GNN architecture with respect to the WeisfeilerLehman test. In Section 4.3, we give approximation guarantees for MGNNs, LGNNs and FGNNs at fixed order of tensor. They cover both the invariant and equivariant cases and are our main theoretical contributions. For these, we develop in Section D a fine-grained Stone-Weierstrass approximation theorem for vector-valued functions with symmetries. Our theorem handles both invariant and equivariant cases and is inspired by recent works in approximation theory. In Section 6, we illustrate our theoretical results on a practical application: the graph alignment problem, a well-known NP-hard problem. We highlight a previously overlooked implementation question: the handling of batches of graphs of varying sizes. A PyTorch implementation of the code necessary to reproduce the results is available at https://github.com/mlelarge/graph_neural_net" }, { "heading": "2 RELATED WORK", "text": "The pioneering works that applied neural networks to graphs are Gori et al. (2005) and Scarselli et al. 
(2009) that learn node representation with recurrent neural networks. More recent message passing architectures make use of non-linear functions of the adjacency matrix (Kipf & Welling, 2016),\nfor example polynomials (Defferrard et al., 2016). For regular-grid graphs, they match classical convolutional networks which by design can only approximate translation-invariant functions and hence have limited expressive power. In this paper, we focus instead on more expressive architectures.\nFollowing the recent surge in interest in graph neural networks, some works have tried to extend the pioneering work of Cybenko (1989); Hornik et al. (1989) for various GNN architectures. Among the first ones is Scarselli et al. (2009), which studied invariant message-passing GNNs. They showed that such networks can approximate, in a weak sense, all functions whose discriminatory power is weaker than 1-WL. Yarotsky (2018) described universal architectures which are invariant or equivariant to some group action. These models rely on polynomial intermediate layers of arbitrary degrees, which would be prohibitive in practice. Maron et al. (2019b) leveraged classical results about the polynomials invariant to a group action to show that k-LGNN are universal as k tends to infinity with the number of nodes. Keriven & Peyré (2019) derived a similar result, in the more complicated equivariant case by introducing a new Stone-Weierstrass theorem. Similarly to Maron et al. (2019b), they require the order of tensors to go to infinity. Another route towards universality is the one of Chen et al. (2019). In the invariant setting, they show for a class of GNN that universality is equivalent to being able to discriminate between (non-isomorphic) graphs. However, the only way to achieve such discriminatory power is to use tensors of arbitrary high order, see also Ravanbakhsh (2020). Our work encompass and precise these results using high-order tensors as it yields approximation guarantees even at fixed order of tensor.\nCPNGNN in Sato et al. (2019) and DimeNet in Klicpera et al. (2020) are message passing GNN incorporating more information than those studied here. Partial results about their separating power follows from Garg et al. (2020) which provides impossibility results to decide graph properties including girth, circumference, diameter, radius, conjoint cycle, total number of cycles, and k-cliques. Chen et al. (2020) studies the ability of GNNs to count graph substructures. Though our theorems are much more general, note that their results are improved by the present work. Note also, that if the nodes are given distinct features, MGNNs become much more expressive Loukas (2019) but looses their invariant or equivariant properties. Averaging i.e. relational pooling (RP) has been proposed to recover these properties Murphy et al. (2019a). However, the ideal RP, leading to a universal approximation, cannot be used for large graphs due to its complexity of O(|V |!). Regarding the other classes of RPGNN i.e. the k-ary pooling (Murphy et al., 2019b), we will show how our general theorems in the invariant case can be applied to characterize their approximation power (see Section 5).\nNote that for neural networks on sets, the situation is a bit simpler. Efficient architectures such as DeepSets (Zaheer et al., 2017) or PointNet (Qi et al., 2017) have been shown to be invariant universal. Similar results exist in the equivariant case (Segol & Lipman, 2020; Maron et al., 2020), whose proofs rely on polynomial arguments. 
Though this is not our main motivation, our approximation theorems could also be applied in this context see Sections D.3 and D.4." }, { "heading": "2.1 NOTATIONS: GRAPHS AS TENSORS", "text": "We denote by F,F0,F1/2,F1, . . . arbitrary finite-dimensional spaces of the form Rp (for various values of p) typically representing the space of features. Product of vectors in Rp always refer to component-wise product. There are two ways to see graphs with features. First, graphs can be seen as tensors of order k: G ∈ Fnk . The classical representation of a graph by its (weighted) adjacency matrix for k = 2 is a tensor of order 2 in Rn2 . This case allows for features on edges by replacing Rn2 with Fn2 where F is some Rp. Second, graphs can also be represented by their discrete structure with an additional feature vector. More exactly, denote by Gn the set of discrete graphs G = (V,E) with n nodes V = [n] and edges E ⊆ V 2 (with no weights on edges). Such a G ∈ Gn with a vector h0 ∈ Fn represents a graphs with features on the vertices." }, { "heading": "2.2 DEFINITIONS: INVARIANT AND EQUIVARIANT OPERATORS", "text": "Let [n] = {1, . . . , n}. The set of permutations on [n] is denoted by Sn. For G ∈ Fn k\nand σ ∈ Sn, we define: (σ ? G)σ(i1),...,σ(ik) = Gi1,...,ik . Note that the ? operation is valid between a permutation in Sn and a graph G as soon as the number of nodes of G is n, i.e. it is valid for any order k tensor\nrepresentation of the graph. Two graphs G1, G2 are said isomorphic if they have the same number of nodes and there exists a permutation σ such that G1 = σ ? G2.\nDefinition 1. A function f : Fnk0 → F1 is said to be invariant if f(σ ? G) = f(G) for every permutation σ ∈ Sn and every G ∈ Fn k 0 . A function f : Fn k 0 → Fn `\n1 is said to be equivariant if f(σ ? G) = σ ? f(G) for every permutation σ ∈ Sn and every G ∈ Fn k 0 .\nNote that composing an equivariant function with an invariant function gives an invariant function. For k ≥ 1, we define the invariant summation layer Sk : Fnk → F by Sk(G) = ∑ i∈[n]k Gi for G ∈ Fnk . We also define the equivariant reduction layer Sk1 : Fn k → Fn as follows: Sk1 (G)i =∑\n1≤i2...ik≤nGi,i2,...ik . For message passing GNN, we will use the equivariant layer Id +λS 1 : Fn → Fn defined by, (Id +λS1)(G)i = Gi + λS1(G), where λ ∈ R is a learnable parameter. In the sequel, we will need a mapping Ik lifting the input graph to a higher order tensor. We denote by Ik : Fn20 → Fn k\n1 the initialization function mapping for a given graph each k-tuple to its isomorphism type. We refer to the appendix Section C.3 for a precise description of this linear equivariant function. Note at this stage that I2 is given by, for G ∈ Fn2 , I(G)i,j = (Gi,j , δi,j) where δi,j is 0 if i 6= j and 1 otherwise. Indeed for a pair of nodes i, j in a graph (without features), there are only three isomorphism types: i = j; i 6= j and (i, j) is an edge; i 6= j but (i, j) is not an edge." }, { "heading": "3 GNN DEFINITIONS", "text": "In this section, we define the various GNN architectures studied in this paper. In all architectures, there is a main building block or layer mapping Fnkt to Fn k t+1 where Fn k\nt can be seen as the space for the representation of the graph at layer t. 
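Before turning to the concrete layers, here is a small numerical illustration (a sketch of ours, not code from the paper's repository) of the operators of Section 2.2: the summation layer $S^2$ is invariant while the reduction layer $S^2_1$ is equivariant.

import torch

n = 5
G = torch.randn(n, n)              # an order-2 tensor with a single feature channel
sigma = torch.randperm(n)          # a random permutation of [n]
P = torch.eye(n)[sigma]            # permutation matrix: P[k, i] = 1 iff i = sigma[k]

G_perm = P.T @ G @ P               # (sigma ⋆ G)_{i,j} = G_{sigma^{-1}(i), sigma^{-1}(j)}
S2 = lambda T: T.sum()             # invariant summation layer S^2
S21 = lambda T: T.sum(dim=1)       # equivariant reduction layer S^2_1

assert torch.isclose(S2(G_perm), S2(G))           # S^2(sigma ⋆ G) = S^2(G)
assert torch.allclose(S21(G_perm), P.T @ S21(G))  # S^2_1(sigma ⋆ G) = sigma · S^2_1(G)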
We will define three different types of layers, for message passing GNNs, linear GNNs and folklore GNNs. The case k = 2 is probably the most interesting one from a practical point of view and corresponds to the case where a layer takes as input a graph (with features on nodes and edges) and produces as output a graph (with new features on nodes and edges). For each type of GNN, there will be an invariant and an equivariant version. All architectures share the last function: $m_I : F_{T+1} \to F$ in the invariant case and $m_E : F^n_{T+1} \to F^n$ in the equivariant case, both continuous functions, typically modeled by a Multi Layer Perceptron (applied on each component in the equivariant case). In words, each network takes as input a graph $G \in F_0^{n^2}$, produces a graph embedding in $F_{T+1}$ in the invariant case and a node embedding in $F^n_{T+1}$ in the equivariant case, and these embeddings are then passed through the function $m_I$ or $m_E$ respectively to get a feature in $F$ or $F^n$ for the learning task." }, { "heading": "3.1 MESSAGE PASSING GNN", "text": "Message passing GNNs (MGNN) are defined for classical graphs G with features on the nodes. More precisely, they take as input a discrete graph $G = (V,E) \in \mathcal{G}_n$ and features on the nodes $h^0 \in F^n$. MGNNs are then defined inductively as follows: let $h^\ell_i \in F_\ell$ denote the feature at layer $\ell$ associated with node $i$; the updated features $h^{\ell+1}_i$ are obtained as $h^{\ell+1}_i = f\big(h^\ell_i, \{\{ h^\ell_j \}\}_{j \sim i}\big)$, where $j \sim i$ means that nodes $j$ and $i$ are neighbors in the graph $G$, i.e. $(i,j) \in E$, and $f$ is a learnable function taking as input the feature vector of the center vertex $h^\ell_i$ and the multiset of features of the neighboring vertices $\{\{ h^\ell_j \}\}_{j \sim i}$. Indeed, it follows from Lem. 33 in the Appendix that any such function $f$ can be approximated by a layer of the form
$$h^{\ell+1}_i = f_0\Big( h^\ell_i, \sum_{j \sim i} f_1\big(h^\ell_i, h^\ell_j\big) \Big), \qquad (1)$$
where $f_0 : F_\ell \times F_{\ell+1/2} \to F_{\ell+1}$ and $f_1 : F_\ell \times F_\ell \to F_{\ell+1/2}$, so that $F_\ell$ is the feature space at the $\ell$-th layer. We call such a function a message passing layer and denote it by $F : F^n_\ell \to F^n_{\ell+1}$ (note that $F$ depends implicitly on the graph). An equivariant message passing GNN is then simply obtained by composing message passing layers: $F_T \circ \dots \circ F_2 \circ F_1$, where each $F_t$ is a message passing layer. Since each $F_t$ is equivariant, this message passing GNN is also equivariant and produces features on each node in the space $F_T$. In order to obtain an invariant GNN, we apply an invariant function from $F^n_T$ to $F_{T+1}$ on the output of an equivariant message passing GNN. In practice, a symmetric function is applied to the vectors of features indexed by the nodes; typically, the sum of the features $\sum_i (F_T \circ \dots \circ F_2 \circ F_1(G))_i$ is taken as an invariant feature of the graph $G$. With our notation, $S^1 \circ F_T \circ \dots \circ F_2 \circ F_1$ (where $S^1$ was defined in Section 2.2) defines an invariant message passing GNN.
Hence, we define the sets of message passing GNNs as follows:
$$\mathrm{MGNN}_I = \{ m_I \circ S^1 \circ F_T \circ \dots \circ F_2 \circ F_1, \ \forall T \}, \qquad \mathrm{MGNN}_E = \{ m_E \circ (\mathrm{Id} + \lambda S^1) \circ F_T \circ \dots \circ F_2 \circ F_1, \ \forall T \},$$
where $F_t : F^n_t \to F^n_{t+1}$ are message passing layers.
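As an illustration, a dense PyTorch sketch of the message passing layer (1) could look as follows. This is a minimal version of ours with names of our choosing; it is not the code released with the paper.

import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    # One layer of Eq. (1): h_i <- f0(h_i, sum_{j ~ i} f1(h_i, h_j)).
    def __init__(self, d_in, d_mid, d_out):
        super().__init__()
        self.f1 = nn.Sequential(nn.Linear(2 * d_in, d_mid), nn.ReLU())
        self.f0 = nn.Sequential(nn.Linear(d_in + d_mid, d_out), nn.ReLU())

    def forward(self, h, adj):
        # h: (n, d_in) node features; adj: (n, n) 0/1 adjacency matrix.
        n = h.shape[0]
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),   # h_i at position [i, j]
                           h.unsqueeze(0).expand(n, n, -1)],  # h_j at position [i, j]
                          dim=-1)
        msgs = self.f1(pairs)                        # f1(h_i, h_j), shape (n, n, d_mid)
        agg = (adj.unsqueeze(-1) * msgs).sum(dim=1)  # sum over neighbors j ~ i
        return self.f0(torch.cat([h, agg], dim=-1))

Note that this dense version materializes all $n^2$ pairs; practical message passing implementations exploit the sparsity of the graph, which is what makes them scalable.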
" }, { "heading": "3.2 LINEAR GNN", "text": "We define the linear graph layer of order k as $F : F^{n^k}_\ell \to F^{n^k}_{\ell+1}$, where for all $G \in F^{n^k}_\ell$, $F(G) = f(L[G])$, with $L : F^{n^k}_\ell \to F^{n^k}_\ell$ a linear equivariant function and $f : F_\ell \to F_{\ell+1}$ a learnable function applied to each of the $n^k$ features, $F_\ell$ being the feature space at the $\ell$-th layer.
We then define the sets of linear GNNs as follows:
$$k\text{-LGNN}_I = \{ m_I \circ S^k \circ F_T \circ \dots \circ F_2 \circ F_1 \circ I^k, \ \forall T \}, \qquad k\text{-LGNN}_E = \{ m_E \circ S^k_1 \circ F_T \circ \dots \circ F_2 \circ F_1 \circ I^k, \ \forall T \},$$
where $I^k : F^{n^2}_0 \to F^{n^k}_1$ is defined in §2.2 and, for $t \ge 1$, $F_t : F^{n^k}_t \to F^{n^k}_{t+1}$ are linear equivariant layers." }, { "heading": "3.3 FOLKLORE GNN", "text": "The main building block of Folklore GNNs (FGNN) is what we call the folklore graph layer (FGL) of order k, defined as follows: for $k \ge 1$, $F : F^{n^k}_\ell \to F^{n^k}_{\ell+1}$, where for all $G \in F^{n^k}_\ell$ and all $i \in [n]^k$,
$$F(G)_i = f_0\Big( G_i, \sum_{j=1}^n \prod_{w=1}^k f_w\big(G_{i_1,\dots,i_{w-1},j,i_{w+1},\dots,i_k}\big) \Big), \qquad (2)$$
where $f_0 : F_\ell \times F_{\ell+1/2} \to F_{\ell+1}$ and $f_w : F_\ell \to F_{\ell+1/2}$ are learnable functions. As shown in Lem. 33 in the Appendix, FGL is an equivariant function which is indeed very expressive.
For classical graphs $G \in F^{n^2}_0$, we can now define 2-FGNNs by composing folklore graph layers $F_t : F^{n^2}_t \to F^{n^2}_{t+1}$, so that $F_T \circ \dots \circ F_1 \circ F_0$ is an equivariant GNN producing a graph in $F^{n^2}_{T+1}$. To obtain an invariant feature of the graph, we use the summation layer $S^2$ defined in Section 2.2, so that $S^2 \circ F_T \circ \dots \circ F_1 \circ F_0$ is now an invariant 2-FGNN. In order to define general k-FGNNs, we first need to lift the classical graph to a tensor in $F^{n^k}$, then we apply folklore graph layers of order k, and finally we project the tensor in $F^{n^k}$ to a tensor in $F^n$ for the equivariant version and to a tensor in $F$ for the invariant version. The first step is done with the linear equivariant function $I^k : F^{n^2}_0 \to F^{n^k}_1$ defined in Section 2.2. The last step is done with the reduction layer $S^k_1$ in the equivariant case and the summation layer $S^k$ in the invariant case, both defined in Section 2.2.
We define the sets of folklore GNNs as follows:
$$k\text{-FGNN}_I = \{ m_I \circ S^k \circ F_T \circ \dots \circ F_2 \circ F_1 \circ I^k, \ \forall T \}, \qquad k\text{-FGNN}_E = \{ m_E \circ S^k_1 \circ F_T \circ \dots \circ F_2 \circ F_1 \circ I^k, \ \forall T \},$$
where $F_t : F^{n^k}_t \to F^{n^k}_{t+1}$ are FGLs." }, { "heading": "4 THEORETICAL RESULTS FOR GNNS", "text": "" }, { "heading": "4.1 WEISFEILER-LEHMAN INVARIANT AND EQUIVARIANT VERSIONS", "text": "We introduce a family of functions on graphs, parametrized by integers $k \ge 2$, developed for the graph isomorphism problem and working with tuples of k vertices. Each k-tuple $i \in V^k = [n]^k$ is given a color $c^0(i)$ corresponding to its isomorphism type (see Section B.2). The k-WL test relies on the following notion of neighborhood, defined by, for any $w \in [k]$ and $i = (i_1, \dots, i_k) \in V^k$, $N_w(i) = \{(i_1, \dots, i_{w-1}, j, i_{w+1}, \dots, i_k) : j \in V\}$. Then, the colors of the k-tuples are refined as follows: $c^{t+1}(i) = \mathrm{Lex}\big(c^t(i), (C^t_1(i), \dots, C^t_k(i))\big)$, where, for $w \in [k]$, $C^t_w(i) = \{\{ c^t(\tilde i) : \tilde i \in N_w(i) \}\}$, and the function Lex means that all occurring colors are lexicographically ordered and replaced by an initial segment of the natural numbers.
For a graph G, let $k\text{-WL}^T_I(G)$ denote the multiset of colors of the k-WL algorithm at the T-th iteration. After a finite number of steps (which depends on the number of vertices in the graph), the algorithm stops because a stable coloring is reached (no color class of k-tuples is further divided). We denote by $k\text{-WL}_I(G)$ the multiset of colors in the stable coloring. This is a graph invariant that is usually used to test whether graphs are isomorphic. The power of this invariant increases with k (Cai et al., 1989).
We now define an equivariant version of the k-WL test to express the discriminatory power of equivariant architectures. For this, we construct a coloring of the vertices from the coloring of the k-tuples given by the standard k-WL algorithm. Formally, define $k\text{-WL}^T_E : F^{n^2}_0 \to F^n$ by, for $i \in V$: $k\text{-WL}^T_E(G)_i = \{\{ c^T(\mathbf{i}) : \mathbf{i} \in V^k, i_1 = i \}\}$.
Similarly, define k-WLE(G) = {{ c(i) : i ∈ V k, i1 = i }} where c(i) is the stable coloring obtained by the algorithm." }, { "heading": "4.2 SEPARATING POWER OF GNNS", "text": "We formulate our results using the equivalence relation introduced by Timofte (2005), which characterizes the separating power of a set of functions.\nDefinition 2. Let F be a set of functions f defined on a set X , where each f takes its values in some Yf . The equivalence relation ρ (F) defined by F on X is: for any x, x′ ∈ X ,\n(x, x′) ∈ ρ (F) ⇐⇒ ∀f ∈ F , f(x) = f(x′) .\nGiven two sets of functions F and E , we say that F is more separating (resp. strictly more separating) than E if ρ (F) ⊆ ρ (E) (resp. ρ (F) ( ρ (E)). Note that all the functions in F and E need to be defined on the same set but can take values in different sets. For example, we can easily see that for the k-WL algorithm defined above, the equivariant version is more separating than the invariant one.\nSome properties of the WL hierarchy of tests can be rephrased with the notion of separating power. In particular, Cai et al. (1989) showed that (k + 1)-WLI distinguishes strictly more than k-WLI , which can be rewritten simply as (for a function f , we write ρ (f) for ρ ({f}))\nρ ((k + 1)-WLI) ( ρ (k-WLI) . (3)\nThis notion of separating power enables us to concisely summarize the current knowledge about the discriminatory power of classes of GNN.\nProposition 3. We have, for k ≥ 2,\nρ (MGNNI) = ρ (2-WLI) ρ (MGNNE) = ρ (2-WLE) (4) ρ (k-LGNNI) = ρ (k-WLI) ρ (k-LGNNE) ⊆ ρ (k-WLE) (5) ρ (k-FGNNI) = ρ ((k + 1)-WLI) ρ (k-FGNNE) = ρ ((k + 1)-WLE) (6)\nOnly results about the invariant cases were previously known: (4) comes from Xu et al. (2018), (5) from Maron et al. (2018) Geerts (2020a) and one inclusion of (6) comes from Maron et al. (2019a). The equality in (6) for general k ≥ 2 is proved in Section C. Note that for k = 2, all GNNs are dealing with tensors of order 2 i.e. with the adjacency matrix of the graph. However, the complexities of the various layers are quite different: for the message passing GNN, all computations are local (scaling with the maximum degree in the graph) and can be done in parallel; for the linear layer, there are only 15 linear functions from Rn2 → Rn2 for all values of n (Maron et al., 2018); the folklore layer involves a (dense) matrix multiplication of shape n× n. If 2-FGNN is the most complex architecture, we see that it has the best separating power among all architectures proposed so far dealing with tensors of order 2." }, { "heading": "4.3 APPROXIMATION RESULTS FOR GNNS", "text": "For X,Y finite-dimensional spaces, let us denote by CI(X,Y ), CE(X,Y ), , the set of invariant, respectively equivariant, continuous functions from X to Y . The closure of a class of function F for the uniform norm is denoted by F . Our result extend easily to graphs of varying sizes but this is deferred to Section F.2 for clarity.\nThe theorem below states in particular that the class k-FGNN can approximate any continuous function that is less separating than (k + 1)-WL in the invariant and in the equivariant cases.\nTheorem 4. Let Kdiscr ⊆ Gn × Fn0 , K ⊆ Fn 2 0 be compact sets. 
For the invariant case, we have:
$$\overline{\mathrm{MGNN}_I} = \{ f \in C_I(K_{\mathrm{discr}}, F) : \rho(2\text{-WL}_I) \subseteq \rho(f) \}$$
$$\overline{k\text{-LGNN}_I} = \{ f \in C_I(K, F) : \rho(k\text{-WL}_I) \subseteq \rho(f) \}$$
$$\overline{k\text{-FGNN}_I} = \{ f \in C_I(K, F) : \rho((k+1)\text{-WL}_I) \subseteq \rho(f) \}$$
For the equivariant case, we have:
$$\overline{\mathrm{MGNN}_E} = \{ f \in C_E(K_{\mathrm{discr}}, F^n) : \rho(2\text{-WL}_E) \subseteq \rho(f) \}$$
$$\overline{k\text{-LGNN}_E} = \{ f \in C_E(K, F^n) : \rho(k\text{-LGNN}_E) \subseteq \rho(f) \} \supset \{ f \in C_E(K, F^n) : \rho(k\text{-WL}_E) \subseteq \rho(f) \}$$
$$\overline{k\text{-FGNN}_E} = \{ f \in C_E(K, F^n) : \rho((k+1)\text{-WL}_E) \subseteq \rho(f) \}$$
In the invariant case for k = 2, we have $\overline{\mathrm{MGNN}_I} = \overline{2\text{-LGNN}_I} \subsetneq \overline{2\text{-FGNN}_I}$, where the strictness of the last inclusion comes from (3). In other words, 2-FGNN$_I$ has a better power of approximation than the other architectures working with tensors of order 2. We already knew by Proposition 3 that 2-FGNN$_I$ is the best separating architecture among those studied in this paper dealing with tensors of order 2, and our theorem implies that this is also the case for the approximation power.
To clarify the meaning of these statements, we explain why the inclusions "⊆" are actually straightforward. For concreteness, we focus on $\overline{k\text{-FGNN}_I} \subseteq \{ f \in C_I(K,F) : \rho((k+1)\text{-WL}_I) \subseteq \rho(f) \}$. Take $h \in \overline{k\text{-FGNN}_I}$; this means that there is a sequence $\mathrm{GNN}_j \in k\text{-FGNN}_I$ such that $\sup_{G \in K} \|h(G) - \mathrm{GNN}_j(G)\|$ goes to zero when $j$ goes to infinity. Therefore, $h$ is continuous and constant on each $\rho(k\text{-FGNN}_I)$-class. Indeed, for any $(G,G') \in \rho(k\text{-FGNN}_I)$, $\mathrm{GNN}_j(G) = \mathrm{GNN}_j(G')$, so that $h(G) = \lim_j \mathrm{GNN}_j(G) = \lim_j \mathrm{GNN}_j(G') = h(G')$. Hence we have $\rho(k\text{-FGNN}_I) \subseteq \rho(h)$ and, by Prop. 3, $\rho(k\text{-FGNN}_I) = \rho((k+1)\text{-WL}_I)$, allowing us to get the inclusion above.
On the contrary, the reverse inclusions "⊃" are much more intricate, but they are also the most valuable. For instance, consider the inclusion $\overline{k\text{-FGNN}_I} \supset \{ f \in C_I(K,F) : \rho((k+1)\text{-WL}_I) \subseteq \rho(f) \}$. If one wishes to learn a function $h \in C_I(K,F)$ with $k\text{-FGNN}_I$, this function must at least be approximable by the class $k\text{-FGNN}_I$. Our theorem precisely guarantees that if $h$ is less separating than $(k+1)\text{-WL}_I$, it can be approximated by $k\text{-FGNN}_I$:
$$\forall \varepsilon > 0, \ \exists \mathrm{GNN} \in k\text{-FGNN}_I, \quad \sup_{G \in K} \|h(G) - \mathrm{GNN}(G)\| \le \varepsilon .$$
For this, we show a much more general version of the famous Stone-Weierstrass theorem (see Section D), which relates the separating power to the approximation power. Following the elegant idea of Maehara & Hoang (2019), we augment the input space to transform vector-valued equivariant functions into scalar invariant maps. Then, we apply a fine-grained approximation theorem from Timofte (2005). We also provide specialized versions of our abstract theorem in Section 5, which can be easily used to determine the approximation capabilities of any deep learning architecture.
Our theorem also has implications for universality results like Maron et al. (2019b); Keriven & Peyré (2019). A class of GNN is said to be universal if its closure on a compact set K is the whole $C_I(K,F)$ (or $C_E(K,F^n)$). In particular, Thm. 4 implies that n-LGNN and n-FGNN are universal, as n-WL distinguishes non-isomorphic graphs of size n. This recovers a result of Ravanbakhsh (2020) for LGNN. Moreover, we can leverage the extensive literature on the WL tests to give more subtle results. For instance, Cai et al. (1989, §8.2) show that, for planar graphs, O(√n)-WL can distinguish non-isomorphic instances. Therefore, O(√n)-LGNN or O(√n)-FGNN achieve universality in the particular, yet common, case of planar graphs. On a more practical side, Fürer (2010, Thm. 4.5) shows that the spectrum of a graph is less separating than 3-WL, so that functions of the spectrum can actually be well approximated by 2-FGNN."
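To make the layer behind these guarantees concrete: for k = 2, the folklore graph layer (2) amounts to a channel-wise matrix product, which could be sketched in PyTorch as follows (a minimal illustration of ours, not the implementation released with the paper).

import torch
import torch.nn as nn

class FGL2(nn.Module):
    # Order-2 folklore graph layer, Eq. (2) with k = 2:
    # F(G)_{i,j} = f0(G_{i,j}, sum_l f2(G)_{i,l} * f1(G)_{l,j}),
    # i.e. a matrix product f2(G) @ f1(G) performed per feature channel.
    def __init__(self, d_in, d_mid, d_out):
        super().__init__()
        self.f1 = nn.Linear(d_in, d_mid)
        self.f2 = nn.Linear(d_in, d_mid)
        self.f0 = nn.Linear(d_in + d_mid, d_out)

    def forward(self, G):
        # G: (n, n, d_in) tensor representation of the graph.
        prod = torch.einsum('ild,ljd->ijd', self.f2(G), self.f1(G))
        return self.f0(torch.cat([G, prod], dim=-1))

The einsum is the matrix multiplication that, by Proposition 3, gives 2-FGNNs their 3-WL separating power.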
}, { "heading": "5 EXPRESSIVENESS OF GNNS", "text": "We now state the general theorems which are our main tools in proving our approximation guarantees for GNNs. Theirs proofs are deferred to Section D.9 which contains our generalization of the StoneWeierstrass theorem with symmetries. We need to first introduce more general definitions: If G is a finite group acting on some topological space X , we say that G acts continuously on X if, for all g ∈ G, x 7→ g · x is continuous. If G is a finite group acting on some compact set X and some topological space Y , we define the sets of equivariant and invariant continuous functions by,\nCE(X,Y ) = {f ∈ C(X,Y ) : ∀x ∈ X, ∀g ∈ G, f(g · x) = g · f(x)} CI(X,Y ) = {f ∈ C(X,Y ) : ∀x ∈ X, ∀g ∈ G, f(g · x) = f(x)}\nNote that these definitions extend Definition 1 to a general group. Theorem 5. Let X be a compact space, F = Rp be some finite-dimensional vector space, G be a finite group acting (continuously) on X .\nLet F0 ⊆ ⋃∞ h=1 CI(X,Rh) be a non-empty set of invariant functions, stable by concatenation, and consider,\nF = {m ◦ f : f ∈ F0 ∩ C(X,Rh), m : Rh → F MLP, h ≥ 1} ⊆ C(X,F) ." }, { "heading": "Then the closure of F is,", "text": "F = {f ∈ CI(X,F) : ρ (F0) ⊆ ρ (f)} .\nWe can apply Theorem 5 to the class of k-ary relational pooling GNN introduced in Murphy et al. (2019a). As a result, we get that this class of invariant k-RP GNN can approximate any continuous function f with ρ(k − RPGNN) ⊆ ρ(f) but to the best of our knowledge, ρ(k − RPGNN) is not known and only ρ(k − RPGNN) ⊂ ρ(2−WLI) is proved in Murphy et al. (2019a). We now state our general theorem for the equivariant case: Theorem 6. Let X be a compact space, F = Rp and G = Sn the permutation group, acting (continuously) on X and acting on Fn by, for σ ∈ Sn, x ∈ Fn,\n∀i ∈ {1, . . . , p}, (σ · x)i = xσ−1(i) ,\nLet F0 ⊆ ⋃∞ h=1 CE ( X, (Rh)n ) be a non-empty set of equivariant functions, stable by concatenation, and consider,\nF = {x 7→ (m(f(x)1), . . . ,m(f(x)n)) : f ∈ F0 ∩ C ( X, (Rh)n ) , m : Rh → F MLP, h ≥ 1}\nAssume, that, if f ∈ F0, then,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i, . . . , n∑ i=1 f(x)i ) ∈ F0 ." }, { "heading": "Then the closure of F is,", "text": "F = {f ∈ CE(X,Fn) : ρ (F0) ⊆ ρ (f)} .\nApplications of these theorems fo the case of Pointnet Qi et al. (2017) are provided in Section D.9" }, { "heading": "6 QUADRATIC ASSIGNMENT PROBLEM", "text": "To empirically evaluate our results, we study the Quadratic Assignment Problem (QAP), a classical problem in combinatorial optimization. For A,B n× n symmetric matrices, it consists in solving\nmaximize trace(AXBX>), subject to X ∈ Π,\nwhere Π is the set of n× n permutation matrices. Many optimization problems can be formulated as QAP. An example is the network alignment problem, which consists in finding the best matching\nbetween two graphs, represented by their adjacency matrices A and B. Though QAP is known to be NP-hard, recent works such as Nowak et al. (2018) have investigated whether it can be solved efficiently w.r.t. a fixed input distribution. More precisely, Nowak et al. (2018) studied whether one can learn to solve this problem using a MGNN trained on a dataset of already solved instances. However, as shown below, both the baselines and their approach fail on regular graphs, a class of graph considered as particularly hard for isomorphism testing.\nTo remedy this weakness, we consider 2-FGNNE . 
We then follow the siamese method of Nowak et al. (2018): given two graphs, our system produces an embedding in $F^n$ for each graph, where n is the number of nodes; the two embeddings are then multiplied together to obtain an n × n similarity matrix on nodes. A permutation is finally computed by solving a Linear Assignment Problem (LAP) with this n × n similarity matrix as the cost matrix. We tested our architecture on two distributions: the Erdős–Rényi model and random regular graphs. The accuracy in matching the graphs is much improved compared to previous works. The experimental setup is described more precisely in Section A.1.
[Figure 1: Erdős–Rényi graph model; Regular graph model.]" }, { "heading": "7 CONCLUSION", "text": "We derived the expressive power of various practical GNN architectures: message passing GNNs, linear GNNs and folklore GNNs, both in their invariant and equivariant versions. Our results unify and extend the recent works in this direction. In particular, we are able to recover all the universality results proved for GNNs so far. Similarly to existing results in the literature, we do not deal here with the sizes of the embeddings constructed at different layers, i.e. the sizes of the spaces $F_\ell$; these sizes are supposed to grow to infinity with the number of nodes n in the graph. Obtaining bounds on the scaling of the sizes of the features, ensuring that the results presented here remain valid, is an interesting open question. We show that folklore GNNs have the best power of approximation among all GNNs studied here dealing with tensors of order 2. From a practical perspective, we demonstrate their improved performance on the QAP, with a significant gap in performance compared to other approaches." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). M.L. thanks Google for Google Cloud Platform research credits and NVIDIA for a NVIDIA GPU Grant." }, { "heading": "A Experimental results", "text": "A.1 Details on the experimental setup\nA.2 Experimental results on graphs of varying size\nA.3 Generalization for regular graphs" }, { "heading": "B Weisfeiler-Lehman tests", "text": "B.1 Weisfeiler-Lehman test on vertices\nB.2 Isomorphism type\nB.3 Weisfeiler-Lehman and Folklore Weisfeiler-Lehman tests of order k ≥ 2" }, { "heading": "C Separating power of GNN", "text": "C.1 Multi-linear perceptrons\nC.2 Augmented separating power\nC.3 Initialization layer\nC.4 Known results about the separating power of some GNN classes\nC.5 Bounding the separating power of k-FGNN\nC.6 Conclusion" }, { "heading": "D Stone-Weierstrass theorem with symmetries", "text": "D.1 General notations\nD.2 Separating power\nD.3 Approximation theorems for real-valued functions\nD.4 The equivariant approximation theorem\nD.5 A preliminary version of the equivariant approximation theorem\nD.6 Characterizing the subalgebras of Rp\nD.7 Proof of the main equivariant approximation theorem\nD.8 Practical reductions\nD.9 Reductions for GNNs" }, { "heading": "E Proofs for expressiveness of GNNs", "text": "E.1 Expressivity of GNN layers\nE.2 Approximation theorems for GNNs" }, { "heading": "F Extension to graphs of varying sizes", "text": "F.1 Extension to disconnected input spaces\nF.2 Approximation theorem with varying graph size" }, { "heading": "A EXPERIMENTAL RESULTS", "text": "" }, { "heading": "A.1 DETAILS ON THE EXPERIMENTAL SETUP", "text": "We consider a 2-FGNN$_E$ and train it to solve random planted problem instances of the QAP. Given a pair of graphs $G_1, G_2$ with n nodes each, we consider the siamese 2-FGNN$_E$ encoder producing embeddings $E_1, E_2 \in \mathbb{R}^{n \times k}$. Those embeddings are used to predict a matching as follows: we first compute the outer product $E_1 E_2^T$, then we take a softmax along each row and use the standard cross-entropy loss to predict the corresponding permutation index. We used a 2-FGNN$_E$ with 2 layers, each MLP having depth 3 and hidden states of size 64. We trained for 25 epochs with batches of size 32, a learning rate of 1e-4 and the Adam optimizer. The PyTorch code is available in the supplementary material.
For each experiment, the dataset was made of 20000 graphs for the train set, 1000 for the validation set and 1000 for the test set. For the experiment with Erdős–Rényi random graphs, we consider $G_1$ to be a random Erdős–Rényi graph with edge density $p_e = 0.2$ and $n = 50$ vertices. The graph $G_2$ is a small perturbation of $G_1$ according to the following error model considered in Feizi et al. (2016):
$$G_2 = G_1 \odot (1 - Q) + (1 - G_1) \odot Q', \qquad (7)$$
where $Q$ and $Q'$ are Erdős–Rényi random graphs with edge densities $p_1$ and $p_2 = p_1 p_e / (1 - p_e)$ respectively, so that $G_2$ has the same expected degree as $G_1$. The noise level is the parameter $p_1$. For regular graphs, we followed the same experimental setup, but now $G_1$ is a random regular graph with degree $d = 10$. Regular graphs are an interesting example as they tend to be considered harder to align due to their more symmetric structure." }, { "heading": "A.2 EXPERIMENTAL RESULTS ON GRAPHS OF VARYING SIZE", "text": "[Figure 2: regular graph model.]
We tested our models on datasets of graphs of varying size, as this setting is also encompassed by our theory.
However, contrary to message-passing GNNs, GNNs based on tensors do not work well with batches of graphs of varying size. Previous implementations, such as the one of Maron et al. (2019a), group the graphs in the dataset by size, enabling the GNN to only deal with batches of graphs of the same size.
Instead, we use masking, which is a standard practice in recurrent neural networks.
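The representation is described in detail below; as a toy illustration of the idea (our own sketch, much simpler than the MaskedTensors class discussed next), padding a batch of adjacency matrices together with a boolean mask could be done as follows:

import torch

def pad_and_mask(adjs):
    # Pad a list of (n_i, n_i) adjacency matrices into one (b, n_max, n_max)
    # tensor, together with a boolean mask marking the valid entries.
    n_max = max(a.shape[0] for a in adjs)
    batch = torch.zeros(len(adjs), n_max, n_max)
    mask = torch.zeros(len(adjs), n_max, n_max, dtype=torch.bool)
    for k, a in enumerate(adjs):
        n = a.shape[0]
        batch[k, :n, :n] = a
        mask[k, :n, :n] = True
    return batch, mask

# Example: a sum over entries that ignores the padding.
# batch, mask = pad_and_mask([torch.ones(3, 3), torch.ones(5, 5)])
# sums = (batch * mask).sum(dim=(1, 2))   # tensor([ 9., 25.])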
A batch of b tensors of sizes n1 × n1, n2 × n2, . . . , nb × nb is represented as a tensor b× nmax × nmax where nmax = maxi=1,...,b ni. A mask is created at initialization and is used to ensure that the operations on the full tensor translates to valid operations on each of the individual tensor.\nWe implemented this functionnality as a class MaskedTensors. Thanks to the newest improvements of PyTorch (Paszke et al., 2019), MaskedTensors act as a subclass of fundamental Tensor class. Thus they almost seamlessly integrate into standard PyTorch code. We refer the reader to the code for more details: https://github.com/mlelarge/graph_neural_net\nResults of our architecture and implementation with graphs of varying size are shown below on Figure 2. The only difference with the setting described above is that the number of nodes is now random. The number of vertices of a graph is indeed chosen randomly according to binomial distribution of parameters n = 50 and pn = 0.9." }, { "heading": "A.3 GENERALIZATION FOR REGULAR GRAPHS", "text": "We made the following experiment with the same general setting as in Section A.1 with regular graphs. We trained different models for all noise levels between 0 and 0.22 but in Figure 3, we plot the accuracy of each model across all noise levels. We observe that a majority of the models actually generalize to settings with noise level on which they were not trained. Indeed, the model trained with noise level ≈ 0.1 is performing best among all models across all noise levels!" }, { "heading": "B WEISFEILER-LEHMAN TESTS", "text": "Here we describe more precisely this hierarchy of tests, which will be used extensively to characterize the discirminatory power of classes of GNN. See Douglas (2011); Grohe (2017); Fürer (2017) for graph-theoretic introductions to these algorithms." }, { "heading": "B.1 WEISFEILER-LEHMAN TEST ON VERTICES", "text": "We now present the initial vertex coloring algorithm.\nInput. This algorithm takes as input a discrete graph structure G = (V,E) ∈ Gn with V = [n], E ⊆ V 2 and h ∈ Fn0 features on the vertices.\nInitialization. Each vertex s ∈ V is given a color c0WL(G, h)s = hs corresponding to its features vector.\nRefining the coloring. The colors of the vertices are updated as follows, ct+1WL (G, h)s = Lex ( ctWL(G, h)s, {{ ctWL(G, h)(s̃) : s̃ ∼ s }}) ,\nand the function Lex means that all occuring colors are lexicographically ordered.\nFor each graph G ∈ Gn and each vector of features h ∈ Fn0 , there exists a time T (G, h) from which the sequence of colorings (ctWL(G, h))t≥0 is stationary. More exactly, the colorings are not refined anymore: for any t ≥ T (G, h), s, s′ ∈ V ,\nctWL(G, h)s = c t WL(G, h)s′ ⇐⇒ c T (G,h) WL (G, h)s = c T (G,h) WL (G, h)s′ .\nDenote the resulting coloring cT (G,h)WL (G, h) by simply cWL(G, h). cWL is now a mapping from Gn × Fn0 → Zn for some space of colors Z.\nInvariant tests The proper Weisfeiler-Lehman test is invariant and is defined by, for t ≥ 0 and (G, h) ∈ Gn × Fn0 a graph,\nWLtI(G) = {{ ctWL(G)s : s ∈ V }} WLI(G) = {{cWL(G)s : s ∈ V }}\nEquivariant tests For the vertex coloring algorithm, cWL is already an equiavriant mapping so we define, for t ≥ 0 and (G, h) ∈ Gn × Fn0 a graph,\nWLtE = c t WL WLE = cWL\nB.2 ISOMORPHISM TYPE\nThe initialization of the higher-order variants of the Weisfeiler-Lehman test is slightly more intricate. For this we need to define the isomorphism type of a k-tuple w.r.t. a graph described by a tensor Fn20 .\nA k-tuple (i1, . . . 
, ik) ∈ [n]k in a graph G ∈ Fn 2 0 and a k-tuple (j1, . . . , jk) ∈ [n]k in a graph H ∈ Fn20 are said to have the same isomoprhism type if the mapping iw 7→ jw is a well-defined partial isormophism. Explicitly, this means that,\n• ∀w,w′ ∈ [k], iw = iw′ ⇐⇒ jw = jw′ . • ∀w,w′ ∈ [k], Giw,iw′ = Hjw,jw′ .\nDenote by iso(G)i1,...,ik the isomorphism type of the k-tuple (i1, . . . , ik) ∈ [n]k in a graph G ∈ Fn 2 0 .\nB.3 WEISFEILER-LEHMAN AND FOLKLORE WEISFEILER-LEHMAN TESTS OF ORDER k ≥ 2\nWe now present the folklore version of the Weisfeiler-Lehman test of order k (k-FWL), for k ≥ 2, along with k-WL for clarity. For both, we follow the presentation of Maron et al. (2019a) (except for the equivariant tests).\nInput. These algorithms take as input a graph G ∈ Fn20 which can be seen as a coloring on the pair of nodes.\nInitialization. Each k-tuple s ∈ V k is given a color c0k-WL(G)s = c0k-FWL(G)s corresponding to its isomorphism type.\nk-WL. The k-WL test relies on the following notion of neighborhood, defined by, for any w ∈ [k], and s = (i1, . . . , ik) ∈ V k,\nNw(s) = {(i1, . . . , iw−1, j, iw+1, . . . , ik) : j ∈ V } . (WL)\nThen, the colors of the k-tuples s ∈ V k are refined as follows, ct+1WL (G)s = Lex ( ctk-WL(G)s, (C t 1(s), . . . , C t k(s)) ) .\nwhere, for w ∈ [k], Ctw(s) = {{ ctk-WL(G)s̃ : s̃ ∈ Nw(s) }} .\nFor each graph G ∈ Fn0 , there exists a time T (G) from which the sequence of colorings (ctk-WL(G))t≥0 is stationary. More exactly, the colorings are not refined anymore: for any t ≥ T (G), s, s′ ∈ V k,\nctk-WL(G)s = c t k-WL(G)s′ ⇐⇒ c T (G) k-WL (G)s = c T (G) k-WL (G)s′ .\nDenote the resulting coloring cT (G)k-WL (G) by simply ck-WL(G).\nk-FWL. For k-FWL, the corresponding notion of neighborhood is defined by, for any j ∈ V , and s = (i1, . . . , ik) ∈ V k,\nNFj (s) = {(j, i2, . . . , ik), (i1, j, i3, . . . , ik), . . . , (i1, i2, . . . , ik−1, j)} (FWL)\nThen, the colors of the k-tuples s ∈ V k are refined as follows, ct+1k-WL(G)s = Lex ( ctk-FWL(G)s, {{ Ctj(s) : j ∈ V }}) ,\nwhere, for j ∈ V ,\nCtj(s) = ( ctk-FWL(G)s̃ : s̃ ∈ NFj (s) ) .\nLike k-WL, for each graph G ∈ Fn0 , there exists a time T (G) from which the sequence of colorings (ctk-WL(G))t≥0 is stationary. Similarly, denote the resulting coloring c T (G) k-WL (G) by simply ck-WL(G).\nThe colors ctk-FWL and c t k-FWL at iteration t define a mapping from Fn\n2\n0 to space of colorings of k-tuples, Zn k for some space Z.\nInvariant tests The standard versions of the Weisfeiler-Lehman tests are invariant and can be defined by, for t ≥ 0 and G ∈ Fn20 a graph,\nk-WLtI(G) = {{ ctk-WL(G)s : s ∈ V k }} k-WLI(G) = {{ ck-WL(G)s : s ∈ V k\n}} k-FWLtI(G) = {{ ctk-FWL(G)s : s ∈ V k\n}} k-FWLI(G) = {{ ck-FWL(G)s : s ∈ V k }} .\nEquivariant tests We now introduce the equivariant version of these tests. Many extensions are possible, we chose this one for its simplicity. For t ≥ 0, G ∈ Fn20 a graph, i ∈ V ,\nk-WLtE(G)i = {{ ctk-WL(G)s : s ∈ V k, s1 = i }} k-WLE(G)i = {{ ck-WL(G)s : s ∈ V k, s1 = i\n}} k-FWLtE(G)i = {{ ctk-FWL(G)s : s ∈ V k, s1 = i\n}} k-FWLE(G)i = {{ ck-FWL(G)s : s ∈ V k, s1 = i }} ." }, { "heading": "C SEPARATING POWER OF GNN", "text": "The goal of this section is to prove, Proposition 3. 
We have, for k ≥ 2,\nρ (MGNNI) = ρ (2-WLI) ρ (MGNNE) = ρ (2-WLE) (4) ρ (k-LGNNI) = ρ (k-WLI) ρ (k-LGNNE) ⊆ ρ (k-WLE) (5) ρ (k-FGNNI) = ρ ((k + 1)-WLI) ρ (k-FGNNE) = ρ ((k + 1)-WLE) (6)" }, { "heading": "C.1 MUTLI-LINEAR PERCEPTRONS", "text": "In the following we will use extensively multi-linear perceptrons (MLP) and their universality properties. Yet, for the sake of simplicity, we do not define precisely what we mean by MLP.\nGiven two finite-dimensional feature spaces F0 and F1, we only assume we are given a class of (continuous) MLP from F0 to F1 which is large enough to be dense in C(F0,F1). See for instance Hornik et al. (1989); Cybenko (1989) for precise conditions for MLP to be universal." }, { "heading": "C.2 AUGMENTED SEPARATING POWER", "text": "To factor the proof, we introduce another notion of separating power. Definition 7. For n,m ≥ 1 fixed, let X be a set, F be a some space and F be a set of functions from X to Y = Fn×m. Then, the augmented separating power of F is,\nρaugmn,m (F) = ρ ({(x, i, j) ∈ X × {1, . . . , n} × {1, . . . ,m} 7→ f(x)i,j : f ∈ F}) .\nExplicitly, for x, y ∈ X , i, j ∈ {1, . . . , n}, (x, i, j, y, k, l) ∈ ρaugmn,m (F) ⇐⇒ ∀f ∈ F , f(x)i,j = f(y)k,l .\nNote that when n = m = 1, the augmented separating power is exactly the same as the original separating power, so we identify ρ (.) with ρaugm1,1 (.). We also identify ρ augm n,1 (.) with ρ augm 1,n (.), that we denote by ρaugmn (.).\nFirst, it is easy to see that this notion is more precise than the separating power. Lemma 8. If F and G are set of functions from X to Fn×m,\nρaugmn,m (F) ⊆ ρaugmn,m (G) =⇒ ρ (F) ⊆ ρ (G) , and, in particular,\nρaugmn,m (F) = ρaugmn,m (G) =⇒ ρ (F) = ρ (G) ,\nThe interest in this notion is justified by the following lemma, which shows that this notion behaves well under composition with “reduction layers”. Lemma 9. For n,m ≥ 1 fixed, let X be a compact topological space, Y = Fn×m, F some finitedimensional space, F ⊆ C(X,Y ), and τ : X → Zn×m a function for some space Z. Define F̃ ⊆ C(X,Fn) by,\nF̃ = x ∈ X 7−→ m∑ j=1 h(f(x)1,j), m∑ j=1 h(f(x)2,j), . . . , m∑ i=1 h(f(x)n,j) : h : F→ F MLP, f ∈ F .\nand τ̃ : X → Z̃n by, for x ∈ X , ∀i ∈ {1, . . . , n}, τ̃(x)i = {{τ(x)i,j : 1 ≤ j ≤ m}} .\nThen,\nρaugmn,m (F) ⊆ ρaugmn,m (τ) =⇒ ρaugmn,m ( F̃ ) ⊆ ρaugmn,m (τ̃)\nρaugmn,m (F) ⊃ ρaugmn,m (τ) =⇒ ρaugmn,m ( F̃ ) ⊃ ρaugmn,m (τ̃) .\nNote that, in the statement, we implicitly see F̃ as a set of functions from X to Fn×1 to fit the definition of augmented separating power.\nProof. We show the two inclusions independently.\n(⊆) We first show that, ρaugmn,m (F) ⊆ ρaugmn,m (τ) =⇒ ρaugmn,m ( F̃ ) ⊆ ρaugmn,m (τ̃)\nTake (x, i, y, k) ∈ ρaugmn,m ( F̃ )\n. This means that, for any h : F→ F, f ∈ F , m∑ j=1 h(f(x)i,j) = m∑ j=1 h(f(y)k,j) .\nBy Lem. 31 and the universality of MLP, there exists a permutation σ ∈ Sm such that (f(x)i,σ(1), . . . , f(x)i,σ(m)) = (f(y)k,1, . . . , f(y)k,m). By definition of the augmented separating power, this means that, for any j ∈ {1, . . . ,m}, (x, i, σ(j), y, k, j) ∈ ρaugmn,m (F). Hence, by assumption, for any j ∈ {1, . . . ,m}, (x, i, σ(j), y, k, j) ∈ ρaugmn,m (τ), i.e. τ(x)i,σ(j) = τ(y)k,j . But this exactly means that τ̃(x)i = τ̃(y)k so that (x, i, y, k) ∈ ρaugmn,m (τ̃) as required.\n(⊃) We now show the other inclusion, ρaugmn,m (F) ⊃ ρaugmn,m (τ) =⇒ ρaugmn,m ( F̃ ) ⊃ ρaugmn,m (τ̃) .\nTake (x, i, y, k) ∈ ρaugmn,m (τ̃). By definition of τ̃ , this means that there exists σ ∈ Sm such that τ(x)i,σ(j) = τ(y)k,j so that (x, i, σ(j), y, k, j) ∈ ρaugmn,m (τ) ⊆ ρaugmn,m (F). 
Hence, for any f ∈ F , (f(x)i,σ(1), . . . , f(x)i,σ(m)) = (f(y)k,1, . . . , f(y)k,m), and so, for any h : F→ F,\nm∑ j=1 h(f(x)i,j) = m∑ j=1 h(f(y)k,j) .\nTherefore, (x, i, y, k) ∈ ρaugmn,m ( F̃ ) , which concludes the proof.\nC.3 INITIALIZATION LAYER\nWe use the same initialization layer as Maron et al. (2019a); Chen et al. (2020) and recall it below. The initial graph is a tensor of the form G ∈ Fn20 with F0 = Re+2; the last channel of G:,:,e+1 encodes the adjacency matrix of the graph and the first e channels G:,:,1:e are zero outside the diagonal and Gi,i,1:e ∈ Re is the color of vertex vi ∈ V . We then define Ik : Fn 2 0 → Fn k 1 with F1 = Rk 2×(e+2) as follows:\nIk(G)i,r,s,w = Gir,is,w, w ∈ [e+ 1], Ik(G)i,r,s,e+2 = 1(ir = is),\nfor i ∈ [n]k and r, s ∈ [k]. This linear equivariant layer has the same separating power as the isormorphism type, which is defined in Section B.2.\nLemma 10 ((Maron et al., 2019a, C.1)). For k ≥ 2, p0 ≥ 1, F0 = Rp0 , F1 = Rk 2×(p0+1), there exists Ik : Fn20 → Fn k 1 such that,\nρaugm n,nk−1\n( Ik )\n= ρaugm n,nk−1 (iso) ." }, { "heading": "C.4 KNOWN RESULTS ABOUT THE SEPARATING POWER OF SOME GNN CLASSES", "text": "First, we need to define some classes of GNN from which both the invariant and equivaraint GNN we considered are built. See Section 3 for details about the different layers.\nMGNNemb = {FT ◦ . . . F2 ◦ F1 : Ft : Fnt → Fnt+1 message passing layer, t = 1, . . . , T, T ≥ 1} k-LGNNemb = {FT ◦ . . . F2 ◦ F1 ◦ Ik : Fnt → Fnt+1 linear equivariant layer, t = 1, . . . , T, T ≥ 1} k-FGNNemb = {FT ◦ . . . F2 ◦ F1 ◦ Ik : Fnt → Fnt+1 FGL, t = 1, . . . , T, T ≥ 1} .\nThen, the precise results from the literature can be rephrased as,\nLemma 11 (Xu et al. (2018),Maron et al. (2018),Maron et al. (2019a)). For k ≥ 2,\nρaugmn (MGNNemb) = ρ augm n (cWL) (8)\nρaugm n,nk−1 (k-LGNNemb) ⊆ ρaugmn,nk−1 (ck-WL) (9) ρaugm n,nk−1 (k-FGNNemb) ⊆ ρaugmn,nk−1 (ck-FWL) (10)\n(8) comes from Xu et al. (2018, §A, §B), (9) and (10) from Maron et al. (2019a, §C, §D).\nC.5 BOUNDING THE SEPARATING POWER OF k-FGNN\nWe complete the results of the literature with a bound on the separating power of k-FGNN. Note that the particular case of k = 2 is already proven in Geerts (2020a).\nLemma 12. For any k ≥ 2,\nρaugm n,nk−1 (k-FGNNemb) ⊃ ρaugmn,nk−1 (ck-FWL) ,\nso that ρaugm n,nk−1 (k-FGNNemb) = ρ augm n,nk−1 (ck-FWL) .\nProof. Define,\nk-FGNNTemb = {FT ◦ . . . F2 ◦ F1 ◦ Ik : Fnt → Fnt+1 FGL, t = 1, . . . , T} ,\nthe set of functions defined by exactly T FGL layers. We show by induction that, for any T ≥ 0,\nρaugm n,nk−1 ( k-FGNNTemb ) ⊃ ρaugm n,nk−1 ( cTk-FWL ) .\nFor T = 0, this is immediate by the definition of Ik in Section C.3.\nAssume now that this inclusion holds at T−1 ≥ 0. We show that it also holds at T . TakeG,G′ ∈ Fnk0 and s, s′ ∈ [n]k such that,\ncTk-FWL(G)s = c T k-FWL(G ′)s′ .\nWe need to show that, for any f ∈ k-FGNNTemb,\nf(G)s = f(G ′)s′ .\nBut, by definition of the update rule k-FWL, the equality of the colors of s and s′ above implies that,\ncT−1k-FWL(G)s = c T−1 k-FWL(G ′)s′ , (11)\nand that there exists σ ∈ Sn such that, for any j ∈ [n],( cT−1k-FWL(G)s̃ : s̃ ∈ N F j (s) ) = ( cT−1k-FWL(G ′)s̃ : s̃ ∈ NFσ(j)(s ′) ) .\nLet s = (i1, . . . , ik) and s′ = (j1, . . . , jk). Then this implies that, for any w ∈ [k], j ∈ [n],\ncT−1k-FWL(G)i1,...,iw−1,j,iw+1,...,ik = c T−1 k-FWL(G ′)j1,...,jw−1,σ(j),jw+1,...,jk . (12)\nWe now use the induction hypothesis, i.e. that,\nρaugm n,nk−1 ( k-FGNNT−1emb ) ⊃ ρaugm n,nk−1 ( cT−1k-FWL ) .\nTake any fT−1 ∈ k-FGNNT−1emb . 
By (11),\nfT−1(G)s = fT−1(G ′)s′ .\nBy (12), for any w ∈ [k], j ∈ [n],\nfT−1(G)i1,...,iw−1,j,iw+1,...,ik = f T−1(G′)j1,...,jw−1,σ(j),jw+1,...,jk .\nBy the definition of FGL Section 3.3, for any FT : Fn k T → Fn k T+1 FGL, FT ◦ fT−1(G)s = FT ◦ fT−1(G ′)s′ . Therefore, for any f ∈ k-FGNNTemb,\nf(G)s = f(G ′)s′ ,\nwhich concludes the proof." }, { "heading": "C.6 CONCLUSION", "text": "Proposition 13. We have, for k ≥ 2,\nρ (MGNNI) = ρ (2-WLI) ρ (MGNNE) = ρ (2-WLE) (13) ρ (2-LGNNI) = ρ (2-WLI) (14) ρ (k-LGNNI) ⊆ ρ (k-WLI) ρ (k-LGNNE) ⊆ ρ (k-WLE) (15) ρ (k-FGNNI) = ρ (k-FWLI) ρ (k-FGNNE) = ρ (k-FWLE) (16)\nProof. Most of the statements come from the literature or are direct consequences of the lemmas above.\n• Proof of (13). The invariant case is proven in Xu et al. (2018, Lem. 2, Thm. 3). The equivariant case comes from Lem. 11, Lem. 8, the fact that the layers mE ◦ (Id +λS1) does not change the separating power and recalling that simply WLE = cWL.\n• (14) is the exact result Chen et al. (2020, Thm. 6).\n• Proof of (15). The invariant case is exactly Maron et al. (2019a, Thm. 1). The equivariant case comes from Lem. 11, Lem. 9, Lem. 8 and the fact that the layer mE does not change the separating power.\n• Proof of (16). The direct inclusion of the invariant case corresponds to Maron et al. (2019a, Thm. 2). The other cases are a consequence of Lem. 11, Lem. 9, Lem. 8 and the fact that the layers mI and mE does not change the separating power." }, { "heading": "D STONE-WEIERSTRASS THEOREM WITH SYMMETRIES", "text": "This section presents our extension of the Stone-Weierstrass theorem dealing with functions with symmetries. The scope of this section is not restricted to graphs or even tensors and we will deal with general spaces and general symmetries. To illustrate it, we will present applications for the PointNet architecture Qi et al. (2017). Our approximation results for GNNs (Theorems Thm. 5 and Thm. 6) are then obtained from these theoretical results applied to tensors and the symmetric group in Section D.9" }, { "heading": "D.1 GENERAL NOTATIONS", "text": "As explained above, we are dealing in this section with a much larger scope than graphs and permutations. We first need to extend the notations introduced above. The notations introduced below will make this section self-contained.\nIf X is some topological space, and F ⊆ X , denote by F its closure. If X is a topological space and Y = Rp some finite-dimensional space, denote by C(X,Y ) the set of continuous functions from X to Y .\nMoreover, if X is compact, we endow C(X,Y ) with the topology of uniform convergence, which is defined by the norm, f 7→ supx∈X ‖f(x)‖ for some norm ‖.‖ on Y . If G is a finite group acting on some topological space X , we say that G acts continuously on X if, for all g ∈ G, x 7→ g · x is continuous. If G is a finite group acting on some compact set X and some topological space Y , we define the sets of equivariant and invariant continuous functions by,\nCE(X,Y ) = {f ∈ C(X,Y ) : ∀x ∈ X, ∀g ∈ G, f(g · x) = g · f(x)} CI(X,Y ) = {f ∈ C(X,Y ) : ∀x ∈ X, ∀g ∈ G, f(g · x) = f(x)}\nNote that these definitions extend Definition 1 to a general group.\nIf Y = Rp, we denote the coordinate-wise multiplication, or Hadamard product of y, y′ ∈ Y simply by yy′ = (y1y′1, . . . , ypy ′ p) ∈ Y . 
We say that a subset A ⊆ R^p is a subalgebra of R^p if it is both a linear space and stable by multiplication.\nThis product in turn defines a product on C(X,Y) with Y = R^p by, for f, g ∈ C(X,Y), fg : x 7→ f(x)g(x).\nIn addition, we also extend the scalar-vector product of Y = R^p to functions: if g ∈ C(X,R) and f ∈ C(X,Y), their product gf is the function gf : x 7→ g(x)f(x). Given a set of scalar functions S ⊆ C(X,R) and a set of vector-valued functions F ⊆ C(X,Y), the set of products of functions of these two sets will be denoted by,\nS · F = {gf : g ∈ S, f ∈ F} .\nMoreover, we denote by 1 the continuous function from some X to R^p defined by x 7→ (1, . . . , 1). In particular, if f is a function from X to R, f1 denotes the function x 7→ (f(x), . . . , f(x)), which goes from X to Y = R^p. Finally, we say that F ⊆ C(X,Y) with Y being some R^p is a subalgebra if it is a linear space which is also stable by multiplication." }, { "heading": "D.2 SEPARATING POWER", "text": "We recall the definition of separating power that we introduced above: Definition 14. Let F be a set of functions f defined on a set X, where each f takes its values in some Y_f. The equivalence relation ρ(F) defined by F on X is: for any x, x′ ∈ X,\n(x, x′) ∈ ρ(F) ⇐⇒ ∀f ∈ F, f(x) = f(x′) .\nFor a function f, we write ρ(f) for ρ({f}). Separating power is stable by closure: Lemma 15. Let X be a compact topological space, Y be some finite-dimensional space and F ⊆ C(X,Y). Then,\nρ(F) = ρ(F̄) .\nProof. As F ⊆ F̄, F̄ is more separating than F, i.e. ρ(F̄) ⊆ ρ(F).\nConversely, take (x, y) ∉ ρ(F̄). By definition, there exists h ∈ F̄ such that h(x) ≠ h(y), so that if ε = ‖h(x) − h(y)‖, then ε > 0 (for some norm ‖.‖ on Y). As h ∈ F̄, there is some f ∈ F such that sup_X ‖h − f‖ ≤ ε/3. Therefore, by the triangle inequality,\nε = ‖h(x) − h(y)‖ ≤ ‖f(x) − h(x)‖ + ‖f(x) − f(y)‖ + ‖f(y) − h(y)‖ ≤ 2ε/3 + ‖f(x) − f(y)‖ .\nIt follows that ‖f(x) − f(y)‖ ≥ ε/3 > 0, so that f(x) ≠ f(y) and (x, y) ∉ ρ(F)." }, { "heading": "D.3 APPROXIMATION THEOREMS FOR REAL-VALUED FUNCTIONS", "text": "We start by recalling the Stone-Weierstrass theorem, see Rudin (1991, Thm. 5.7).\nTheorem 16 (Stone-Weierstrass). Let X be a compact space, and F be a subalgebra of C(X,R), the space of real-valued continuous functions of X, which contains the constant function 1. If F separates points, i.e. ρ(F) = {(x, x) : x ∈ X}, then F is dense in C(X,R).\nWe now prove an extension of this classical result due to Timofte (2005), allowing us to deal with much smaller F by dropping the requirement that F separates points. Corollary 17. Let X be a compact space, and F be a subalgebra of C(X,R), the space of real-valued continuous functions of X, which contains the constant function 1. Then,\nF̄ = {f ∈ C(X,R) : ρ(F) ⊆ ρ(f)} .\nNote that if F separates points, we get back the classical result, as every function satisfies {(x, x) : x ∈ X} ⊆ ρ(f). Example 18. The invariant version of PointNet is able to learn functions of the form ∑_i f(x_i) for f ∈ C(R^p,R). We can apply Corollary 17 to this setting. Consider the case where X is a compact subset of R^p and F = {x 7→ g(∑_{i=1}^n f(x_i)) : f ∈ C(X,R^h), g ∈ C(R^h,R)}. F is a subalgebra of C(X,R) (indeed of C_I(X,R)) which contains the constant function 1. Then, it is easy to see that ρ(F) = {(x, σ ⋆ x) : σ ∈ S_n}, where σ ⋆ x is defined by (σ ⋆ x)_{σ(i)} = x_i for all i (see Lem. 31 for a formal statement).
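As a quick numerical illustration of this separating-power claim (our own sketch, not from the paper; the random feature map tanh(W·) and all names are ours), sum pooling is blind exactly to permutations of the points:

import numpy as np

rng = np.random.default_rng(1)
n, p, h = 4, 3, 16
x = rng.normal(size=(n, p))      # a point cloud x = (x_1, ..., x_n)
perm = rng.permutation(n)

W = rng.normal(size=(p, h))      # a random feature map standing in for f
def pooled(z):
    # z |-> sum_i f(z_i) with f = tanh(W^T .): a PointNet-style readout
    return np.tanh(z @ W).sum(axis=0)

assert np.allclose(pooled(x), pooled(x[perm]))  # invariance: (x, sigma*x) in rho(F)
y = rng.normal(size=(n, p))                     # a generic second cloud
assert not np.allclose(pooled(x), pooled(y))    # separation (holds generically)

Now note that for a function f ∈ C(X,R), the condition {(x, σ ⋆ x) : σ ∈ S_n} ⊆ ρ(f) is equivalent to f ∈ C_I(X,R).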
So that Corollary 17 implies that F = CI(X,R) which means that PointNet is universal for approximating invariant functions. This was already proved in Qi et al. (2017) .\nWe now provide a proof of Corollary 17 for completeness.\nProof. The first inclusion F ⊆ {f ∈ C(X,R) : ρ (F) ⊆ ρ (f)} follows from the same argument as the one given below Theorem 4 so we focus on the other one.\nFor every x ∈ X , let xF denote its ρ (F)-class. The quotient set and the canonical surjection are: XF = X/ρ (F) = {xF , x ∈ X} and πF : X → XF , πF (x) = xF . A function g : X → R factorizes as g = ĝ ◦ πF for some ĝ : XF → R if and only if ρ (F) ⊆ ρ (g). In this case ĝ is unique, since πF is a surjection. In particular. every f ∈ F factorizes uniquely as f = f̂ ◦ πF , f̂ : XF → R, and F̂ = {f̂ , f ∈ F} clearly separates points on XF . We refer to Munkres (2000, §22) for the properties of the quotient topology. In particular, by the properties of the quotient topology on XF , F̂ is a subalgebra of C(XF ,R) and XF is compact. Hence, we can apply Theorem 16 to F̂ and F̂ is dense in C(XF ,R).\nNow take f ∈ C(X,R) with ρ (F) ⊆ ρ (f) and we show that f ∈ F . Again, ρ (F) ⊆ ρ (f) implies that f = f̂ ◦ πF . Let > 0. By density of F̂ , there is some ĥ ∈ F̂ such that supxF |ĥ(xF )) − f̂(xF )| ≤ . But, by construction of F̂ , there exits h ∈ F such that ĥ ◦ πF = h. Thus,\nsup x∈X |h(x)− f(x)| = sup x∈X |ĥ(πF (x))− f̂(πF (x))| = sup xF∈XF |ĥ(xF ))− f̂(xF )| ≤ .\nAs this holds for any > 0, we have proven that f ∈ F ." }, { "heading": "D.4 THE EQUIVARIANT APPROXIMATION THEOREM", "text": "We first need to extend Corollary 17 to vector-valued functions. For this, we need to have a vectorvalued version of the Stone-Weierstrass theorem and as shown by the example below additional assumptions have to be made. Example 19. We consider now the equivariant version of PointNet corresponding to the particular case where X is a compact subset of (Rp)n, Y = Rn and F = {x 7→ (f(x1), . . . , f(xn)), f ∈ C(R,R)}. Then clearly F is a subalgebra of CE(X,Y ) containing the constant function 1 and ρ (F) = {(x, x), x ∈ X}. Hence if Corollary 17 would be true with vector-valued functions instead of real-valued functions, we would have that F is dense in C(X,Y ). But this can clearly not be true as F ⊆ CE(X,Y ) which is clearly not dense in C(X,Y ).\nWe now present an extension of Corollary 17 also due to Timofte (2005): Proposition 20. Let X be a compact space, Y = Rp for some p ≥ 1. Let F ⊆ C(X,Y ). If there exists a nonempty subset S ⊆ C(X,R) such that:\nS · F ⊆ F and, ρ (S) ⊆ ρ (F) . (17)\nThen we have\nF = { f ∈ C(X,Y ), ρ (F) ⊆ ρ (f) , f(x) ∈ F(x) } , (18)\nwhere F(x) = {f(x), f ∈ F}. Moreover in (18), we can replace ρ (F) by ρ (S).\nNote that in the particular case Y = R and if F is a subalgebra of C(X,R), then we can take S = F in (17) and if the constant function 1 is in F , then F(x) = R, so that we recover Corollary 17. Now consider the case where F is a subalgebra of C(X,Rp). We need to find a set S ∈ C(X,R) satisfying (17) i.e. with a better separating power than F but containing real-valued functions such that sf ∈ F for all s ∈ S and f ∈ F . In the sequel, we will consider the set Fscal = {f ∈ C(X,R) : f1 ∈ F}. We clearly have Fscal · F ⊆ F since F is a subalgebra. Hence, in this setting, Prop. 20 can be rewritten as follows: Corollary 21. Let X be a compact space, Y = Rp for some p, G be a finite group acting (continuously) on X and F ⊆ CI(X,Y ) a (non-empty) set of invariant functions associated to G." 
}, { "heading": "Consider the following assumptions,", "text": "1. F is a sub-algebra of C(X,Y ) and the constant function 1 is in F .\n2. The set of functions Fscal ⊆ C(X,R) defined by,\nFscal = {f ∈ C(X,R) : f1 ∈ F}\nsatisfy, ρ (Fscal) ⊆ ρ (F) .\n3. For any x ∈ X , there exists f ∈ F such that f(x) has pairwise distinct coordinates, i.e., for any indices i, j ∈ {1, . . . , p} with i 6= j, f(x)i 6= f(x)j .\nThen the closure of F (for the topology of uniform convergence) is,\nF = {f ∈ CI(X,Y ) : ρ (F) ⊆ ρ (f)} .\nNote that Assumptions 1 and 2 ensures that (17) is valid, while Assumption 3 ensures thatF(x) = Rp. Unfortunately, in the equivariant case, the condition ρ (Fscal) ⊆ ρ (F) is too strong and we now explain how we will relax it.\nFor the sake of simplicity, we consider here the particular setting adapted to graphs: let n ≥ 1 be a fixed number (corresponding to the number of nodes), X be a compact set of graphs in Rn2 and Y = Fn with F = Rp for some p ≥ 1. We define the action of the symmetric group Sn on X by (σ ? x)σ(i),σ(j) = xi.j and on Y by (σ ? y)σ(i) = yi ∈ Rp. Hence the set of continuous equivariant functions CE(X,Y ) agrees with Definition 1.\nNow consider the case where F ⊆ CE(X,Y ) is a subalgebra of equivariant functions. Then, f ∈ Fscal needs to be invariant in order for f1 to be equivariant and hence in F . As a result, we see that Fscal will not separate points of X in the same orbit, i.e. x and σ ? x. But these points will typically be separated by F , since for any f ∈ F , we have f(σ ? x) = σ ? f(x) which is not equal to f(x) unless f is invariant.\nWe see that we need somehow to require a weaker separating power for F . More formally, two isomorphic graphs will have permuted outputs through an equivariant function, but should not be considered as separated. Let Orb(x) = {σ ? x, σ ∈ Sn} and Orb(y) = {σ ? y, σ ∈ Sn}. For any equivariant function f ∈ CE(X,Y ), for any z ∈ Orb(x), we have f(z) ∈ Orb(f(x)). Then let π : Y → Y/Sn be the canonical projection π(y) = Orb(y). We define\n(x, x′) ∈ ρ (π ◦ F) ⇔ ∀f ∈ F , Orb(f(x)) = Orb(f(x′)) ⇔ ∀f ∈ F ,∃σ ∈ Sn, f(σ ? x) = f(x′).\nIn particular, we see that if x′ ∈ Orb(x) then (x, x′) ∈ ρ (π ◦ F) for any F ∈ CE(X,Y ). Moreover, two graphs x and x′ are ρ (π ◦ F)-distinct if there exists a function f ∈ F such that ∀σ, f(σ ? x) 6= f(x′), i.e. the function f discriminates Orb(x) from Orb(x′) in the sense that for any z ∈ Orb(x) and z′ ∈ Orb(x′), we have f(z) 6= f(z′). To obtain an equivalent of Proposition 20 with CE(X,Y ) replacing C(X,Y ), we are able to relax assumption (17) to ρ (Fscal) ⊆ ρ (π ◦ F). Our main general result in this direction is the following theorem (proved in Section D.7) which might be of independent interest:\nTheorem 22. Let X be a compact space, Y = Rp for some p, G be a finite group acting (continuously) on X and Y and F ⊆ CE(X,Y ) a (non-empty) set of equivariant functions. Denote by π : Y −→ Y/G the canonical projection on the quotient space Y/G. Consider the following assumptions,\n1. F is a sub-algebra of C(X,Y ) and the constant function 1 is in F .\n2. The set of functions Fscal ⊆ C(X,R) defined by,\nFscal = {f ∈ C(X,R) : f1 ∈ F}\nsatisfy, ρ (Fscal) ⊆ ρ (π ◦ F) .\nThen the closure of F (for the topology of uniform convergence) is,\nF = {f ∈ CE(X,Y ) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x)} ,\nwhere F(x) = {f(x), f ∈ F}. Moreover, if I(x) = {(i, j) ∈ [p]2 : ∀y ∈ F(x), yi = yj}, then we have:\nF(x) = {y ∈ Rp : ∀(i, j) ∈ I(x), yi = yj} .\nExample 23. 
We now demonstrate how Theorem 22 can be used to recover the universality results in Segol & Lipman (2020). In this paper, the authors study equivariant neural network architectures working with unordered sets, corresponding in our case to X = Y = Rn and the group being the symmetric group Sn. They show that the PointNet architecture cannot approximate any (continuous) equivariant function and that adding a single so-called transmission layer is enough to make this architecture universal.\nIndeed, PointNet can only learn maps of the form x ∈ Rn 7→ (f(x1) . . . f(xn)), which are not universal in the class of equivariant functions, as shown by Segol & Lipman (2020, Lem. 3). Now, their transmission layer is a map of the form x ∈ Rn 7→ (1Tx)1. Therefore, in PointNetST, adding such a layer precisely adds a large class of functions to F = {(f(x1, ∑ i g(xi)), . . . , f(xn, ∑ i g(xi))), f ∈ C(R× Rh,R), g ∈ C(R,Rh), h ≥ 1}. F is still an algebra and as shown in Example 19, we have ρ (F) = {(x, x), x ∈ X}. Moreover, we have Fscal = CI(X,R) by Lem. 33 in particular, we get ρ (Fscal) = {(x, σ ? x), x ∈ X}, so that we obviously have ρ (Fscal) ⊆ ρ (π ◦ F). In summary, Theorem 22 implies the universality of PointNetST in CE(X,Y )." }, { "heading": "D.5 A PRELIMINARY VERSION OF THE EQUIVARIANT APPROXIMATION THEOREM", "text": "We start by proving a version of Theorem 22 with a slightly weaker condition:\nProposition 24. Let X be a compact space, Y = Rp for some p, G be a finite group acting (continuously) on X and Y and F ⊆ CE(X,Y ) a (non-empty) set of equivariant functions." }, { "heading": "Consider the following assumptions,", "text": "1. F is a subalgebra CE(X,Y ).\n2. The set of real-valued functions Fscal ⊆ C(X,R) defined by,\nFscal = {f ∈ C(X,R) : f1 ∈ F}\nsatisfies,\nρ (Fscal) ⊆ {(x, x′) ∈ X ×X : ∃g ∈ G, (g · x, x′) ∈ ρ (F)} .\nThen the closure of F (for the topology of uniform convergence) is, F = { f ∈ CE(X,Y ) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x) } , (19)\nwhere F(x) = {f(x), f ∈ F}.\nThe proof of this theorem relies on two main ingredients. First, following the elegant idea of Maehara & Hoang (2019), we augment the input space to transform the vector-valued equivariant functions into scalar maps. Second, we apply the fine-grained approximation result Cor. 17.\nProof. As uniform convergence implies point-wise convergence, the first inclusion is immediate, F ⊆ { f ∈ CE(X,Y ) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x) } .\nThe rest of the proof is devoted to the other direction.\nFor convenience, denote by Φ the family of linear forms associated to the canonical basis of Rp, i.e.,\nΦ = {y 7→ yi : 1 ≤ i ≤ p} ⊆ C(Y,R) .\nDefine our augmented input space as X̃ = X × Φ. As Φ is finite and X is compact, X̃ is still a compact space. We now transform F , a class of equivariant functions from X to Y , into F̃ a class of maps from X̃ to R. Define\nF̃ = {(x, ϕ) 7→ ϕ(f(x)) : f ∈ F} .\nWe check that F̃ is indeed a subset of C(X̃,R). Indeed, as Φ is finite, it is equipped with the discrete topology. Hence, each singleton {ϕ} for ϕ ∈ Φ is open in Φ and it suffices to check the continuity in the first variable with ϕ fixed. But, if f ∈ F , x 7→ ϕ(f(x)) is continuous as a composition of continuous maps.\nWe can now apply Cor. 17 to F̃ ⊆ C(X̃,R). Therefore, the closure of F̃ in C(X̃,R) is, F̃ = { v ∈ C(X̃,R) : ρ ( F̃ ) ⊆ ρ (v) , ∀(x, ϕ) ∈ X̃, v(x, ϕ) ∈ F̃(x, ϕ) } .\nWe now show the equality of (19). Take h in the right-hand side of (19), i.e. h ∈ CE(X,Y ) such that ρ (F) ⊆ ρ (h) and h(x) ∈ F(x) for all x ∈ X . 
We show that h̃, defined by h̃ : (x, ϕ) 7→ ϕ(h(x)), belongs to F̃ using the result above.\n• As h is continuous, by the same argument as above, (x, ϕ) 7→ ϕ(h(x)) is continuous on X̃ .\n• We check that ρ ( F̃ ) ⊆ ρ ( h̃ ) .\nTake (x, ϕ), (y, ψ) ∈ X̃ such that, for all f ∈ F ,\nϕ(f(x)) = ψ(f(y)) , (20)\nand we aim at showing that ϕ(h(x)) = ψ(h(y)).\nTo gain more information from (20), we apply it to functions of the form f1 with f ∈ Fscal. By definition of Φ, this translates to f(x) = f(y), for any f ∈ Fscal. Therefore (x, y) ∈ ρ (Fscal) and so there exists g ∈ G such that (g · x, y) ∈ ρ (F). Plugging this into (20) and using the equivariance of f ,\n∀f ∈ F , ϕ(f(x)) = ψ(g · f(x)) .\nAs G acts continuously on Y , both ϕ and z 7→ ψ(g ·z) are continuous and, as a consequence of the equality above, coincide on F(x). But we assumed that h(x) ∈ F(x) and therefore the equality also holds for h, i.e. ϕ(h(x)) = ψ(g · h(x)). Finally, recalls that, by assumption, ρ (F) ⊆ ρ (h). Therefore (g · x, y) ∈ ρ (F) implies that h(g · x) = h(y) and, combined with the result above, ϕ(h(x)) = ψ(h(y)).\n• We verify that, for x ∈ X , ϕ ∈ Φ, ϕ(h(x)) belongs to F̃(x). Indeed, recall that h(x) ∈ F(x). Therefore, as ϕ is continuous, ϕ(h(x)) is in ϕ(F(x)) which is included in F̃(x).\nThis shows that h̃ : (x, ϕ) 7→ ϕ(h(x)) is in F̃ . Consequently, for any > 0, there exists f ∈ F such that, ∀x ∈ X, ∀ϕ ∈ Φ, |ϕ(h(x))− ϕ(f(x))| ≤ . If Y = Rp is endowed with the infinity norms on coordinates, by definition of Φ, this means that,\n∀x ∈ X, ‖h(x)− f(x)‖ ≤ .\nRemark 25. In the particular case of G ⊆ Sn being a group of permutations acting on Rp by, for g ∈ G, x ∈ Rp,\n∀i ∈ {1, . . . , p}, (g · x)i = xg−1(i) ,\nthe functions of F̃ are indeed invariant, as shown by Maehara & Hoang (2019). For this, a left action on Φ is defined by, for g ∈ G, ϕ ∈ Φ,\n∀x ∈ Rp, (g · ϕ)(x) = ϕ(g−1 · x) .\nIn other words, the action of g on the linear form associated to the ith coordinate yields the linear form associated to the g(i)th coordinate. One can now check that the functions F̃ are invariant.\nD.6 CHARACTERIZING THE SUBALGEBRAS OF Rp\nBefore moving to our general result, we need to study the structure of the subalgebras of Rp. For this, we will use the following simple lemma.\nIn the following lemma, R[X1, . . . , Xp] denotes the set of multivariate polynomials with p indeterminates (and real coefficients).\nLemma 26. Let C ⊆ Rp be a finite subset of Rp. There exists P ∈ R[X1, . . . , Xp] such that P|C , the restriction of P to C, is an injective map.\nProof. Let x1, . . . , xm ∈ Rp be distinct vectors such that {x1, . . . , xm} = C. Similarly to Lagrange polynomials, define,\nP (X1, . . . , Xp) = m∑ i=1 i ∏ j 6=i ∑p l=1(Xl − x j l ) 2 ‖xi − xj‖22 , (21)\nwhich is a well-defined multivariate polynomial. Note that, seeing X = (X1, . . . , Xp) as a vector in Rp, it can also be written as,\nP (X1, . . . , Xp) = m∑ i=1 i ∏ j 6=i ‖X − xj‖22 ‖xi − xj‖22 . (22)\nBy construction, P (xi) = i and therefore P is an injective map on C\nLemma 27. For a subalgebra A of Rp, we define: J = {j ∈ {1, . . . , p} : ∀x ∈ A, xj = 0} I = {(i, j) ∈ (JC)2 : ∀x ∈ A, xi = xj} .\nThen, we have : A = {x ∈ Rp : ∀(i, j) ∈ I, xi = xj , ∀j ∈ J, xj = 0} .\nProof. Before proving the general case, we focus on the situation where A will turn out to be the whole Rp.\nAssume that the two following conditions holds,\n∀i 6= j, ∃x ∈ A, xi 6= xj (23) ∀i, ∃x ∈ A, xi 6= 0 (24)\nOur goal is to show that, under these additional assumptions, A = Rp. 
We divide the proof in three parts, first we show that 1 ∈ A using (24), giving us that A is closed under polynomials, then that there is x ∈ A with pairwise distinct coordinates thanks to (23) and finally that this implies that A is the whole space.\nNote that if p = 1, (24) and the linear space property of A immediately give the result.\n• Here we prove that (24) implies that 1 ∈ A.\n– First, we construct by induction x ∈ A such that xi 6= 0 for any index i. More precisely, our induction hypothesis at step j ∈ {1, . . . , p} is,\n∃x ∈ A, ∀1 ≤ i ≤ j, xi 6= 0 . (25) By (24), this holds for j = 1. Now, assume that it holds at j−1 for some p ≥ j ≥ 2 and take x ∈ A such that xi 6= 0 for any 1 ≤ i ≤ j − 1 and y ∈ A such that jj 6= 0 by (24). By definition of x, the set,\n{λ ∈ R : ∃1 ≤ i ≤ j − 1, λxi + yi = 0} , is finite. Thus, there exists λ ∈ R such that λxi + yi 6= 0 for any 1 ≤ i ≤ j− 1 and for i = j too as xj = 0 and yj 6= 0. As A is a subalgebra, λx+ y ∈ A and this concludes the induction step.\n– Let x ∈ A be the vector constructed, i.e. such that xi 6= 0 for every index i. We prove that 1 ∈ A by constructing 1 from x. Indeed, using Lagrange interpolation, take P ∈ R[X] such that P (xi) = 1xi for every i. (Note that this noes not matter if some xi are equal, as the 1xi would also be the same.) Finally, as A is a subalgebra, xP (x), which is to be understood coordinate-wise, is in A and so is 1 = xP (x).\n• We show that the previous point and (23) imply that there exists a vector in A with pairwise distinct coordinates, i.e. that there exists x ∈ A such that, for any i 6= j, xi 6= xj . Using (23), for any i < j, there exists xij ∈ A such that xiji 6= x ij j . We wish to combine the family\n(xij)i<j into a single vector.\nFor this we use Lem. 26. Seeing each collection (xijk )i<j as vector of Rp(p−1)/2, we define C = {(xijk )i<j : 1 ≤ k ≤ p}, which is a finite subset (of cardinal p) of Rp(p−1)/2. By Lem. 26, there exists P ∈ R[X1, . . . , Xp(p−1)/2] such that P is an injective map on C.\nAs A is a subalgebra and 1 ∈ A, the vector,\nP ( (xij)i<j ) = P ( (xij1 )i<j ) ...\nP ( (xijp )i<j\n) ,\nis in A too. We now check that this vector has pairwise distinct coordinates. Let l < k, then xlkl 6= xlkk and therefore (xijl )i<j 6= (x ij k )i<j . By construction of P , P ( (xijl )i<j ) 6= P ( (xijk )i<j ) ,\ni.e. P ( (xij)i<j ) l 6= P ( (xij)i<j ) k . Thus, P ( (xij)i<j ) ∈ A has pairwise distinct coordinates as required.\n• Finally, we show that A = Rp. This is a direct consequence of the point above and of Lagrange interpolation. Indeed, take any y ∈ Rp and denote by x ∈ A the vector that we just constructed with pairwise distinct coordinates. Therefore, by Lagrange interpolation, there exists P ∈ R[X] such that P (xi) = yi for every i ∈ {1, . . . , p}. As A is a sublagebra and 1 ∈ A, P (x) ∈ A and hence y ∈ A.\nFinally, we return to the general case. We introduce the set of indexes of I and J which appear in the result and use them to reduce the situation to the previous case. Define,\nJ = {j ∈ {1, . . . , p} : ∀x ∈ A, xj = 0} I = {(i, j) ∈ (JC)2 : ∀x ∈ A, xi = xj} .\nand denote by A′ = {x ∈ Rp : ∀(i, j) ∈ I, xi = xj , ∀j ∈ J, xj = 0}, which is also a subalgebra. By definition, it holds that A ⊆ A′. By construction, I is an equivalence relation on JC and denote by JC/I its equivalence classes. Let p′ = |JC/I| and choose i1, . . . , ip′ representatives of the equivalence classes. Consider the map,\nϕ :\n{ Rp −→ Rp ′\nx 7−→ (xi1 , . . . 
, xip′ ) .\nϕ is an algebra homomoprhism so that ϕ(A) is a subalgebra of Rp′ . But, by construction of I and J , ϕ(A) satisfies (23) and (24). Whence, by our result in this particular case, ϕ(A) = Rp′ .\nHowever, A ⊆ A′ implies that ϕ(A) ⊆ ϕ(A′) ⊆ Rp′ . Therefore, Rp′ = ϕ(A) = ϕ(A′). But, by construction of ϕ, ϕ is actually an injective map on A′. Therefore, we deduce from ϕ(A) = ϕ(A′) that A = A′, concluding the proof." }, { "heading": "D.7 PROOF OF THE MAIN EQUIVARIANT APPROXIMATION THEOREM", "text": "We can now fully exploit the structure of subalgebra of F thanks to the results above, and in particular relax the second assumption of Prop. 24 to give our main theorem 22. We first prove the following lemma. Lemma 28. Under the assumptions of Thm. 22, for any H ⊆ G subgroup, for any x, y ∈ X ,\n∀f ∈ F , ∃g ∈ H, f(g · x) = f(y) ⇐⇒ ∃g ∈ H, ∀f ∈ F , f(g · x) = f(y) ." }, { "heading": "In particular,", "text": "(x, y) ∈ ρ (π ◦ F) ⇐⇒ ∃g ∈ G, (g · x, y) ∈ ρ (F) .\nProof. The reverse implication is immediate so we focus on the direct one and prove its contraposition, i.e., ∀g ∈ H, ∃f ∈ F , f(g · x) 6= f(y) =⇒ ∃f ∈ F , ∀g ∈ H, f(g · x) 6= f(y) . To prove this, we take advantage of F being a subalgebra and H being finite. Let H = {g1, . . . , gh} and define,\nA = {(f(g1 · x), . . . , f(gh · x), f(y)) : f ∈ F} .\nAs F is a subalgebra of C(X,Rp), A is a subalgebra of Rp′ with p′ = p(h+ 1). By Lem. 27,\nA = {z ∈ Rp ′\n: ∀(i, j) ∈ I, zi = zj , ∀j ∈ J, zj = 0} . where I and J can be chosen to be,1\nJ = {j ∈ {1, . . . , p} : ∀z ∈ A, zj = 0} I = {(i, j) ∈ {1, . . . , p}2 : ∀z ∈ A, zi = zj} .\nAs I is an equivalence relation, an element of A is uniquely defined by its coordinates on equivalence classes of I . Therefore, one can choose z ∈ A such that zi = zj ⇐⇒ (i, j) ∈ I . By definition of A, there exists f∗ ∈ F such that (f∗(g1 · x), . . . , f∗(gh · x), f∗(y)) = z. We now check that f∗ is indeed appropriate. Take l ∈ {1, . . . , h}, we want to show that f∗(gl · x) 6= f∗(y). By assumption, there exists f ∈ F such that f(gl · x) 6= f(y), i.e. there exists i ∈ {1, . . . , p} such that f(gl · x)i 6= f(y)i. Therefore, ((l − 1)p+ i, hp+ i) cannot be in I so that z(l−1)p+i 6= zhp+1, i.e. f∗(gl · x)i 6= f∗(y)i.\nWe now prove our main abstract theorem. Theorem 22. Let X be a compact space, Y = Rp for some p, G be a finite group acting (continuously) on X and Y and F ⊆ CE(X,Y ) a (non-empty) set of equivariant functions. Denote by π : Y −→ Y/G the canonical projection on the quotient space Y/G. Consider the following assumptions,\n1. F is a sub-algebra of C(X,Y ) and the constant function 1 is in F .\n2. The set of functions Fscal ⊆ C(X,R) defined by, Fscal = {f ∈ C(X,R) : f1 ∈ F}\nsatisfy, ρ (Fscal) ⊆ ρ (π ◦ F) .\nThen the closure of F (for the topology of uniform convergence) is, F = {f ∈ CE(X,Y ) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x)} ,\nwhere F(x) = {f(x), f ∈ F}. Moreover, if I(x) = {(i, j) ∈ [p]2 : ∀y ∈ F(x), yi = yj}, then we have:\nF(x) = {y ∈ Rp : ∀(i, j) ∈ I(x), yi = yj} .\nProof of Thm. 22. By Lem. 28, the second assumption of Prop. 24 is also satisfied. To get the conclusion of Thm. 22, note that F(x) is now a linear subspace of a finite-dimensional vector space and therefore it is closed. Thus, F(x) = F(x) which is a subalgebra. Applying Lem. 27 to F(x) and noting that, necessarily J = ∅ as 1 ∈ F(x) by assumption, gives the result of Thm. 22." 
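A brief aside before the practical reductions: the proofs of Thm. 22 and Lem. 28 rest on the structure of subalgebras of R^p (Lem. 27), whose key ingredient is the injective interpolation polynomial of Lem. 26. The sketch below is our own illustration (not part of the paper's development); it evaluates the polynomial P of (21)/(22) on a random finite set C ⊂ R^3 and checks injectivity numerically.

import numpy as np

def injective_polynomial(points):
    # Lagrange-style construction from Lem. 26, eq. (21)/(22):
    # P(X) = sum_i i * prod_{j != i} ||X - x^j||^2 / ||x^i - x^j||^2,
    # a polynomial in the coordinates of X with P(x^i) = i.
    pts = [np.asarray(x, dtype=float) for x in points]
    def P(X):
        X = np.asarray(X, dtype=float)
        total = 0.0
        for i, xi in enumerate(pts):
            term = float(i + 1)  # i + 1 matches the paper's 1-based index
            for j, xj in enumerate(pts):
                if j != i:
                    term *= np.sum((X - xj) ** 2) / np.sum((xi - xj) ** 2)
            total += term
        return total
    return P

rng = np.random.default_rng(0)
C = [rng.normal(size=3) for _ in range(5)]      # a finite subset of R^3
P = injective_polynomial(C)
values = [P(x) for x in C]                      # equals [1, 2, 3, 4, 5]
assert len(set(np.round(values, 6))) == len(C)  # P is injective on C

By construction P(x^i) = i on the i-th point of C: every term with index j ≠ i contains the vanishing factor ‖x^i − x^i‖², while in the i-th term each ratio collapses to 1. The values are therefore pairwise distinct, which is exactly the injectivity used in the proof of Lem. 27.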
}, { "heading": "D.8 PRACTICAL REDUCTIONS", "text": "Though the results we proved above were formulated using classic hypotheses, such as requiring F to be a subalgebra, we can give much more compact versions for our setting. We also reduce the assumption that ρ (Fscal) ⊆ ρ (π ◦ F) to a more practical one. We start with the invariant case. Corollary 29. Let X be a compact space, Y = F = Rp be some finite-dimensional vector space, G be a finite group acting (continuously) on X and F ⊆ CI(X,Y ) a (non-empty) set of invariant functions.\nAssume that, for any h ∈ C(F2,F) and f, g ∈ F , x 7→ h(f(x), g(x)) ∈ F ." }, { "heading": "Then the closure of F is,", "text": "F = {f ∈ CI(X,Y ) : ρ (F) ⊆ ρ (f)} .\n1We slightly change the definition of I compared to the statement of the lemma to add J2, which does not change the result.\nProof. We wish to apply Thm. 22 but for this we need G to act on Y . Define a (trivial) action of G on Y by, ∀g ∈ G, ∀y ∈ Y, g · y = y . With this action on Y , CE(X,Y ) = CI(X,Y ). Moreover, Y/G = Y , π : Y → Y/G is the identity so that ρ (π ◦ F) = ρ (F). Our assumption clearly ensure that F is indeed a subalgebra and contains the constant function 1. All that is left to show to apply Thm. 22 is that the set of functions Fscal ⊆ C(X,R) defined by,\nFscal = {f ∈ C(X,R) : f1 ∈ F}\nsatisfies, ρ (Fscal) ⊆ ρ (F) .\nTake (x, y) /∈ ρ (F) and we show that (x, y) /∈ ρ (Fscal). Indeed, by definition there exists f ∈ F , i ∈ {1, . . . , p} such that f(x)i 6= f(y)i. Let l ∈ C(F,R) defined by l(z) = zi and h ∈ C(F,F) defined by, for z ∈ F, h(z) = (l(z), . . . , l(z)) = l(z)1. Then, by assumption h ◦ f ∈ F . But h ◦ f is (l ◦ f)1 with l ◦ f ∈ C(X,R), so that, by definition, l ◦ f ∈ Fscal. Moreover, l ◦ f(x) 6= l ◦ f(y). Therefore, (x, y) /∈ ρ (Fscal). Therefore, we can apply Thm. 22. We get that,\nF = {f ∈ CE(X,Y ) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x)} ,\nand F(x) = {y ∈ Fp : ∀(i, j) ∈ I(x), yi = yj} ,\nwith I(x) given by,\nI(x) = {(i, j) ∈ {1, . . . , p}2 : ∀y ∈ F(x), yi = yj} .\nTo conclude the proof, we now show that I(x) = {(i, i) : i ∈ {1, . . . , p}}, which will imply that F(x) = Rp = Y . Indeed, the constant function z 7→ (1, 2, . . . , p) is in F by assumption. Therefore, I(x) is reduced to {(i, i) : i ∈ {1, . . . , p}}.\nIn our previous version of our approximation result for node embedding, we did not allow features in the output as it would have made the statement and the proof a bit convoluted. With this new assumption, this is much easier.\nCorollary 30. Let X be a compact space, Y = Fn, with F = Rp and G = Sn the permutation group, acting (continuously) on X and acting on Fn by, for σ ∈ Sn, x ∈ Fn,\n∀i ∈ {1, . . . , p}, (σ · x)i = xσ−1(i) ,\nLet F ⊆ CE(X,Fn) be a (non-empty) set of equivariant functions." }, { "heading": "Consider the following assumptions,", "text": "1. For any h ∈ C(F2,F), f, g ∈ F ,\nx 7→ (h(f(x)1, g(x)1), . . . , h(f(x)n, g(x)n)) ∈ F .\n2. If f ∈ F ,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) ∈ F .\nThen the closure of F (for the topology of uniform convergence) is,\nF = {f ∈ CE(X,Fn) : ρ (F) ⊆ ρ (f)} .\nFor this we need a handy lemma, whose proof relies on a result about multi-symmetric polynomials from Maron et al. (2019a).\nLemma 31. Let F = Rp be some finite-dimensional space. Take x1, . . . , xn, y1, . . . , yn ∈ F such that, for any σ ∈ Sn, (x1, . . . , xn) 6= (yσ(1), . . . , yσ(n)). Then, there exists h ∈ C(F,R) such that,\nn∑ i=1 h(xi) 6= n∑ i=1 h(yi) .\nMoreover, h can be written as h(x) = h1(x1)h2(x2) . . . 
hp(xp) for any x ∈ F with h1, . . . , hp ∈ C(R,R).\nProof. By Maron et al. (2019a, Prop. 1), there exists α1, . . . , αp non-negative integers such that∑n i=1 x α1 i1 x α2 i2 . . . x αp ip 6= ∑n i=1 y α1 i1 y α2 i2 . . . y αp ip . Taking h : x 7→ x α1 i1 x α2 i2 . . . x αp ip yields the result.\nProof of Cor. 30. We want to apply Thm. 22. With our first assumption, the first assumption of Thm. 22 is easily verified.\nWe now focus on the second one. As in the statement of Thm. 22, define Fscal ⊆ C(X,R) by, Fscal = {f ∈ C(X,R) : f1 ∈ F} .\nWe have to show that ρ (Fscal) ⊆ ρ (π ◦ F). For this, take x, y /∈ ρ (π ◦ F). There exists f ∈ F such that for any σ ∈ Sn, σ · f(x) 6= f(y). We have to find l ∈ Fscal such that l(x) 6= l(y). In other words, from a function in Fn which discriminates between x and y we have to build a function in R.\nFirst, we exhibit a function which discriminates between x and y. Apply Lem. 31 to the vectors f(x) and f(y): there exists h0 ∈ C(F,R) such that ∑n i=1 h0(f(xi)) 6= ∑n i=1 h0(f(yi)).\nTo fit the assumptions, we build h ∈ C(F,F) from h0 by h : x ∈ F 7→ (h0(x), . . . , h0(x)) ∈ F. Take g ∈ C(F,F) such that g(z) = (z1, . . . , z1) for any z ∈ F and l ∈ C(X,R) defined by, for w ∈ X , l(w) = ∑n i=1 h0(f(wi)). Then, l(x) 6= l(y). All we have to do is show that l ∈ Fscal, i.e., l1 ∈ F . This is where the two assumptions we made come into play. Indeed, the first one implies that h ◦ f ∈ F and the second gives\nz 7→ ( n∑ i=1 h(f(zi)), . . . , n∑ i=1 h(f(zi)) ) ∈ F .\nFinally, the first assumption ensure that,\nz 7→ ( g ( n∑ i=1 h(f(zi)) ) , . . . , g ( n∑ i=1 h(f(zi)) )) ∈ F .\nBut this last function is none other than l1, which shows that l ∈ Fscal as required. We have successfully verified the hypothesis of Thm. 22. Therefore, the closure of F is,\nF = {f ∈ CE(X,Fn) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x)} , with F(x) = {y ∈ Fn : ∀(i, i′, j, j′) ∈ I(x), yi,i′ = yj,j′} , and\nI(x) = {(i, i′, j, j′) ∈ ({1, . . . , n} × {1, . . . , p})2 : ∀y ∈ F(x), yi,i′ = yj,j′} .\nTo get the desired result, we need to get rid of the condition “f(x) ∈ F(x)” in the description of F . Fix x ∈ X . First, we show that F(x) = {y ∈ Fn : ∀(i, j) ∈ J(x), yi = yj} , with J(x) = {(i, j) ∈ {1, . . . , n}2 : yi = yj} , (note that the equalities here are not in R anymore but in F). The direct inclusion “⊆” is immediate by construction of J(x) so we focus on the reverse direction. For this, we show that the 4-tuples (i, i′, j, j′) of I(x) necessarily satisfy i′ = j′ and (i, j) ∈ J(x).\nFirst note that, by the first assumption, the vector y0 ∈ Fn such that y0i = (1, 2, . . . , p) for i ∈ {1, . . . , n} is in F(x). Indeed, take the constant function always equal to (1, 2, . . . , p) as h. Now, consider a 4-tuple (i, i′, j, j′) of I(x) and we show that, actually, (i, j) ∈ J(x) and i′ = j′. As y0 in F(x), and y0i,i′ = i′, y0j,j′ = j′, (i, i′, j, j′) ∈ I(x) implies that j′ = i′. Consider, k ∈ {1, . . . , p}. We show that, for any y ∈ F(x), yi,k = yj,k. But such a y can be written as y = f(x) for some f ∈ F . Consider, the function h ∈ C(F,F) associated to the permutation (i′ k), defined by,\nz 7→ (z(i′ k)(1), . . . , z(i′ k)(n)) = (z1, . . . , zi′−1, zk, zi′+1, . . . , zk−1, zi′ , zk+1, . . . , zp) . By our first assumption, z 7→ (h(f(z)1), . . . , h(f(z)n)) ∈ F so that (h(y1), . . . , h(yn)) ∈ F(x). In particular, as (i, i′, j, i′) ∈ I(x), h(yi)i′ = h(yj)i′ , i.e. yi,k = yj,k. Therefore, (i, j) ∈ J(x). Finally, we can conclude that F(x) ⊃ {y ∈ Fn : ∀(i, j) ∈ J(x), yi = yj}. 
Indeed, take y ∈ Fn : ∀(i, j) ∈ J(x), yi = yj . We show that all the constraints of I(x) are satisfied. Indeed, take (i, i′, j, j′) ∈ I(x). We have shown that (i, j) ∈ J(x) and i′ = j′ so that yi = yj and in particular yi,i′ = yj,j′ . Therefore, this finishes the proof of F(x) ⊃ {y ∈ Fn : ∀(i, j) ∈ J(x), yi = yj}. Thus, F(x) = {y ∈ Fn : ∀(i, j) ∈ J(x), yi = yj} , with\nJ(x) = {(i, j) ∈ {1, . . . , n}2 : yi = yj} .\nWe have proven so far, that,\nF = {f ∈ CE(X,Fn) : ρ (F) ⊆ ρ (f) , ∀x ∈ X, f(x) ∈ F(x)} ⊆ {f ∈ CE(X,Fn) : ρ (F) ⊆ ρ (f)} .\nTake h ∈ CE(X,Fn) such that ρ (F) ⊆ ρ (h) and fix x ∈ X . Our goal is to show that, for any (i, j) ∈ J(x), h(x)i = h(x)j so that h(x) ∈ F(x). But, if (i, j) ∈ J(x), then for any f ∈ F , f(x)i = f(x)j so that (i j) · f(x) = f(x), where (i j) denotes the permutation which exchanges i and j. Moreover, as (i j) ∈ Sn, by equivariance, this means that f((i j) · x) = f(x) for every f ∈ F and therefore that ((i j) · x, x) ∈ ρ (F). By assumption, we infer that ((i j) · x, x) ∈ ρ (h) too, i.e. that h((i j) · x) = h(x) and so that h(x)i = h(x)j by equivariance, which concludes our proof." }, { "heading": "D.9 REDUCTIONS FOR GNNS", "text": "We now present a lemma which explains how to instantiate the two corollaries above in the case of GNNs by replacing continuous functions with MLPs. Lemma 32. Fix X some compact space, n ≥ 1 and F a finite-dimensional feature space. Let F0 ⊆ ⋃∞ h=1 C(X,Rh) be stable by concatenation and consider,\nF = {x 7→ (m(f(x)1), . . . ,m(f(x)n)) : f ∈ F0∩C(X,Rh), m : Rh → F MLP, h ≥ 1} ⊆ C(X,F) . Then, if E(F) ⊆ C(X,F) is the set of functions obtained by replacing the MLP m in the definition of F by an arbitrary continuous function, E(F) satisfies,\n1. F = E(F)\n2. ρ (F) = ρ (E(F))\n3. For any h ∈ C(F2,F), f, g ∈ E(F), x 7→ (h(f(x)1, g(x)1), . . . , h(f(x)n, g(x)n)) ∈ E(F) .\n4. If, for any f ∈ F ,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) ∈ F ,\nthen, for any f ∈ E(F),\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) ∈ E(F) .\n5. If F is equivariant w.r.t. the action described in Cor. 30, so is E(F).\nProof. Define,\nE(F) = {x 7→ (m(f(x)1), . . . ,m(f(x)n)) : f ∈ F0 ∩ C(X,Rh), m ∈ C(Rh,F), h ≥ 1} .\nAs MLP are continuous and by the universality of MLP on a compact set (see Section C.1),\nF ⊆ E(F) ⊆ F . This already implies 1. and that ρ ( F ) ⊆ ρ (E(F)) ⊆ ρ (F). Using Lem. 15 yields 2.\nWe now show 3. Take h ∈ C(F2,F), f, g ∈ F0, f ∈ C(X,Rhf ), g ∈ C(X,Rhg ) and m ∈ C(Rhf ,F), l ∈ C(Rhg ,F). All we have to show is that,\nx 7→ (h(m(f(x)1), l(g(x)1)), . . . , h(m(f(x)n), l(g(x)n))) ∈ E(F) .\nBut as F0 is stable by concatenation, x 7→ (f(x), g(x)) ∈ Rhf+hg is still in F0. Moreover, y ∈ Rhf+hg 7→ h(m(y1, . . . , yhf ), l(yhf+1, . . . , yhf+hg ) is also in C(Rhf+hg ,F) which shows that the map above is indeed in E(F). The last two points are immediate consequences of the definition of E(F).\nThm. 5 and Thm. 6 are now obtained by combining Lem. 32 with Cor. 29 and Cor. 30." }, { "heading": "E PROOFS FOR EXPRESSIVENESS OF GNNS", "text": "Note that in the Theorem 6, the additional stability assumption is “almost” necessary to obtain the result. Indeed, if the result holds, i.e.,\nF = {f ∈ CE(X,Fn) : ρ (F0) ⊆ ρ (f)} ,\nthen one can show, that, if f ∈ F , then\nf̃ : x 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) ∈ F .\nIndeed, f ∈ F so that it has a weaker discriminating power than F , so that ρ (F) = ρ (F0) ⊆ ρ (f). But, by construction of f̃ , ρ (f) ⊆ ρ ( f̃ ) so that ρ (F0) ⊆ ρ ( f̃ ) . 
As f̃ is also in CE(X,Fn), f̃ ∈ F ." }, { "heading": "E.1 EXPRESSIVITY OF GNN LAYERS", "text": "Lemma 33. Fix F0,F1 (non-trivial) finite dimensionnal vector spaces. Consider the action of G = Sn on F0 × Fn×k0 defined by,\n∀σ ∈ Sn, ∀x0 ∈ F0, ∀x ∈ Fn×k, σ · (x0, x) = (x0, xσ−1(1), . . . , xσ−1(n))\nLet K ⊆ F0 × Fn×k0 be a compact set. Then, the set of functions from K ⊆ F0 × F n×k 0 to F1 of the form,\n(x0, x) 7−→ f0 x0, n∑ j=1 k∏ w=1 fw(xj,w) where f0 : F0 × Rh → F1, fj : F0 → Rh, j = 1, . . . , k are multi-linear perceptrons and h ≥ 1, is dense in CI(K,F1).\nProof. Denote by F ⊆ CI(K,F1) the set of such functions. To prove that F = CI(K,F1), we first apply Thm. 5.\nWe get, that, F = { f ∈ CI(F0 × Fn×k0 ,F1) : ρ (F) ⊆ ρ (f) } .\nWe now characterize ρ (F). Actually, it is equal to ρ (inv) = {((x0, x), (y0, y)) ∈ ( F0 × Fn×k0 )2 : ∃σ ∈ Sn, (x0, x) = σ · (y0, y)}. As the functions of F are invariant, ρ (inv) ⊆ ρ (F). We now show the reverse. Take (x0, x), (y0, y) ∈ F0 × Fn×k0 such that there does not exist σ ∈ Sn such that (x0, x) = σ · (y0, y). If x0 6= y0, there exists f0 : F0 → F1 MLP such that f(x0) 6= f(y0) so that ((x0, x), (y0, y)) /∈ ρ (F). Otherwise, there does not exists σ ∈ Sn such that (xσ−1(1), . . . , xσ−1(n)) = (y1, . . . , yn) by definition of the action of G = Sn. Now, apply Lem. 31 with F ← Fk0 , the universality of MLP and use the decomposition given to get fj : F0 → Rh, j = 1, . . . , k such that∑n j=1 ∏k w=1 fw(xj,w) 6= ∑n j=1 ∏k w=1 fw(yj,w). Choosing an appropriate MLP f0 : Rh → F1 yields that ((x0, x), (y0, y)) /∈ ρ (F). Hence, we have shown that,\nF = { f ∈ CI(F0 × Fn×k0 ,F1) : ρ (inv) ⊆ ρ (f) } = CI(F0 × Fn×k0 ,F1) ." }, { "heading": "E.2 APPROXIMATION THEOREMS FOR GNNS", "text": "We now have all the tools to finally prove our main result.\nTheorem 34. Let Kdiscr ⊆ Gn × Fn0 , K ⊆ Fn 2 0 be compact sets. For the invariant case, we have:\nMGNNI = {f ∈ CI(Kdiscr,F) : ρ (2-WLI) ⊆ ρ (f)} 2-LGNNI = {f ∈ CI(K,F) : ρ (2-WLI) ⊆ ρ (f)} k-LGNNI = {f ∈ CI(K,F) : ρ (k-LGNNI) ⊆ ρ (f)} ⊃ {f ∈ CI(K,F) : ρ (k-WLI) ⊆ ρ (f)} k-FGNNI = {f ∈ CI(K,F) : ρ (k-FWLI) ⊆ ρ (f)}\nFor the equivariant case, we have:\nMGNNE = {f ∈ CE(Kdiscr,Fn) : ρ (2-WLE) ⊆ ρ (f)} k-LGNNE = {f ∈ CE(K,Fn) : ρ (k-LGNNE) ⊆ ρ (f)} ⊃ {f ∈ CE(K,Fn) : ρ (k-WLE) ⊆ ρ (f)} k-FGNNE = {f ∈ CE(K,Fn) : ρ (k-FWLE) ⊆ ρ (f)}\nWe decompose the proof with an additional lemma.\nLemma 35. Let Kdiscr ⊆ Gn × Fn0 , K ⊆ Fn 2\n0 be compact sets. For the invariant case, and any k ≥ 2, we have,\nMGNNI = {f ∈ CI(Kdiscr,F) : ρ (MGNNI) ⊆ ρ (f)} k-LGNNI = {f ∈ CI(K,F) : ρ (k-LGNNI) ⊆ ρ (f)} k-FGNNI = {f ∈ CI(K,F) : ρ (k-FGNNI) ⊆ ρ (f)}\nFor the equivariant case, and any k ≥ 2, we have:\nMGNNE = {f ∈ CE(Kdiscr,F) : ρ (MGNNE) ⊆ ρ (f)} k-LGNNE = {f ∈ CE(K,F) : ρ (k-LGNNE) ⊆ ρ (f)} k-FGNNE = {f ∈ CE(K,F) : ρ (k-FGNNE) ⊆ ρ (f)}\nProof of Thm. 34. The theorem is now a direct consequence of Prop. 13 and Lem. 35.\nWe now move to the proof of Lem. 35.\nProof of Lem. 35. First, focus on the invariant case. Let F denote MGNNI , k-LGNNI or k-FGNNI and X be either Kdiscr or K so that X is compact and F ⊆ CI(X,F). Applying Thm. 5 directly gives,\nF = {f ∈ CI(X,F) : ρ (F) ⊆ ρ (f)} ,\nwhich is the desired result.\nWe now move to the equivariant case. First, let us replace MGNNE by another class, which is slightly simpler to analyze. Define,\nMGNNE′ = {mE ◦ ((1− λ) Id +λS1) ◦ FT ◦ . . . F2 ◦ F1 : Ft : Fnt → Fnt+1 message passing layer, t = 1, . . . 
, T, T ≥ 1, λ ∈ {0, 1}} .\nIt holds that ρ (MGNNE′) = ρ (MGNNE) and,\nMGNNE′ ⊆ MGNNE ⊆ {f ∈ CE(Kdiscr,Fn) : ρ (MGNNE) ⊆ ρ (f)} = {f ∈ CE(Kdiscr,Fn) : ρ (MGNNE′) ⊆ ρ (f)}\nTherefore, if we show that,\nMGNNE′ = {f ∈ CE(Kdiscr,Fn) : ρ (MGNNE′) ⊆ ρ (f)} ,\nwe will have the desired result.\nLet F denote MGNNE′ , k-LGNNE or k-FGNNE and X be either Kdiscr or K so that X is compact and F ⊆ CE(X,Fn). We now wish to apply Thm. 6, but we need to verify the stability assumption first. Thus, we show that, for any f ∈ F ,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) ∈ F ,\n• If F = MGNNE′ .Take f ∈ MGNNE′ . f is of the form,\nmE ◦ ((1− λ) Id +λS1) ◦ FT ◦ . . . F2 ◦ F1 ,\nwhere Ft : Fnt → Fnt+1 are message passing layers, FT+1 = F and λ ∈ {0, 1}. We need to show that there is a MGNNE′ which implements,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) . (26)\nIf λ = 1, f is exactly the function of (26). Otherwise, if λ = 0, we build another MGNNE′ implementing this function. Denote by FT+1 : Fn → Fn the (simple) message passing layer defined by FT+1(h)i = mE(hi) for i ∈ [n] and any h ∈ Fn. Then, the MGNNE′ (1− λ′) Id +λ′S1) ◦ FT+1 ◦ FT ◦ . . . F2 ◦ F1 with λ′ = 1 exactly implements (26).\n• If F = k-LGNNE . A function f of this class is of the form,\nmE ◦ Sk1 ◦ FT ◦ . . . F2 ◦ F1 ◦ Ik,\nwhere Ft : Fn k t → Fn k\nt+1 are linear graph layers and FT+1 = F. Our goal is to show that there is a GNN of k-LGNNE which implements,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) , (27)\nBy definition of linear graph layers, the map FT+1 : Fn k → Fnk defined by, for G ∈ Fnk ,\n∀(i1, . . . , ik) ∈ [n]k, FT+1(G)i1,...,ik = mE(Sk1 (G)i) ,\nis a linear graph layer as defined in Section 3. Now, consider, the linear graph layer, FT+2 : Fn k → Fnk defined by, for G ∈ Fnk ,\n∀(i1, . . . , ik) ∈ [n]k, FT+1(G)i1,...,ik = n∑ i=1 Gi,i2,...,ik .\nThen, the k-LGNN FT+2 ◦ FT+1 ◦ FT ◦ . . . F2 ◦ F1 ◦ Ik exactly implements (27).\n• If F = k-FGNNE . A function f of this class is of the form,\nmE ◦ Sk1 ◦ FT ◦ . . . F2 ◦ F1 ◦ Ik,\nwhere Ft : Fn k t → Fn k\nt+1 are FGL (see Section 3) and FT+1 = F. We build a GNN of k-MGNNE which implements,\nx 7→ ( n∑ i=1 f(x)i, n∑ i=1 f(x)i . . . , n∑ i=1 f(x)i ) , (28)\nFor w ∈ [k], define the FGL Hw : Fn k → Fnk by,\n∀G ∈ Fn k , ∀(i1, . . . , ik) ∈ [n]k, Hw(G)i1,...,ik = n∑ j=1 Gi1,...,iw−1,j,iw+1,...,ik .\nThen H2 ◦ · · · ◦Hk : Fn k → Fnk computes the sum of the elements of the input tensor over the last k − 1 dimensions like to Sk1 : Fn k → Fn and H1 ◦H2 ◦ · · · ◦Hk : Fn k → Fnk computes the full sum of the elements of the input tensor like to Sk : Fnk → F. Finally, consider the FGL FT+1 : Fn k → Fnk associated to mE , i.e. such that, for any G ∈ Fn k ,\n∀(i1, . . . , ik) ∈ [n]k, FT+1(G)i1,...,ik = mE(Gi1,...,ik) .\nNow, the k-FGNN\nH1 ◦H2 ◦ · · · ◦Hk ◦ FT+1 ◦H2 ◦ · · · ◦Hk ◦ FT+1 ◦ FT ◦ . . . F2 ◦ F1 ◦ Ik\nexactly implements (28)." }, { "heading": "F EXTENSION TO GRAPHS OF VARYING SIZES", "text": "" }, { "heading": "F.1 EXTENSION TO DISCONNECTED INPUT SPACES", "text": "Here we show that our results can be extended to graphs of varying sizes similarly to Keriven & Peyré (2019). There are two ways to do it. The first would be to directly adapt all the proofs but it would make them more cumbersome. Instead, we extend them with a simple argument presented below. As a side benefit, this general lemma makes it possible to extend almost all the approximation results for graph neural networks from the literature.\nThe abstract setting is the following. 
Given a compact input space X , assume that there is some finite set A, and Xα, α ∈ A a family of pairwise disjoints compact sets such that X = ⊔ α∈AXα. Crucially, we will assume that the Xα are in distinct connected components. Intuitively, they do not \"touch\" each other. Similarly, assume that the output space Y can be written as Y = ⊔ α∈A Yα, with Yα, α ∈ A a family of (pairwise disjoints) real vector spaces. Before moving to the results, we need a last definition. Informally, we need that the functions f that we will consider do not change the number of nodes of their inputs. Formally, we say that f : X −→ Y is adapted if f(Xα) ⊆ Yα for each α ∈ A, and denote by Cad(X,Y ) the of continuous and adapted functions from X to Y .\nWe can now state our lemma to adapt our results to graphs with varying node sizes.\nLemma 36. Let X be a compact space, Y be a topological space and A a finite set. Assume that there exists Xα, α ∈ A a family of pairwise disjoints compact sets such that X = ⊔ α∈AXα and the Xα’s are in distinct connected components. Also assume that there is Yα, α ∈ A a family of (pairwise disjoints) real vector spaces such that Y = ⊔ α∈A Yα. Consider F ⊆ Cad(X,Y ) a (non-empty) set of adapted functions and, for each α ∈ A, define F|Xα = {f|Xα : f ∈ F} ⊆ C(Xα, Yα) the restriction of the functions of F to Xα. Assume that the following holds," }, { "heading": "1. Each F|Xα is a sub-algebra of C(Xα, Yα) which contains the constant function 1Yα .", "text": "2. There is a set of functions Fscal ⊆ C(X,R) such that Fscal.F ⊆ F and it discriminates between the Xα’s, i.e. ρ (Fscal) ⊆ ⊔ α∈AX 2 α." }, { "heading": "Then the closure of F is,", "text": "F = { f ∈ Cad(X,Y ) : ∀α ∈ A, f|Xα ∈ F|Xα } ,\nfor the distance on Cad(X,Y ) defined by, for f, g ∈ Cad(X,Y ),\nd(f, g) = max α∈A sup Xα ‖f − g‖Yα .\nProof. Define the set of functions S ⊆ C(X,R) by,\nS = {f ∈ C(X,R) : f.F ⊆ F} .\nBy definition, Fscal ⊆ S. Moreover, as each F|Xα is a subalgebra, S is also a subalgebra of C(X,R).\nAs X is compact, we can apply Cor. 17 to S to get that, in C(X,R),\nS = {f ∈ C(X,R) : ρ (S) ⊆ ρ (f)} .\nIn the first part of the proof, we check that, for each α, the function fα : X −→ [0, 1] defined by\nfα|Xα = 1\n∀β 6= α, fα|Xβ = 0 .\nis continuous and satisfy ρ (S) ⊆ ρ (fα). This will means that such functions belong to Fscal. As the Xβ’s are in different connected components, it is enough to check its continuity on each Xβ . Indeed, each fα|Xβ is constant so continuous. The second fact comes from the second assumption. Indeed, take (x, y) ∈ X such that (x, y) ∈ ρ (S). As Fscal ⊆ S, in particular (x, y) ∈ ρ (Fscal). But, by assumption, ρ (Fscal) ⊆ ⊔ β∈AX 2 β . Thus, necessarily x and y belong to the same Xβ so that fα(x) = fα(y).\nTherefore, we conclude that fα ∈ S. We now prove the announced equality. By the definition of the distance, the first inclusion, F ⊆{ f ∈ Cad(X,Y ) : f|Xα ∈ Fα } is immediate. We focus on the other way. Take h ∈ Cad(X,Y ) such that h|Xα ∈ Fα for every α ∈ A and > 0. By definition of h, there exists, for each α ∈ A, gα ∈ F such that,\nsup Xα ‖h− gα‖Yα ≤ ,\nAs the Xα’s are compact and the gα’s are continuous, maxα∈A supXα ‖gα‖ < +∞ and denote by M > 0 a bound on this quantity. 
We have shown above that each fα is in S so there exists, for each α ∈ A, lα ∈ S such that,\nsup X |fα − lα| ≤ M .\nBy definition of S and, as F is a subalgebra, ∑ α∈A lαgα ∈ F and, for each β ∈ A,\nsup Xβ ∥∥∥∥∥h−∑ α∈A lαgα ∈ F ∥∥∥∥∥ ≤ supXβ ‖h− gβ‖+ ‖gβ − lβgβ‖+ ∥∥∥∥∥∥ ∑ α6=β lαgα ∥∥∥∥∥∥ ≤ +M ×\nM + (|A| − 1)M × M = (|A|+ 1) ,\nwhich concludes the proof." }, { "heading": "F.2 APPROXIMATION THEOREM WITH VARYING GRAPH SIZE", "text": "We now state our theorem in the case of varying graph size like Keriven & Peyré (2019). With Lem. 36, it is indeed straightforward to extend any approximation result initially proven for a class of graphs of fixed size. However, as the complete proof would require new notations again, we only give a sketch of proof.\nFix N ≥ 1 and consider the space of graphs (described by tensors) of size less than N , F≤N 2 0 =⋃N n=1 Fn 2\n0 . Equip this space with the final topology or, equivalently, the graph edit distance. Then, the last thing to check to apply Lem. 36 is that the classes of GNN that we consider indeed discriminate between graphs of different sizes, which is immediate.\nLikewise, for message passing GNN, consider G≤N×F≤N0 = ⋃N n=1 Gn×Fn0 , with a similar topology or the graph edit distance.\nCorollary 37. Let Kdiscr ⊆ G≤N × F≤N0 , K ⊆ F ≤N2 0 be compact sets. For the invariant case, we have:\nMGNNI = {f ∈ CI(Kdiscr,F) : ρ (2-WLI) ⊆ ρ (f)} 2-LGNNI = {f ∈ CI(K,F) : ρ (2-WLI) ⊆ ρ (f)} k-LGNNI = {f ∈ CI(K,F) : ρ (k-LGNNI) ⊆ ρ (f)} ⊃ {f ∈ CI(K,F) : ρ (k-WLI) ⊆ ρ (f)} k-FGNNI = {f ∈ CI(K,F) : ρ (k-FWLI) ⊆ ρ (f)}\nFor the equivariant case, we have: MGNNE = { f ∈ CE(Kdiscr,F≤N ) : ρ (2-WLE) ⊆ ρ (f) } k-LGNNE = { f ∈ CE(K,F≤N ) : ρ (k-LGNNE) ⊆ ρ (f) } ⊃ { f ∈ CE(K,F≤N ) : ρ (k-WLE) ⊆ ρ (f)\n} k-FGNNE = { f ∈ CE(K,F≤N ) : ρ (k-FWLE) ⊆ ρ (f)\n} Proof. This corollary is a direct consequence of Thm. 34 and Lem. 36, using Lem. 32 to satisfy the sub-algebra assumption of Lem. 36." } ]
2021
EQUIVARIANT GRAPH NEURAL NETWORKS
SP:8ec8daeab04e7a3d11e9098a910df23c2d6665d1
[ "This paper presents an interesting formulation for spreadsheet formula synthesis. Instead of taking the input output pairs as input, as is done in the programming by example (PBE) approaches, the proposed approach takes the semi-structured tabular context as input for predicting a formula for the target cell. A neural network architecture is presented which uses a BERT-based encoder to leverage the natural language meta-data." ]
Spreadsheet formula prediction has been an important program synthesis problem with many real-world applications. Previous works typically utilize input-output examples as the specification for spreadsheet formula synthesis, where each input-output pair simulates a separate row in the spreadsheet. However, such a formulation does not fully capture the rich context in real-world spreadsheets. First, spreadsheet data entries are organized as tables, thus rows and columns are not necessarily independent from each other. In addition, many spreadsheet tables include headers, which provide high-level descriptions of the cell data. However, previous synthesis approaches do not consider headers as part of the specification. In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data. In particular, we propose SPREADSHEETCODER, a BERT-based model architecture to represent the tabular context in both row-based and column-based formats. We train our model on a large dataset of spreadsheets, and demonstrate that SPREADSHEETCODER achieves top-1 prediction accuracy of 42.51%, which is a considerable improvement over baselines that do not employ rich tabular context.
[]
[ { "authors": [ "Awais Azam", "Khubaib Amjad Alam", "Areeba Umair" ], "title": "Spreadsheet smells: A systematic mapping study", "venue": "In 2019 International Conference on Frontiers of Information Technology (FIT),", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Matej Balog", "Alexander L Gaunt", "Marc Brockschmidt", "Sebastian Nowozin", "Daniel Tarlow" ], "title": "Deepcoder: Learning to write programs", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Rohan Bavishi", "Caroline Lemieux", "Roy Fox", "Koushik Sen", "Ion Stoica" ], "title": "Autopandas: neuralbacked generators for program synthesis", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Rudy Bunel", "Matthew Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli" ], "title": "Leveraging grammar and reinforcement learning for neural program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Execution-guided neural program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Shing-Chi Cheung", "Wanjun Chen", "Yepang Liu", "Chang Xu" ], "title": "Custodes: automatic spreadsheet cell clustering and smell detection using strong and weak features", "venue": "In Proceedings of the 38th International Conference on Software Engineering,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Jonathan Uesato", "Surya Bhupatiraju", "Rishabh Singh", "Abdel-rahman Mohamed", "Pushmeet Kohli" ], "title": "Robustfill: Neural program learning under noisy I/O", "venue": null, "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT", "year": 2019 }, { "authors": [ "Haoyu Dong", "Shijie Liu", "Zhouyu Fu", "Shi Han", "Dongmei Zhang" ], "title": "Semantic structure extraction for spreadsheet tables with a multi-task learning architecture", "venue": "In Workshop on Document Intelligence at NeurIPS 2019,", "year": 2019 }, { "authors": [ "Haoyu Dong", "Shijie Liu", "Shi Han", "Zhouyu Fu", "Dongmei Zhang" ], "title": "Tablesense: Spreadsheet table detection with convolutional neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Li Dong", "Mirella Lapata" ], "title": "Coarse-to-fine decoding for neural semantic parsing", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Wensheng Dou", "Shing-Chi Cheung", "Chushu Gao", "Chang Xu", "Liang Xu", "Jun Wei" ], "title": "Detecting table clones and smells in spreadsheets", "venue": "In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering,", "year": 2016 }, { "authors": [ "Sumit Gulwani" ], "title": "Automating string processing in spreadsheets using input-output examples", "venue": "In ACM SIGPLAN Notices,", "year": 2011 }, { "authors": [ "Sumit Gulwani", "Mark Marron" ], "title": "Nlyze: Interactive programming by natural language for spreadsheet data analysis and manipulation", "venue": "In Proceedings of the 2014 ACM SIGMOD international conference on Management of data,", "year": 2014 }, { "authors": [ "Sumit Gulwani", "William R Harris", "Rishabh Singh" ], "title": "Spreadsheet data manipulation using examples", "venue": "Communications of the ACM,", "year": 2012 }, { "authors": [ "Felienne Hermans", "Martin Pinzger", "Arie van Deursen" ], "title": "Detecting and visualizing inter-worksheet smells in spreadsheets", "venue": "In 2012 34th International Conference on Software Engineering (ICSE),", "year": 2012 }, { "authors": [ "Felienne Hermans", "Martin Pinzger", "Arie van Deursen" ], "title": "Measuring spreadsheet formula understandability", "venue": "arXiv preprint arXiv:1209.3517,", "year": 2012 }, { "authors": [ "Felienne Hermans", "Ben Sedee", "Martin Pinzger", "Arie van Deursen" ], "title": "Data clone detection and visualization in spreadsheets", "venue": "In 2013 35th International Conference on Software Engineering (ICSE),", "year": 2013 }, { "authors": [ "Jonathan Herzig", "Paweł Krzysztof Nowak", "Thomas Müller", "Francesco Piccinno", "Julian Martin Eisenschlos. 
Tapas" ], "title": "Weakly supervised table parsing via pre-training", "venue": "In Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Mohit Iyyer", "Wen-tau Yih", "Ming-Wei Chang" ], "title": "Search-based neural structured learning for sequential question answering", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Andrej Karpathy", "Justin Johnson", "Li Fei-Fei" ], "title": "Visualizing and understanding recurrent networks", "venue": "arXiv preprint arXiv:1506.02078,", "year": 2015 }, { "authors": [ "Jian Li", "Yue Wang", "Michael R Lyu", "Irwin King" ], "title": "Code completion with neural attention and pointer networks", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Chen Liang", "Mohammad Norouzi", "Jonathan Berant", "Quoc V Le", "Ni Lao" ], "title": "Memory augmented policy optimization for program synthesis and semantic parsing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xi Victoria Lin", "Chenglong Wang", "Luke Zettlemoyer", "Michael D Ernst" ], "title": "Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "year": 2018 }, { "authors": [ "Yunchao Liu", "Zheng Wu" ], "title": "Learning to describe scenes with programs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Vijayaraghavan Murali", "Letao Qi", "Swarat Chaudhuri", "Chris Jermaine" ], "title": "Neural sketch learning for conditional program generation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Maxwell Nye", "Luke Hewitt", "Joshua Tenenbaum", "Armando Solar-Lezama" ], "title": "Learning to infer program sketches", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Augustus Odena", "Kensen Shi", "David Bieber", "Rishabh Singh", "Charles Sutton" ], "title": "Bustle: Bottom-up program-synthesis through learning-guided exploration", "venue": "arXiv preprint arXiv:2007.14381,", "year": 2020 }, { "authors": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "title": "Neuro-symbolic program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Oleksandr Polozov", "Sumit Gulwani" ], "title": "Flashmeta: a framework for inductive program synthesis", "venue": "In Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming,", "year": 2015 }, { "authors": [ "Ana-Maria Popescu", "Oren Etzioni", "Henry Kautz" ], "title": "Towards a theory of natural language interfaces to databases", "venue": "In Proceedings of the 8th international conference on Intelligent user interfaces,", "year": 2003 }, { "authors": [ "Veselin Raychev", "Martin Vechev", "Eran Yahav" ], "title": "Code completion with statistical language models", "venue": "In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2014 }, { 
"authors": [ "Eui Chul Shin", "Illia Polosukhin", "Dawn Song" ], "title": "Improving neural program synthesis with inferred execution traces", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rishabh Singh", "Benjamin Livshits", "Benjamin Zorn" ], "title": "Melford: Using neural networks to find spreadsheet", "venue": null, "year": 2017 }, { "authors": [ "Armando Solar-Lezama" ], "title": "Program Synthesis by Sketching", "venue": "PhD thesis, UNIVERSITY OF CALIFORNIA, BERKELEY,", "year": 2008 }, { "authors": [ "Shao-Hua Sun", "Hyeonwoo Noh", "Sriram Somasundaram", "Joseph Lim" ], "title": "Neural program synthesis from diverse demonstration videos", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Martin Sundermeyer", "Ralf Schlüter", "Hermann Ney" ], "title": "Lstm neural networks for language modeling", "venue": "In Thirteenth annual conference of the international speech communication association,", "year": 2012 }, { "authors": [ "Alexey Svyatkovskiy", "Ying Zhao", "Shengyu Fu", "Neel Sundaresan" ], "title": "Pythia: AI-assisted code completion system", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Alexey Svyatkovskiy", "Shao Kun Deng", "Shengyu Fu", "Neel Sundaresan" ], "title": "Intellicode compose: Code generation using transformer", "venue": "arXiv preprint arXiv:2005.08025,", "year": 2020 }, { "authors": [ "Alexey Svyatkovskoy", "Sebastian Lee", "Anna Hadjitofi", "Maik Riechert", "Juliana Franco", "Miltiadis Allamanis" ], "title": "Fast and memory-efficient neural code completion", "venue": "arXiv preprint arXiv:2004.13651,", "year": 2020 }, { "authors": [ "Ashwin J Vijayakumar", "Abhishek Mohta", "Oleksandr Polozov", "Dhruv Batra", "Prateek Jain", "Sumit Gulwani" ], "title": "Neural-guided deductive search for real-time program synthesis from examples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Bailin Wang", "Richard Shin", "Xiaodong Liu", "Oleksandr Polozov", "Matthew Richardson" ], "title": "Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers", "venue": "In Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Jiajun Wu", "Joshua B Tenenbaum", "Pushmeet Kohli" ], "title": "Neural scene de-rendering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Pengcheng Yin", "Bowen Deng", "Edgar Chen", "Bogdan Vasilescu", "Graham Neubig" ], "title": "Learning to mine aligned code and natural language pairs from stack overflow", "venue": "IEEE/ACM 15th International Conference on Mining Software Repositories (MSR),", "year": 2018 }, { "authors": [ "Pengcheng Yin", "Graham Neubig", "Wen-tau Yih", "Sebastian Riedel" ], "title": "Tabert: Pretraining for joint understanding of textual and tabular data", "venue": "In Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Tao Yu", "Rui Zhang", "Kai Yang", "Michihiro Yasunaga", "Dongxu Wang", "Zifan Li", "James Ma", "Irene Li", "Qingning Yao", "Shanelle Roman" ], "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yakun Zhang", "Wensheng Dou", 
"Jiaxin Zhu", "Liang Xu", "Zhiyong Zhou", "Jun Wei", "Dan Ye", "Bo Yang" ], "title": "Learning to detect table clones in spreadsheets", "venue": "In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis,", "year": 2020 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "venue": "arXiv preprint arXiv:1709.00103,", "year": 2017 }, { "authors": [ "2013 Hermans et al", "2016 Dou et al", "Zhang" ], "title": "2020), and structure extraction for spreadsheet tables (Dong et al., 2019a;b). Our proposed encoder architecture could potentially be adapted for these spreadsheet tasks as well, and we leave it for future work. B MORE EXPERIMENTAL RESULTS For the setting where the model input", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Spreadsheets are ubiquitous for data storage, with hundreds of millions of users. Support for helping users write formulas in spreadsheets is a powerful feature for data analysis. Although spreadsheet formula languages are relatively simpler than general-purpose programming languages for data manipulation, writing spreadsheet formulas could still be tedious and error-prone for end users (Gulwani, 2011; Hermans et al., 2012b; Cheung et al., 2016). Systems such as FlashFill (Gulwani, 2011; Gulwani et al., 2012) help end-users perform string transformation tasks in spreadsheets using a few input-output examples by automatically synthesizing a program in a domain-specific language (DSL). Recently, several learning approaches based on different neural architectures have been developed for learning such programs from examples, and have demonstrated promising results (Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018).\nAll these previous works formalize the spreadsheet program prediction problem as a programming by example task, with the goal of synthesizing programs from a small number of input-output examples. We argue that this choice engenders three key limitations. First, this setup assumes that each data row is independent, and each formula is executed on data cells of the same row. However, real spreadsheets are less structured than this. Data in spreadsheets is typically organized as semi-structured tables, and cells in different rows could be correlated. As shown in Figure 1, in the same table, different data blocks could have different structures, without a common schema. Formulas can take cell values in other rows as function arguments. Second, because spreadsheets are semi-structured, they also contain rich metadata. In particular, many spreadsheet tables include headers that provide high-level descriptions of the data, which could provide important clues for formula prediction. However, table headers are not utilized in prior work. Finally, programming-by-example methods output programs in a DSL, which is typically designed to facilitate synthesis, and is much less flexible than the language in which users write formulas. For example, the FlashFill DSL only covers a subset of spreadsheet functions for string processing, and it does not support rectangular ranges, a common feature of spreadsheet formulas. In contrast, spreadsheet languages also support a wide variety of functions for numerical calculation, while the argument selection is more flexible and takes the spreadsheet\ntable structure into account. In total, these limitations can compromise the applicability of such prior efforts to more diverse real-world spreadsheets and to richer language functionality.\nInstead, we propose synthesizing spreadsheet formulas without an explicit specification. To predict a formula in a given cell, the context of data and metadata is used as an implicit (partial) specification of the desired program. For example (Figure 1b), if predicting a formula at the end of a column of numbers labeled “Score”, and a cell in the same row contains the text “Total”, this context might specify the user’s intent to compute a column sum. 
Our problem brings several new challenges compared to related work in programming by example (Gulwani, 2011; Bunel et al., 2018; Balog et al., 2017), semantic parsing (Popescu et al., 2003; Zhong et al., 2017; Yu et al., 2018) and source code completion (Raychev et al., 2014; Li et al., 2018; Svyatkovskiy et al., 2019). Spreadsheet tables contain rich two-dimensional relational structure and natural language metadata, but the rows do not follow a fixed schema as in a relational database. Meanwhile, our tabular context is more ambiguous as the program specification, and the spreadsheet language studied in this work is more flexible than in the program synthesis literature.\nIn this paper, we present SPREADSHEETCODER, a neural network architecture for spreadsheet formula prediction. SPREADSHEETCODER encodes the spreadsheet context in its table format, and generates the corresponding formula in the target cell. A BERT-based encoder (Devlin et al., 2019) computes an embedding vector for each input token, incorporating the contextual information from nearby rows and columns. The BERT encoder is initialized from the weights pre-trained on English text corpora, which is beneficial for encoding table headers. To handle cell references, we propose a two-stage decoding process inspired by sketch learning for program synthesis (Solar-Lezama, 2008; Murali et al., 2018; Dong & Lapata, 2018; Nye et al., 2019). Our decoder first generates a formula sketch, which does not include concrete cell references, and then predicts the corresponding cell ranges to generate the complete formula.\nFor evaluation (Section 4), we construct a large-scale benchmark of spreadsheets publicly shared within our organization. We show that SPREADSHEETCODER outperforms neural network approaches for programming by example (Devlin et al., 2017), and achieves 42.51% top-1 full-formula accuracy, and 57.41% top-1 formula-sketch accuracy, both of which are already high enough to be practically useful. Moreover, SPREADSHEETCODER can predict cell ranges and around a hundred different spreadsheet operators, which is much more flexible than DSLs used in prior works. With various ablation experiments, we demonstrate that both implicit specification from the context and text from the headers are crucial for obtaining good performance." }, { "heading": "2 PROBLEM SETUP", "text": "In this section, we discuss the setup of our spreadsheet formula prediction problem. We first describe the input specification, then introduce the language and representation for spreadsheet formulas.\nInput specification. We illustrate the input context in Figure 1. The input context consists of two parts: (a) context surrounding the target cell (e.g., all cell values in rows 2–7, and columns A–D, excluding cell D4 in Figure 1a), and (b) the header row (e.g., row 1).\nIn contrast to prior programming-by-example approaches (Gulwani, 2011; Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018), our input specification features (a) tabular input, rather than independent rows as input-output examples, and (b) header information. 
Tabular input is important for many cases where formulas are executed on various input cells from different rows and columns (Figure 1), and headers hold clues about the purpose of a column as well as its intended type, e.g., the header cell \"Score\" in Figure 1b is likely to indicate that the column data should be numbers.
Note that we do not include the intended output of the target cell in our input specification, for three reasons. First, unlike programming-by-example problems, we do not have multiple independent input-output examples available from which to induce a formula, so providing multiple input-output examples is not an option. Second, even for our single input instance, the evaluated formula value may not be known by the spreadsheet user yet. Finally, we tried including the intended formula execution result in our specification, but it did not improve the prediction accuracy beyond what the contextual information alone allowed.
The spreadsheet language. Our model predicts formulas written in the Google Sheets language (see the Google Sheets function list: https://support.google.com/docs/table/25273?hl=en). Compared to the domain-specific language defined in FlashFill, which focuses on string transformations, the spreadsheet language supports a richer set of operators. Besides string manipulation operators such as CONCATENATE, LOWER, etc., the spreadsheet language also includes operators for numerical calculations (e.g., SUM and AVERAGE), table lookups (e.g., VLOOKUP) and conditional statements (IF, IFS). As will be discussed in Section 4, around a hundred different base formula functions appear in our dataset, many more than the operators defined in the FlashFill DSL. We limit our problem to formulas with references to local cells in a spreadsheet tab, thus we exclude formulas with references to other tabs or spreadsheets, and absolute cell ranges.
Formula representation. One of the key challenges in formula representation is how to represent cell references, especially ranges, which are prevalent in spreadsheet formulas. Naively using the absolute cell positions, e.g., A5, may not be meaningful across different spreadsheets. Meanwhile, a single spreadsheet can have millions of cells, thus the set of possible ranges is very large.
To address this, we design a representation for formula sketches inspired by prior work on sketch learning for program synthesis (Solar-Lezama, 2008; Murali et al., 2018; Dong & Lapata, 2018; Nye et al., 2019). A formula sketch includes every token in the prefix representation of the parse tree of the spreadsheet formula, except for cell references. References, which can be either a single cell or a range of cells, are replaced with a special placeholder RANGE token. For example, the sketch of the formula in Figure 1a is IF <= RANGE 1 \"A\" IF <= RANGE 2 \"B\" IF <= RANGE 3 \"C\" IF <= RANGE 4 \"D\" \"E\" $ENDSKETCH$, where $ENDSKETCH$ denotes the end of the sketch. Notice that the sketch includes literals, such as the constants 1 and \"A\".
To complete the formula representation, we design an intermediate representation for ranges, relative to the target cell. For example, B5 in Figure 1c is represented as $R$ R[0] C[1] $ENDR$ since it is on the next column but the same row as the target cell A5, and range C2:C6 in Figure 1b is represented as $R$ R[-5] C[0] $SEP$ R[-1] C[0] $ENDR$. The special tokens $R$ and $ENDR$ start and conclude a concrete range, respectively, and $SEP$ separates the beginning and end (relative) references of a rectangular multi-cell range.
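This relative encoding can be reproduced with a short helper (a sketch under our own naming, not the paper's code; cells are 0-based (row, column) pairs):

```python
def encode_range(target, start, end=None):
    """Encode a single cell or rectangular range relative to the target
    cell, in the token format described above."""
    def rel(cell):
        return f"R[{cell[0] - target[0]}] C[{cell[1] - target[1]}]"
    inner = rel(start) if end is None else f"{rel(start)} $SEP$ {rel(end)}"
    return f"$R$ {inner} $ENDR$"

# Reproduces the paper's example: range C2:C6 with target cell C7,
# using 0-based (row, col) so that C2 = (1, 2), C6 = (5, 2), C7 = (6, 2).
assert encode_range((6, 2), (1, 2), (5, 2)) == \
    "$R$ R[-5] C[0] $SEP$ R[-1] C[0] $ENDR$"
```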
A complete spreadsheet formula includes both the sketch and any concrete ranges; e.g., the formula in Figure 1b is represented as SUM RANGE $ENDSKETCH$ $R$ R[-5] C[0] $SEP$ R[-1] C[0] $ENDR$ EOF, where EOF denotes the end of the formula. In Section 3.2, we will discuss our two-stage decoding process, which sequentially predicts the formula sketch and ranges." }, { "heading": "3 SPREADSHEETCODER MODEL ARCHITECTURE", "text": "In this section, we present our SPREADSHEETCODER model architecture for spreadsheet formula prediction. We provide an overview of our model design in Figure 2." }, { "heading": "3.1 TABULAR CONTEXT ENCODER", "text": "Input representation. Our model input includes the surrounding data values of the target cell as a table, and the first row is the header. When there is no header in the spreadsheet table, we set the header row to be an empty sequence. We include data values in cells that are at most D rows and D columns away from the target cell, so that the input dimension is (2D + 2) × (2D + 1), and we set D = 10 in our experiments.
Row-based BERT encoder. We first use a BERT encoder (Devlin et al., 2019) to compute a row-based contextual embedding for each token in the target cell’s context. Since our 2D + 1 + 1 rows contain many tokens and we use a standard BERT encoder of 512-token inputs, we tile our rows into bundles of three adjacent data rows, plus the header row, which is included in every bundle. Then we compute a token-wise BERT embedding for each bundle separately; the BERT weights are initialized from a pre-trained checkpoint for English. Specifically, in our experiments where D = 10, we concatenate all cell values for each row i in the context into a token sequence R_i, which has length L = 128 (we trim and pad as needed). We combine rows in bundles S^r_b = [H_r, R_{3b−1}, R_{3b}, R_{3b+1}], for b ∈ [−3, 3]; here H_r is the header row. We set the BERT segment IDs to 0 for the header tokens, and 1 for data tokens in each bundle. There are 2D + 1 = 21 rows of context, so each of the 21 data rows is covered exactly once by the seven bundles. The header row is assigned a different BERT representation in each bundle. To obtain a single representation of the header row, we average per token across the embeddings from all of the bundles.
Column-based BERT encoder. As shown in Figure 1b, some formulas manipulate cells in the same column, in which case a column-based representation may be more desirable. Therefore, we also compute a column-based contextual embedding for all context tokens. We perform similar tiling as for the row-based BERT encoding, yielding column bundles S^c_b for b ∈ [−3, 3]. Unlike with row-wise tiling, where we include the header row H_r with every bundle, for column-wise tiling we use the column of the target cell, H_c = C_0, as the “header column” in every bundle. After obtaining all token embeddings from this tiled computation by the BERT encoder, we discard token embeddings of C_0 in its role as header column, and only use its regular token embeddings from bundle S^c_0.
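For reference, the row-wise tiling can be sketched as follows (our own code, assuming each context row has already been tokenized; in the real model each bundle is additionally annotated with BERT segment IDs):

```python
from typing import List

def make_row_bundles(header: List[str], rows: List[List[str]]) -> List[List[str]]:
    """Tile the 21 tokenized context rows (indexed -10..10) into 7 bundles
    of 3 adjacent rows each, prepending the header row to every bundle,
    i.e., S^r_b = [H_r, R_{3b-1}, R_{3b}, R_{3b+1}] for b in [-3, 3]."""
    def row(i: int) -> List[str]:
        return rows[i + 10]            # map relative index to list index
    return [header + row(3 * b - 1) + row(3 * b) + row(3 * b + 1)
            for b in range(-3, 4)]
```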
Row-wise and column-wise convolution layers. Although the output vectors of BERT encoders already contain important contextual information, such as headers, nearby rows and columns, they still do not fully embed the entire input table as the context. To encode the context from more distant rows and columns, we add a row-wise convolution layer and a column-wise convolution layer on top of each BERT encoder. Specifically, the row-wise convolution layer has a kernel size of 1 × L, and the column-wise convolution layer has a kernel size of (2D + 2) × 1 for row-based BERT, and (2D + 1) × 1 for column-based BERT. In this way, the convolution layer aggregates across BERT embeddings from different bundles, allowing the model to take longer range dependencies into account. For each input token, let e_b be its BERT output vector, c_r be the output of the row-wise convolution layer, and c_c be the output of the column-wise convolution layer. The final embedding of each input token is the concatenation of the BERT output and the output of convolution layers, i.e., e = [c_r + c_c; e_b]." }, { "heading": "3.2 TWO-STAGE FORMULA DECODER", "text": "We train an LSTM (Hochreiter & Schmidhuber, 1997) decoder to generate the formula as a token sequence. Meanwhile, we use the standard attention mechanism (Bahdanau et al., 2015) to compute two attention vectors, one over the input header, and one over the cell data. We concatenate these two attention vectors with the LSTM output, and feed them to a fully-connected layer with the output dimension |V|, where |V| is the vocabulary size of formula tokens. Note that the token vocabularies are different for sketches (formula operators, literals, and special tokens) and ranges (relative row and column tokens and special range tokens). The output token prediction is computed with the softmax.
As mentioned in Section 2, we design a two-stage decoding process, where the decoder first generates the formula sketch, and then predicts the concrete ranges. In the first stage, the sketch is predicted as a sequence of tokens by the LSTM, and terminates when an $ENDSKETCH$ token is predicted. Then in the second stage, the range predictor sequentially generates formula ranges corresponding to each RANGE token in the sketch, and the prediction terminates when an EOF token is generated. Both sketch and range predictors share the same LSTM, but with different output layers." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate SPREADSHEETCODER on spreadsheet formula prediction tasks in different settings. We first describe our dataset, then introduce our experimental setup and discuss the results." }, { "heading": "4.1 DATASET", "text": "We constructed our dataset from a corpus of Google Sheets documents publicly shared within our organization. Specifically, we collected 46K Google Sheets with formulas, and split them into 42K for training, 2.3K for validation, and 1.7K for testing. We filtered out formulas with cell references farther than 10 rows or columns from the target cell in either direction. Finally, we have 770K training samples, 42K for validation, and 34K for testing. In total, around 100 operators are covered in our output vocabulary. See Appendix C for details about dataset construction.
By default, each sample includes both the header row and surrounding data values of relative cell positions within [−10, 10]. Note that we do not include the data of the target cell, and we leave an empty value there. In Section 4.3, we also discuss settings when the model input does not include headers, and when we only include a few data rows above the target cell as the input context." }, { "heading": "4.2 EVALUATION SETUP", "text": "Metrics.
We evaluate the following metrics: (1) Formula accuracy: the percentage of predicted formulas that are the same as the ground truth. (2) Sketch accuracy: the percentage of predictions with the same formula sketches as the ground truth. As discussed in Section 2, formula sketches do not include ranges, but include both functions and literals. (3) Range accuracy: the percentage of predictions with the same ranges as the ground truth. Note that the order of predicted ranges should also be the same as the ground truth. In addition, the model may predict the ranges correctly even if the sketch prediction is wrong, and we provide an example in Figure 4b.
Note that our formula accuracy metric could be an underestimate of the semantic equivalence, because different spreadsheet formulas may be semantically equivalent. For example, to predict arguments for SUM and MULTIPLY, different orders of the cell ranges have the same meaning. However, it is hard to systematically define the semantic equivalence in our evaluation, because we aim to support a wide range of operators in the spreadsheet language. Some existing works on program synthesis have evaluated the semantic equivalence based on the execution results (Devlin et al., 2017; Bunel et al., 2018; Sun et al., 2018). However, it is hard to sample different input spreadsheets requiring the same formula, thus evaluating the execution accuracy is challenging. Therefore, we still focus on our current metric to measure the formula accuracy, where we compare whether the predicted formula is exactly the same as the single ground truth formula included in the spreadsheet.
Model details. For models with the BERT encoder (Devlin et al., 2019), including our full SPREADSHEETCODER model, we use the BERT-Medium architecture, and initialize from the English pre-trained model by default (we downloaded the pre-trained BERT from https://github.com/google-research/bert). We compared our full model with several variants:
(1) Different encoder architectures: i) using a single BERT encoder, either row-based or column-based; ii) removing convolution layers, where the BERT output is directly fed into the decoder.
(2) Different decoding approaches: We compare our proposed two-stage decoding discussed in Section 3.2 to a simpler model that uses the same predictor for both the sketch and ranges, with a single joint output vocabulary for both.
(3) Different model initialization: When not using the pre-trained BERT model weights, we randomly initialize BERT encoders. This tests whether pre-training on generic natural language text is useful for our spreadsheet data.
We compare to previous approaches for related program synthesis tasks. First, we evaluate RobustFill, which demonstrates the state-of-the-art performance on string manipulation tasks for Excel spreadsheets (Devlin et al., 2017). Specifically, RobustFill is designed for FlashFill, where each formula is executed on a single data row, thus each row is independently fed into a shared encoder. We defer the details of the RobustFill model architecture to Appendix A. We trained two variants of RobustFill on our dataset: one encodes each row independently, and another encodes each column independently, denoted as row-based RobustFill and column-based RobustFill respectively. In addition, we compared to a baseline that does not utilize any input context, thus the model only includes the LSTM decoder, similar to prior work on language modeling (Sundermeyer et al., 2012; Karpathy et al., 2015)."
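Before turning to the results, the three exact-match metrics above can be made concrete with a short sketch (our own code, not the authors' evaluation script; it assumes formulas are token strings in the representation of Section 2):

```python
def split_formula(formula: str):
    """Split a formula token string into (sketch, ranges) at $ENDSKETCH$."""
    sketch, _, ranges = formula.partition("$ENDSKETCH$")
    return sketch.strip(), ranges.strip()

def exact_match_accuracies(predictions, ground_truths):
    """Formula, sketch, and range accuracies as exact string matches."""
    n = len(ground_truths)
    pairs = list(zip(predictions, ground_truths))
    formula_acc = sum(p == g for p, g in pairs) / n
    sketch_acc = sum(split_formula(p)[0] == split_formula(g)[0]
                     for p, g in pairs) / n
    range_acc = sum(split_formula(p)[1] == split_formula(g)[1]
                    for p, g in pairs) / n
    return formula_acc, sketch_acc, range_acc
```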
}, { "heading": "4.3 RESULTS", "text": "In this section, we present the results using different variants of spreadsheet contexts as the model inputs. For all settings, we perform a beam search during the inference time, with a beam size 64." }, { "heading": "4.3.1 RESULTS WITH THE FULL INPUT CONTEXT", "text": "Using both headers and the full surrounding data cell values as the model input, we present the formula accuracy in Table 1, where top-k accuracy measures how often the ground truth appears in the top k predictions using beam search. Compared to the model without the input context, all other models are able to use the contextual data to provide more accurate predictions. In particular, our full model achieves over 40% top-1 full formula prediction accuracy, which is 4 times as high as the model without context. We also observe that the full SpreadsheetCoder model has much better accuracy than either of the RobustFill models, demonstrating that our model is more capable of leveraging the implicit specification provided by the tabular context.\nDifferent encoder architectures. Appropriately encoding the input context is important. Comparing with RobustFill models, we observe that it is beneficial to model the dependency among different rows and columns, instead of encoding each row or column independently. Meanwhile, adding convolution layers brings additional performance gain, because it enables the representations of each input token to aggregate broader contextual information beyond a few nearby rows or columns, i.e., 3 for our BERT encoders as discussed in Section 3.1. Finally, although models representing the input context as column-based tables generally perform worse than those using row-based tables, including both row-based and column-based encoders improves the overall accuracies by 2–3 percentage points. Note that the improvement is not due to the larger model size: to test this, we trained row-based and column-based BERT models with the larger BERT-base and BERT-large architectures, but the results were no better, while taking longer to train. In addition, initializing from pre-trained BERT encoders increases the formula accuracy by around 10 percentage points, suggesting that although spreadsheet headers are generally short natural language phrases, pre-training on a large-scale text corpus with much more complex text still enables the model to better understand the spreadsheet context.\nBreakdown analysis of sketch and range prediction. We present the sketch and range accuracies in Table 2. On the one hand, sketch accuracies are generally much higher than range accuracies, since formulas are more likely to share common sketches with similar spreadsheet context, while range prediction requires a more careful investigation of the table structure. On the other hand, sketch prediction becomes more challenging when literals are included. In Figure 4a, we present a sample prediction with the correct sketch but the wrong range. Specifically, the model could easily infer that the formula should call a SUM function, since it is a common prediction given the input token \"Total\". However, the model wrongly selects all cells above as the function argument, and ignores the fact that the cell B5 is already the sum of cells B2–B4, indicated by the text \"Total Cost\" in cell A5. Figure 4b shows a prediction with the correct range but the wrong sketch, where the predicted formula misses a \"/\" as an argument to the string concatenation operator \"&\". 
Two-stage decoding disentangles the generation of sketches and ranges, so that the two predictors could focus on addressing different difficulties in formula prediction, and this mechanism provides additional accuracy improvement.
Table 1: Formula accuracy on the test set. “−” means the corresponding component is removed from our full model.
Approach | Top-1 | Top-5 | Top-10
Full Model | 42.51% | 54.41% | 58.57%
− Column-based BERT | 39.42% | 51.68% | 56.50%
− Row-based BERT | 20.37% | 40.87% | 48.37%
− Convolution layers | 38.43% | 51.31% | 55.87%
− Two-stage decoding | 41.12% | 53.57% | 57.95%
− Pretraining | 31.51% | 42.64% | 49.77%
Row-based RobustFill | 31.14% | 40.09% | 47.10%
Column-based RobustFill | 20.65% | 39.69% | 46.96%
No context | 10.56% | 23.27% | 31.96%
Figure 3: Top-1 formula accuracies for different formula sketch lengths.
Prediction on formulas with different sketch lengths. We present the top-1 formula accuracy on formulas with different sketch lengths in Figure 3. Note that we exclude the $ENDSKETCH$ token from the length calculation. First, all models achieve higher performance on formulas with sketch lengths of 2–3 than on longer formulas. It is harder to make exactly the same prediction as the ground truth when the formula becomes longer, especially given that the input context is often an ambiguous specification for formula prediction. Fortunately, users typically do not need to write complicated formulas for spreadsheet data manipulation. Specifically, 85% of our collected formulas have sketch lengths of 2–3. Despite the performance degradation, our full model consistently performs better than other models on formulas with different sketch lengths." }, { "heading": "4.3.2 THE EFFECT OF HEADER INFORMATION", "text": "In this section, we evaluate the effect of including the header row as the model input, which usually provides a short description of the table in natural language. For all models, we remove the headers from the context by replacing the header tokens with empty values. Thus the models can only use surrounding data cells as the spreadsheet context.
In Table 3, we observe a notable accuracy drop compared to Table 1, indicating that leveraging headers is critical. Figure 6a shows an example that can be correctly predicted by our full model, but is wrongly predicted by the model without input headers. We can observe that without the header \"Average\", it is much harder to figure out that the formula should call the AVERAGE function instead of a division. Interestingly, without input headers, using row-based or column-based table representation no longer makes much difference. However, our tabular input context encoders still perform better than RobustFill models, suggesting the importance of modeling the dependency among different rows and columns. In addition, initializing from pre-trained BERT model weights does not improve the results, and even slightly hurts the performance. The main reason is that the cell data values are mostly numeric and string literals. Breakdown results are deferred to Appendix B." }, { "heading": "4.3.3 RESULTS IN THE FLASHFILL-LIKE SETTING", "text": "Finally, we conduct experiments in the FlashFill-like setting, where formulas are always executed on cells in the same row. In total, 2.5K formulas in the test set only include cells with the relative row position R[0], which constitute around 7.3% of the test set. More details are in Appendix D.
Table 3: Formula accuracy on the test set, excluding headers in the context. Corresponding results with headers are in Table 1.
Approach | Top-1 | Top-5 | Top-10
Full Model | 20.47% | 40.23% | 47.40%
− Column-based BERT | 20.63% | 40.40% | 48.70%
− Row-based BERT | 20.38% | 40.11% | 47.88%
− Pretraining | 20.94% | 40.64% | 48.51%
Row-based RobustFill | 19.02% | 33.60% | 37.38%
Column-based RobustFill | 17.64% | 30.45% | 36.79%
No context | 10.56% | 23.27% | 31.96%
Figure 5: Top-1 formula accuracy in the FlashFill-like setting, with different numbers of input rows.
In Figure 5, we present the top-1 formula accuracies with different numbers of input data rows. We observe that even for spreadsheet formulas that only refer to cells in the same row, our models with tabular input encoders still perform better. In particular, as the number of input data rows increases, the accuracy of the RobustFill model does not show much improvement, while the accuracies of the other two models increase considerably, especially our full model. This demonstrates that our model could better utilize the available cell data context for prediction. Figure 6b shows a formula that can be correctly predicted by our model when the full input context is given, but is wrongly predicted when the input only contains the header row and one data row. This example shows that understanding the cell data is especially important when the header is not informative enough. Notice that including only a few input rows or columns does not fit our encoder design well, since our BERT encoders simultaneously embed 3 data rows at a time, while the RobustFill model independently encodes each row by design. This could be the main reason why models with BERT-based encoders may perform worse than RobustFill when fewer than 3 data rows are presented. In addition, including headers still consistently provides a significant performance gain." }, { "heading": "5 RELATED WORK", "text": "In this section, we present a high-level overview of the related work, and we defer a more in-depth discussion to Appendix A. Program synthesis has been a long-standing challenge, and various types of specifications have been discussed, including input-output examples (Gulwani et al., 2012; Balog et al., 2017; Bunel et al., 2018; Bavishi et al., 2019; Shin et al., 2018; Chen et al., 2019), natural language descriptions (Gulwani & Marron, 2014; Yu et al., 2018; Yin et al., 2018; Lin et al., 2018), and images (Wu et al., 2017; Liu & Wu, 2019; Sun et al., 2018). In particular, the FlashFill benchmark (Gulwani et al., 2012) is the most related to our task, and their goal is to generate string transformation programs to manipulate the Excel spreadsheet data, given input-output examples as the specification. Various neural network approaches have been proposed for FlashFill (Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018). On the other hand, Nlyze (Gulwani & Marron, 2014) translates natural language specifications to programs in an SQL-like DSL for spreadsheet data manipulation; and Autopandas (Bavishi et al., 2019) synthesizes dataframe transformation functions implemented with the Python Pandas library, given input-output dataframe examples. The spreadsheet formula prediction task in our work considers the semi-structured tabular spreadsheet context as the specification, rather than standardized input-output examples or natural language descriptions. Therefore, our formula specifications are more ambiguous and diverse.
Furthermore, we show that including the header information is a key factor in improving the formula prediction performance.
In terms of the model input format, our spreadsheet formula prediction task is related to existing benchmarks on semantic parsing over a tabular database (Iyyer et al., 2017; Zhong et al., 2017; Yu et al., 2018). Various approaches have been proposed for these tasks (Liang et al., 2018; Wang et al., 2020; Yin et al., 2020; Herzig et al., 2020). There are two key differences between these works and ours. First, their program specification contains a natural language question, while our work predicts spreadsheet formulas based on the tabular context only. Therefore, our input specification is much more ambiguous. Meanwhile, our spreadsheet tables are typically less structured than the database tables. As shown in Figure 1, spreadsheet tables do not necessarily satisfy a consistent row-based schema, and data cell values may be dependent on cells from other rows.
Our spreadsheet formula prediction problem is also related to code completion tasks (Raychev et al., 2014; Li et al., 2018; Svyatkovskiy et al., 2019; 2020; Svyatkovskoy et al., 2020). Specifically, the goal of code completion tasks is to synthesize the subsequent program tokens given the code context, while we aim to generate the formula in the cell with the missing value to complete the spreadsheet. However, instead of providing a token sequence to represent the code context, our data context is a semi-structured table, where data values in different cells are connected in a two-dimensional space." }, { "heading": "6 CONCLUSION", "text": "We presented the first technique to synthesize spreadsheet formulas given a tabular context, including both headers and cell values. In particular, we develop SPREADSHEETCODER, a BERT-based model to capture the two-dimensional relational structure of the spreadsheet context, which is typically a semi-structured table. We demonstrate that incorporating the table headers significantly facilitates the prediction. Furthermore, modeling the dependency among cells of different rows and columns is important for generating formulas in real-world spreadsheets with diverse table structures.
There are a number of promising directions for future research on spreadsheet applications. First, developing a paradigm for pre-training on spreadsheet data could enable the encoder to be more specialized for spreadsheet applications. Second, we could infer more fine-grained knowledge of the table structure from the spreadsheet format information, such as colors and fonts, which could be utilized to develop more advanced encoder architectures. Finally, we could also extend our approach to support more spreadsheet applications, such as bug detection and clone detection." }, { "heading": "A AN EXTENDED DISCUSSION OF RELATED WORK", "text": "Various neural network approaches have been proposed for the FlashFill benchmark (Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018). Specifically, both R3NN (Parisotto et al., 2017) and RobustFill (Devlin et al., 2017) are purely statistical models, and RobustFill performs better. In a RobustFill model, each formula is executed on a single data row, thus each row is independently fed into a shared encoder. Afterwards, at each decoding step, a shared LSTM decoder generates a hidden state per data row, and these states are then fed into a max pooling layer.
Finally, the pooled hidden state is fed into a fully-connected layer to predict the formula token. On the other hand, in (Vijayakumar et al., 2018), they design a neural network to guide the deductive search performed by PROSE (Polozov & Gulwani, 2015), a commercial framework for input-output program synthesis. Recent work proposes neural-guided bottom-up search for program synthesis from input-output examples, and extends the domain-specific language of FlashFill to support more spreadsheet programs (Odena et al., 2020).
Besides formula prediction, some previous work has studied other applications related to spreadsheets, including smell detection (Hermans et al., 2012a; Cheung et al., 2016; Singh et al., 2017; Azam et al., 2019), clone detection (Hermans et al., 2013; Dou et al., 2016; Zhang et al., 2020), and structure extraction for spreadsheet tables (Dong et al., 2019a;b). Our proposed encoder architecture could potentially be adapted for these spreadsheet tasks as well, and we leave it for future work." }, { "heading": "B MORE EXPERIMENTAL RESULTS", "text": "For the setting where the model input does not include headers, corresponding to Table 3 in Section 4.3.2, we present the sketch and range accuracies in Table 4, and the breakdown accuracies on formulas of different sketch lengths in Figure 7. We observe that the performance degradation is more severe for formulas of sketch lengths 2–3." }, { "heading": "C DATASET CONSTRUCTION", "text": "Although, in principle, our model could generate formulas using any operator in the spreadsheet language, some kinds of value references are impossible to predict from local context, thus we remove formulas with such values from our dataset. Specifically, we exclude formulas that use the HYPERLINK function with a literal URL, since those are merely \"stylistic\" formulas that perform no computation beyond presenting a URL as a clickable link. As discussed in Section 2, we also filtered out formulas with cross-references from other tabs or spreadsheets. In total, the formulas filtered out after these two steps constitute around 40% of all formulas. We further filtered out formulas with cell references farther than 10 rows or columns from the target cell in either direction, and formulas with absolute cell ranges. In this way, about 45% of the original set of formulas are kept in our dataset.
Meanwhile, we observe that some spreadsheets may have tens of thousands of rows including the same formula, and including all of them in the dataset could bias our data distribution. Therefore, when multiple rows in the same spreadsheet table include the same formula in the same column, we keep the first 10 occurrences of such a formula, and create one data sample per formula. In this way, we extract around 800K formulas from 20M formulas before this filtering step.
Regarding the length distribution of target spreadsheet formulas: about 32% of formulas have sketch lengths of 2, 53% have sketch lengths of 3, 11% have sketch lengths of 4–5, and 4% have sketch lengths of at least 6. As discussed in Section 2, even though the formula sketches are mostly short, it is still challenging to generate the full formulas correctly.
For example, the formula in Figure 1b is represented as SUM RANGE $ENDSKETCH$ $R$ R[-5] C[0] $SEP$ R[-1] C[0] $ENDR$ EOF, which has a sketch length of 2, but the full formula length is 10 if excluding the EOF token for length calculation.
Among all spreadsheet formulas included in our filtered dataset, we list the 30 most commonly used spreadsheet functions and operators with their types as follows (the function types are based on the Google Sheets function list: https://support.google.com/docs/table/25273?hl=en): SUM (Math), + (Operator, equivalent to ADD), - (Operator, equivalent to MINUS), * (Operator, equivalent to MULTIPLY), / (Operator, equivalent to DIV), & (Operator, equivalent to CONCAT), AVERAGE (Statistical), LEN (Text), UPLUS (Operator), STDEV (Statistical), COUNTA (Statistical), MAX (Statistical), LEFT (Text), IFERROR (Logical), ABS (Math), MEDIAN (Statistical), UMINUS (Operator), CONCATENATE (Text), ROUND (Math), WEEKNUM (Date), AVERAGEA (Statistical), MIN (Statistical), COUNT (Statistical), TRIM (Text), COS (Math), SIN (Math), SINH (Math), TODAY (Date), IF (Logical), MONTH (Date). We observe that most of these functions and operators are for mathematical calculation, statistical computation, and text manipulation. However, people also write conditional statements, and spreadsheet formulas for calculating dates." }, { "heading": "D MORE DISCUSSION OF THE FLASHFILL-LIKE SETTING", "text": "Following prior work on FlashFill (Devlin et al., 2017; Parisotto et al., 2017; Vijayakumar et al., 2018), we evaluate model performance when different numbers of data rows are presented to the model as input. Specifically, when the input includes 1–11 data rows, we grow the input from the target row upward. Our full data context includes 21 data rows, with 10 rows above the target cell, 10 rows below the target cell, and 1 row where the target cell is located. Consistent with prior work, when we vary the number of input data rows during inference, we always evaluate the same model trained with the full data context including 21 data rows. Since RobustFill independently encodes each row, it supports a variable number of input rows by design. For our models with the tabular input representation, we set the rows to be empty when they are out of the input scope, and apply a mask to indicate that the corresponding data values are invalid." }, { "heading": "E IMPLEMENTATION DETAILS", "text": "Data preprocessing. The content in each cell includes its data type and value, and we concatenate them as a token sequence. For example, A2 in Figure 1a is represented as num 0. As discussed in Section 3.1, we concatenate all cell values in the same row as a token sequence, where values of different cells are separated by the [SEP] token. Each data row fed into the model includes L = 128 tokens, and when the concatenated token sequence exceeds the length limit, we discard cells that are further away from the target cell. For column-wise representation, we produce token embeddings independently for each column-wise bundle S^c_b = [H_c, C_{3b−1}, C_{3b}, C_{3b+1}] for b ∈ [−3, 3], where C_i is a token sequence produced by concatenating all tokens of the cells in column C_i. We perform the header detection according to the spreadsheet table format, i.e., we recognize the first row of a table as the header when it is frozen. Though some spreadsheet tables may include header-like descriptions in the leftmost column, e.g., \"Total Score\" in Figure 1a, we only extract headers as a row, to ensure the precision of header detection.
Output vocabulary construction.
To construct the output formula token vocabulary, we filtered out tokens that appear fewer than 10 times in the training set, so that the vocabulary contains 462 tokens, out of 2625 tokens before filtering. In total, around a hundred operators are covered in our output vocabulary, including 82 spreadsheet-specific functions, and other general-purpose numerical operators (e.g., +, -).
Hyper-parameters. The formula decoder is a 1-layer LSTM with a hidden size of 512. We train the model with the Adam optimizer, with an initial learning rate of 5e-5. We train models for 200K minibatch updates, with a batch size of 64. We set the dropout rate to 0.1 for training. The norm for gradient clipping is 1.0." } ]
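For concreteness, the hyper-parameters above correspond to a training loop along these lines (our own sketch in PyTorch; the paper does not name its framework, and the stand-in model and loss are placeholders only):

```python
import torch

# Stand-in for the real BERT encoder + LSTM decoder; only here to make the
# optimization loop runnable.
model = torch.nn.LSTM(input_size=512, hidden_size=512, num_layers=1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

for step in range(200_000):                     # 200K minibatch updates
    x = torch.randn(16, 64, 512)                # dummy batch of size 64
    out, _ = model(x)
    loss = out.pow(2).mean()                    # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```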
2020
null
SP:6c1d4e09a17d1a6abe209ab96356b837dbfbd710
[ "This paper proposes to model 3D cloth by embedding it into kinematically deforming skinned mesh (KDSM)[1], a tetrahedral mesh that parametrizes the volumetric region around the underlying body. A KDSM can be created and deformed using a variety of skinning and simulation techniques introduced in [1]. This paper extends KDSM by enabling plastic deformation in material space (T-pose), and accurately models the cloth deformation as per-vertex offsets. Inspired by [2], this paper trains a neural network to learn the per-vertex offset as a function of body pose. Once trained, the network is able to infer the 3D cloth on a particular body. Experiments show that the proposed 3D cloth parameterization method is better than the 2D UV parameterization method used in [2]." ]
We present a novel learning framework for cloth deformation by embedding virtual cloth into a tetrahedral mesh that parametrizes the volumetric region of air surrounding the underlying body. In order to maintain this volumetric parameterization during character animation, the tetrahedral mesh is constrained to follow the body surface as it deforms. We embed the cloth mesh vertices into this parameterization of three-dimensional space in order to automatically capture much of the nonlinear deformation due to both joint rotations and collisions. We then train a convolutional neural network to recover ground truth deformation by learning cloth embedding offsets for each skeletal pose. Our experiments show significant improvement over learning cloth offsets from body surface parameterizations, both quantitatively and visually, with prior state of the art having a mean error five standard deviations higher than ours. Without retraining, our neural network generalizes to other body shapes and T-shirt sizes, giving the user some indication of how well clothing might fit. Our results demonstrate the efficacy of a general learning paradigm where high-frequency details can be embedded into low-frequency parameterizations.
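The core embedding idea (attaching each cloth vertex to the tetrahedron of the body-surrounding mesh that contains it, so the vertex follows the mesh as it deforms) can be illustrated with standard barycentric coordinates; this is our own sketch, not the paper's code:

```python
import numpy as np

def barycentric_weights(tet: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Barycentric coordinates of point p with respect to a tetrahedron
    whose four vertices are the rows of `tet` (a standard construction)."""
    T = (tet[1:] - tet[0]).T                # 3x3 matrix of edge vectors
    b = np.linalg.solve(T, p - tet[0])      # weights for vertices 1..3
    return np.concatenate(([1.0 - b.sum()], b))

def embedded_position(weights: np.ndarray, deformed_tet: np.ndarray) -> np.ndarray:
    """Recover the embedded vertex position from the deformed tetrahedron."""
    return weights @ deformed_tet

# Embed a point, translate the tetrahedron, and recover the moved point.
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
w = barycentric_weights(tet, np.array([0.25, 0.25, 0.25]))
print(embedded_position(w, tet + 1.0))      # -> [1.25 1.25 1.25]
```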
[]
[ { "authors": [ "Thiemo Alldieck", "Marcus Magnor", "Weipeng Xu", "Christian Theobalt", "Gerard Pons-Moll" ], "title": "Detailed human avatars from monocular video", "venue": "In 2018 International Conference on 3D Vision (3DV),", "year": 2018 }, { "authors": [ "Thiemo Alldieck", "Marcus Magnor", "Weipeng Xu", "Christian Theobalt", "Gerard Pons-Moll" ], "title": "Video based reconstruction of 3d people models", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Thiemo Alldieck", "Marcus Magnor", "Bharat Lal Bhatnagar", "Christian Theobalt", "Gerard Pons-Moll" ], "title": "Learning to reconstruct people in clothing from a single rgb camera", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Thiemo Alldieck", "Gerard Pons-Moll", "Christian Theobalt", "Marcus Magnor" ], "title": "Tex2shape: Detailed full human body geometry from a single image", "venue": "In Proceedings of the International Conference on Computer Vision (ICCV). IEEE,", "year": 2019 }, { "authors": [ "Dragomir Anguelov", "Praveen Srinivasan", "Daphne Koller", "Sebastian Thrun", "Jim Rodgers", "James Davis" ], "title": "Scape: shape completion and animation of people", "venue": "In ACM transactions on graphics (TOG),", "year": 2005 }, { "authors": [ "David Baraff", "Andrew Witkin" ], "title": "Large steps in cloth simulation", "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques,", "year": 1998 }, { "authors": [ "David Baraff", "Andrew Witkin", "Michael Kass" ], "title": "Untangling cloth", "venue": "In ACM SIGGRAPH 2003 Papers,", "year": 2003 }, { "authors": [ "Gill Barequet", "Bernard Chazelle", "Leonidas J Guibas", "Joseph SB Mitchell", "Ayellet Tal" ], "title": "Boxtree: A hierarchical representation for surfaces in 3d", "venue": "In Computer Graphics Forum,", "year": 1996 }, { "authors": [ "Thomas Bellotti", "Maxime Theillard" ], "title": "A coupled level-set and reference map method for interface representation with applications to two-phase flows simulation", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Bharat Lal Bhatnagar", "Garvita Tiwari", "Christian Theobalt", "Gerard Pons-Moll" ], "title": "Multi-garment net: Learning to dress 3d people from images", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Javier Bonet", "Richard D Wood" ], "title": "Nonlinear continuum mechanics for finite element analysis", "venue": "Cambridge university press,", "year": 1997 }, { "authors": [ "R. Bridson", "S. Marino", "R. Fedkiw" ], "title": "Simulation of clothing with folds and wrinkles", "venue": "In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation,", "year": 2003 }, { "authors": [ "Robert Bridson", "Ronald Fedkiw", "John Anderson" ], "title": "Robust treatment of collisions, contact and friction for cloth animation", "venue": "ACM Trans. Graph.,", "year": 2002 }, { "authors": [ "Matthew Cong", "Michael Bao", "Jane L. E", "Kiran S. 
Bhat", "Ronald Fedkiw" ], "title": "Fully automatic generation of anatomical face simulation models", "venue": "In Proceedings of the 14th ACM SIGGRAPH / Eurographics Symposium on Computer Animation,", "year": 2015 }, { "authors": [ "Edilson De Aguiar", "Leonid Sigal", "Adrien Treuille", "Jessica K Hodgins" ], "title": "Stable spaces for real-time clothing", "venue": "ACM Transactions on Graphics (TOG),", "year": 2010 }, { "authors": [ "Zhenglin Geng", "Daniel Johnson", "Ronald Fedkiw" ], "title": "Coercing machine learning to output physically accurate results", "venue": "Journal of Computational Physics,", "year": 2020 }, { "authors": [ "Stefan Gottschalk", "Ming C Lin", "Dinesh Manocha" ], "title": "Obbtree: A hierarchical structure for rapid interference detection", "venue": "In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques,", "year": 1996 }, { "authors": [ "Eitan Grinspun", "Anil N Hirani", "Mathieu Desbrun", "Peter Schröder" ], "title": "Discrete shells", "venue": "In Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation,", "year": 2003 }, { "authors": [ "Peng Guan", "Loretta Reiss", "David A Hirshberg", "Alexander Weiss", "Michael J Black" ], "title": "Drape: Dressing any person", "venue": "ACM Transactions on Graphics (TOG),", "year": 2012 }, { "authors": [ "Erhan Gundogdu", "Victor Constantin", "Amrollah Seifoddini", "Minh Dang", "Mathieu Salzmann", "Pascal Fua" ], "title": "Garnet: A two-stream network for fast and accurate 3d cloth draping", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Marc Habermann", "Weipeng Xu", "Michael Zollhoefer", "Gerard Pons-Moll", "Christian Theobalt" ], "title": "Livecap: Real-time human performance capture from monocular video", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Fabian Hahn", "Bernhard Thomaszewski", "Stelian Coros", "Robert W Sumner", "Forrester Cole", "Mark Meyer", "Tony DeRose", "Markus Gross" ], "title": "Subspace clothing simulation using adaptive bases", "venue": "ACM Transactions on Graphics (TOG),", "year": 2014 }, { "authors": [ "James K Hahn" ], "title": "Realistic animation of rigid bodies", "venue": "Acm Siggraph Computer Graphics,", "year": 1988 }, { "authors": [ "Geoffrey Irving", "Joseph Teran", "Ronald Fedkiw" ], "title": "Invertible finite elements for robust simulation of large deformation", "venue": "In Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation,", "year": 2004 }, { "authors": [ "Alec Jacobson", "Olga Sorkine" ], "title": "Stretchable and twistable bones for skeletal shape deformation", "venue": "In Proceedings of the 2011 SIGGRAPH Asia Conference,", "year": 2011 }, { "authors": [ "Ning Jin", "Yilin Zhu", "Zhenglin Geng", "Ronald Fedkiw" ], "title": "A pixel-based framework for data-driven clothing", "venue": "In Proceedings of the 19th ACM SIGGRAPH / Eurographics Symposium on Computer Animation,", "year": 2020 }, { "authors": [ "Ladislav Kavan", "Jiří Žára" ], "title": "Spherical blend skinning: a real-time deformation of articulated models", "venue": "In Proceedings of the 2005 symposium on Interactive 3D graphics and games,", "year": 2005 }, { "authors": [ "Ladislav Kavan", "Steven Collins", "Jiří Žára", "Carol O’Sullivan" ], "title": "Skinning with dual quaternions", "venue": "In Proceedings of the 2007 symposium on Interactive 3D graphics and games,", "year": 2007 }, { "authors": [ "Diederik P Kingma", 
"Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Zorah Lahner", "Daniel Cremers", "Tony Tung" ], "title": "Deepwrinkles: Accurate and realistic clothing modeling", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jeff Lander" ], "title": "Skin them bones: Game programming for the web generation", "venue": "Game Developer Magazine,", "year": 1998 }, { "authors": [ "Binh Huy Le", "Jessica K Hodgins" ], "title": "Real-time skeletal skinning with optimized centers of rotation", "venue": "ACM Transactions on Graphics (TOG),", "year": 2016 }, { "authors": [ "Minjae Lee", "David Hyde", "Michael Bao", "Ronald Fedkiw" ], "title": "A skinned tetrahedral mesh for hair animation and hair-water interaction", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2018 }, { "authors": [ "Minjae Lee", "David Hyde", "Kevin Li", "Ronald Fedkiw" ], "title": "A robust volume conserving method for character-water interaction", "venue": "In Proceedings of the 18th annual ACM SIGGRAPH/Eurographics Symposium on Computer Animation,", "year": 2019 }, { "authors": [ "John P Lewis", "Matt Cordner", "Nickson Fong" ], "title": "Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation", "venue": "In Proceedings of the 27th annual conference on Computer graphics and interactive techniques,", "year": 2000 }, { "authors": [ "Ming Lin", "Stefan Gottschalk" ], "title": "Collision detection between geometric models: A survey", "venue": "In Proc. of IMA conference on mathematics of surfaces,", "year": 1998 }, { "authors": [ "Matthew Loper", "Naureen Mahmood", "Javier Romero", "Gerard Pons-Moll", "Michael J Black" ], "title": "Smpl: A skinned multi-person linear model", "venue": "ACM transactions on graphics (TOG),", "year": 2015 }, { "authors": [ "Nadia Magnenat-Thalmann", "Richard Laperrire", "Daniel Thalmann" ], "title": "Joint-dependent local deformations for hand animation and object grasping", "venue": "In Proceedings on Graphics Interface", "year": 1988 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Neil Molino", "Robert Bridson", "Joseph Teran", "Ronald Fedkiw" ], "title": "A crystalline, red green strategy for meshing highly deformable objects with tetrahedra", "venue": "In IMR,", "year": 2003 }, { "authors": [ "Matthias Müller", "Nuttapong Chentanez", "Tae-Yong Kim", "Miles Macklin" ], "title": "Air meshes for robust collision handling", "venue": "ACM Transactions on Graphics (TOG),", "year": 2015 }, { "authors": [ "Ryota Natsume", "Shunsuke Saito", "Zeng Huang", "Weikai Chen", "Chongyang Ma", "Hao Li", "Shigeo Morishima" ], "title": "Siclope: Silhouette-based clothed people", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alexandros Neophytou", "Adrian Hilton" ], "title": "A layered model of human body and garment deformation", "venue": "In 2014 2nd International Conference on 3D Vision,", "year": 2014 }, { "authors": [ "Hayato Onizuka", "Zehra Hayirci", "Diego Thomas", "Akihiro Sugimoto", "Hideaki Uchiyama", "Rinichiro Taniguchi" ], "title": "Tetratsdf: 3d human reconstruction from a single image with a tetrahedral outer shell", "venue": "In Proceedings of the IEEE/CVF Conference on 
Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Stanley Osher", "Ronald Fedkiw" ], "title": "Level Set Methods and Dynamic Implicit Surfaces", "venue": null, "year": 2002 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Chaitanya Patel", "Zhouyingcheng Liao", "Gerard Pons-Moll" ], "title": "Tailornet: Predicting clothing in 3d as a function of human pose, shape and garment style", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Gerard Pons-Moll", "Sergi Pujades", "Sonny Hu", "Michael J Black" ], "title": "Clothcap: Seamless 4d clothing capture and retargeting", "venue": "ACM Transactions on Graphics (TOG),", "year": 2017 }, { "authors": [ "Tiberiu Popa", "Quan Zhou", "Derek Bradley", "Vladislav Kraevoy", "Hongbo Fu", "Alla Sheffer", "Wolfgang Heidrich" ], "title": "Wrinkling captured garments using space-time data-driven deformation", "venue": "In Computer Graphics Forum,", "year": 2009 }, { "authors": [ "Nadia Robertini", "Edilson De Aguiar", "Thomas Helten", "Christian Theobalt" ], "title": "Efficient multi-view performance capture of fine-scale surface detail", "venue": "In 2014 2nd International Conference on 3D Vision,", "year": 2014 }, { "authors": [ "Shunsuke Saito", "Zeng Huang", "Ryota Natsume", "Shigeo Morishima", "Angjoo Kanazawa", "Hao Li" ], "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "venue": "In Proceedings of the International Conference on Computer Vision (ICCV)", "year": 2019 }, { "authors": [ "Shunsuke Saito", "Tomas Simon", "Jason Saragih", "Hanbyul Joo" ], "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Igor Santesteban", "Miguel A Otaduy", "Dan Casas" ], "title": "Learning-based animation of clothing for virtual try-on", "venue": "In Computer Graphics Forum,", "year": 2019 }, { "authors": [ "Andrew Selle", "Jonathan Su", "Geoffrey Irving", "Ronald Fedkiw" ], "title": "Robust high-resolution cloth using parallelism, history-based collisions, and accurate friction", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2008 }, { "authors": [ "Eftychios Sifakis", "Sebastian Marino", "Joseph Teran" ], "title": "Globally coupled collision handling using volume preserving impulses", "venue": "In Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation,", "year": 2008 }, { "authors": [ "Joseph Teran", "Neil Molino", "Ronald Fedkiw", "Robert Bridson" ], "title": "Adaptive physics based tetrahedral mesh generation using level sets", "venue": "Engineering with computers,", "year": 2005 }, { "authors": [ "Joseph Teran", "Eftychios Sifakis", "Geoffrey Irving", "Ronald Fedkiw" ], "title": "Robust quasistatic finite elements and flesh simulation", "venue": "In Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation,", "year": 2005 }, { "authors": [ "Pascal Volino", "Nadia Magnenat-Thalmann" ], "title": "Animating complex hairstyles in real-time", "venue": "In Proceedings of the ACM symposium on Virtual reality software and technology,", 
"year": 2004 }, { "authors": [ "Pascal Volino", "Nadia Magnenat-Thalmann" ], "title": "Real-time animation of complex hairstyles", "venue": "IEEE Transactions on Visualization and Computer Graphics,", "year": 2006 }, { "authors": [ "Robert Webb", "Mike Gigante" ], "title": "Using dynamic bounding volume hierarchies to improve efficiency of rigid body simulations", "venue": "In Visual Computing,", "year": 1992 }, { "authors": [ "Jane Wu", "Zhenglin Geng", "Ning Jin", "Ronald Fedkiw" ], "title": "Physbam virtual cloth dataset, 2020a. http://physbam.stanford.edu", "venue": null, "year": 2020 }, { "authors": [ "Jane Wu", "Yongxu Jin", "Zhenglin Geng", "Hui Zhou", "Ronald Fedkiw" ], "title": "Recovering geometric information with learned texture perturbations", "venue": "arXiv preprint arXiv:2001.07253,", "year": 2020 }, { "authors": [ "Kui Wu", "Cem Yuksel" ], "title": "Real-time hair mesh simulation", "venue": "In Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games,", "year": 2016 }, { "authors": [ "Weipeng Xu", "Avishek Chatterjee", "Michael Zollhöfer", "Helge Rhodin", "Dushyant Mehta", "Hans-Peter Seidel", "Christian Theobalt" ], "title": "Monoperfcap: Human performance capture from monocular video", "venue": "ACM Transactions on Graphics (ToG),", "year": 2018 }, { "authors": [ "Jinlong Yang", "Jean-Sébastien Franco", "Franck Hétroy-Wheeler", "Stefanie Wuhrer" ], "title": "Analyzing clothing layer deformation statistics of 3d human motions", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Tao Yu", "Zerong Zheng", "Yuan Zhong", "Jianhui Zhao", "Qionghai Dai", "Gerard Pons-Moll", "Yebin Liu" ], "title": "Simulcap: Single-view human performance capture with cloth simulation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Cloth is particularly challenging for neural networks to model due to the complex physical processes that govern how cloth deforms. In physical simulation, cloth deformation is typically modeled via a partial differential equation that is discretized with finite element models ranging in complexity from variational energy formulations to basic masses and springs, see e.g. Baraff & Witkin (1998); Bridson et al. (2002; 2003); Grinspun et al. (2003); Baraff et al. (2003); Selle et al. (2008). Mimicking these complex physical processes and numerical algorithms with machine learning inference has shown promise, but still struggles to capture high-frequency folds/wrinkles. PCA-based methods De Aguiar et al. (2010); Hahn et al. (2014) remove important high variance details and struggle with nonlinearities emanating from joint rotations and collisions. More recently, Gundogdu et al. (2019); Santesteban et al. (2019); Patel et al. (2020); Jin et al. (2020) leverage body skinning Magnenat-Thalmann et al. (1988); Lander (1998); Lewis et al. (2000) to capture some degree of the nonlinearity; the cloth is then represented via learned offsets from a co-dimension one skinned body surface. Building on this prior work, we propose replacing the skinned co-dimension one body surface parameterization with a skinned (fully) three-dimensional parameterization of the volume surrounding the body.\nWe parameterize the three-dimensional space corresponding to the volumetric region of air surrounding the body with a tetrahedral mesh. In order to do this, we leverage the work of Lee et al. (2018; 2019), which proposed a number of techniques for creating and deforming such a tetrahedral mesh using a variety of skinning and simulation techniques. The resulting kinematically deforming skinned mesh (KDSM) was shown to be beneficial for both hair animation/simulation Lee et al. (2018) and water simulation Lee et al. (2019). Here, we only utilize the most basic version of the KDSM, assigning skinning weights to its vertices so that it deforms with the underlying joints similar to a skinned body surface (alternatively, one could train a neural network to learn more complex KDSM deformations). This allows us to make a very straightforward and fair comparison between learning offsets from a skinned body surface and learning offsets from a skinned parameterization of three-dimensional space. Our experiments showed an overall reduction in error of approximately\n50% (see Table 2 and Figure 8) as well as the removal of visual/geometric artifacts (see e.g. Figure 9) that can be directly linked to the usage of the body surface mesh, and thus we advocate the KDSM for further study. The neural network we trained for a particular body can also be used to infer cloth with unique wrinkle patterns on different body shapes and T-shirt sizes without retraining (see supplemental material). In order to further illustrate the efficacy of our approach, we show that the KDSM is amenable to being used with recently proposed works on texture sliding for better three-dimensional reconstruction Wu et al. (2020b) as well as in conjunction with networks that use a postprocess for better physical accuracy in the L∞ norm Geng et al. 
(2020) (see Figure 10).\nIn summary, our specific contributions are: 1) a novel three-dimensional parameterization for virtual cloth adapted from the KDSM, 2) an extension (enabling plastic deformation) of the KDSM to accurately model cloth deformation, and 3) a learning framework to efficiently infer such deformations from body pose. The mean error of the cloth predicted in Jin et al. (2020) is five standard deviations higher than the mean error of our results." }, { "heading": "2 RELATED WORK", "text": "Cloth: Data-driven cloth prediction using deep learning has shown significant promise in recent years. To generate clothing on the human body, a common approach is to reconstruct the cloth and body jointly Alldieck et al. (2018a;b); Xu et al. (2018); Alldieck et al. (2019a;b); Habermann et al. (2019); Natsume et al. (2019); Saito et al. (2019); Yu et al. (2019); Bhatnagar et al. (2019); Onizuka et al. (2020); Saito et al. (2020). In such cases, human body models such as SCAPE Anguelov et al. (2005) and SMPL Loper et al. (2015) can be used to reduce the dimensionality of the output space. To predict cloth shape, a number of works have proposed learning offsets from the body surface Guan et al. (2012); Neophytou & Hilton (2014); Pons-Moll et al. (2017); Lahner et al. (2018); Yang et al. (2018); Gundogdu et al. (2019); Santesteban et al. (2019); Patel et al. (2020); Jin et al. (2020) such that body skinning can be leveraged. There are a variety of skinning techniques used in animation; the most popular approach is linear blend skinning (LBS) Magnenat-Thalmann et al. (1988); Lander (1998). Though LBS is efficient and computationally inexpensive, it suffers from well-known artifacts addressed in Kavan & Žára (2005); Kavan et al. (2007); Jacobson & Sorkine (2011); Le & Hodgins (2016). Since regularization often leads to overly smooth cloth predictions, additional wrinkles/folds can be added to initial network inference results Popa et al. (2009); Mirza & Osindero (2014); Robertini et al. (2014); Lahner et al. (2018); Wu et al. (2020b); Patel et al. (2020). Most recently, Patel et al. (2020) parameterized cloth as a submesh of the SMPL body mesh and decomposed cloth deformation into low-frequency and high-frequency components. However, this parameterization limits cloth to be bound by the topology of SMPL, and the high-frequency folds/wrinkles added by the network are not constrained to match those in the ground truth data. In contrast, our method allows one to predict cloth deformation independent of a predefined PCA basis, and using Geng et al. (2020) ensures that folds/wrinkles are physically consistent.\n3D Parameterization: Parameterizing the air surrounding deformable objects is a way of treating collisions during physical simulation Sifakis et al. (2008); Müller et al. (2015); Wu & Yuksel (2016). For hair simulation in particular, previous works have parameterized the volume enclosing the head or body using tetrahedral meshes Lee et al. (2018; 2019) or lattices Volino & Magnenat-Thalmann (2004; 2006). These volumes are animated such that the embedded hairs follow the body as it deforms enabling efficient hair animation, simulation, and collisions. Interestingly, deforming a low-dimensional reference map that parameterizes high-frequency details has been explored in computational physics as well, particularly for fluid simulation, see e.g. Bellotti & Theillard (2019)." 
}, { "heading": "3 SKINNING A 3D PARAMETERIZATION", "text": "We generate a KDSM using red/green tetrahedralization Molino et al. (2003); Teran et al. (2005a) to parameterize a three-dimensional volume surrounding the body. Starting with the body in the T-pose, we surround it with an enlarged bounding box containing a three-dimensional Cartesian grid. As is typical for collision bodies in computer graphics Bridson et al. (2003), we generate a level set representation separating the inside of the body from the outside (see e.g. Osher & Fedkiw (2002)). See Figure 1a. Next, a thickened level set is computed by subtracting a constant value from the current level set values (Figure 1b). Then, we use red/green tetrahedralization as outlined in Molino\net al. (2003); Teran et al. (2005a) to generate a suitable tetrahedral mesh (Figure 1c). Optionally, this mesh could be compressed to the level set boundary using either physics or optimization, but we forego this step because the outer boundary is merely where our parameterization ends and does not represent an actual surface as in Molino et al. (2003); Teran et al. (2005a).\nSkinning weights are assigned to the KDSM using linear blend skinning (LBS) Magnenat-Thalmann et al. (1988); Lander (1998), just as one would skin a co-dimension one body surface parameterization. In order to skin the KDSM so that it follows the body as it moves, each vertex vk is assigned a nonzero weight wkj for each joint j it is associated with. Then, given a pose θ with joint transformations Tj(θ), the world space position of each vertex is given by vk(θ) = ∑ j wkjTj(θ)v j k where v j k is the untransformed location of vertex vk in the local reference space of joint j. See Figure 1d. Importantly, it can be quite difficult to significantly deform tetrahedral meshes without having some tetrahedra invert Irving et al. (2004); Teran et al. (2005b); thus, we address inversion and robustness issues/details in Section 5." }, { "heading": "4 EMBEDDING CLOTH IN THE KDSM", "text": "In continuum mechanics, deformation is defined as a mapping from a material space to the world space, and one typically decomposes this mapping into purely rigid components and geometric strain measures, see e.g. Bonet & Wood (1997). Similar in spirit, we envision the T-pose KDSM as the material space and the skinned KDSM as being defined by a deformation mapping to world space for each pose θ. As such, we denote the position of each cloth vertex in the material space (i.e. T-pose, see Figure 2a) as umoi . We embed each cloth vertex u mo i into the tetrahedron that contains it via barycentric weights λmoik , which are only nonzero for the parent tetrahedron’s vertices. Then, given a pose θ, a cloth vertex’s world space location is defined as ui(θ) = ∑ k λ mo ik vk(θ) so that it is constrained to follow the KDSM deformation, assuming linearity in each tetrahedron (see Figure 2b). Technically, this is an indirect skinning of the cloth with its skinning weights computed as a linear combination of the skinning weights of its parent tetrahedron’s vertices, and leads to the obvious errors one would expect (see e.g. Figure 3, second row).\nThe KDSM approximates a deformation mapping for the region surrounding the body. This approximation could be improved via physical simulation (see e.g. Lee et al. (2018; 2019)), which is computationally expensive but could be made more efficient using a neural network. 
However, the tetrahedral mesh is only well suited to capture deformations of a volumetric three-dimensional space and as such struggles to capture deformations intrinsic to co-dimension one surfaces/shells including the bending, wrinkling, and folding important for cloth. Thus, we take further motivation from constitutive mechanics (see e.g. Bonet & Wood (1997)) and allow the cloth vertices to move in material space (the T-pose) akin to plastic deformation. That is, we use plastic deformation in the material space in order to recapture elastic deformations (e.g. bending) lost/recovered when embedding cloth into a tetrahedral mesh. These elastic deformations are encoded as a pose-dependent plastic displacement for each cloth vertex, i.e. d_i(θ); then, the pose-dependent, plastically deformed material space position of each cloth vertex is given by u_i^m(θ) = u_i^{m_o} + d_i(θ).\nGiven a pose θ, u_i^m(θ) will not necessarily have the same parent tetrahedron or barycentric weights as u_i^{m_o}; thus, a new embedding is computed for u_i^m(θ) obtaining new barycentric weights λ_ik^m(θ). Using this new embedding, the position of the cloth vertex in pose θ will be u_i(θ) = ∑_k λ_ik^m(θ) v_k(θ). Ideally, if the d_i(θ) are computed correctly, u_i(θ) will agree with the ground truth location of cloth vertex i in pose θ. The second row of Figure 4 shows cloth in the material space T-pose plastically deformed such that its skinned location in pose θ (Figure 4, first row) well matches the ground truth shown in the first row of Figure 3. Learning d_i(θ) for each vertex can be accomplished in exactly the same fashion as learning displacements from the skinned body surface mesh, and thus we use the same approach as proposed in Jin et al. (2020). Afterwards, an inferred d_i(θ) is used to compute u_i^m(θ) followed by λ_ik^m(θ), and finally u_i(θ). Addressing efficiency, note that only the vertices of the parent tetrahedra of u^m(θ) need to be skinned, not the entire tetrahedral mesh.\nIn order to compute each training example (θ, d(θ)), we examine the ground truth cloth in pose θ, i.e. u^{GT}(θ). For each cloth vertex u_i^{GT}(θ), we find the deformed tetrahedron it is located in and compute barycentric weights λ_ik^{GT}(θ) resulting in u_i^{GT}(θ) = ∑_k λ_ik^{GT}(θ) v_k(θ). Then, that vertex's material space (T-pose) location is given by u_i^m(θ) = ∑_k λ_ik^{GT}(θ) v_k^m, where v_k^m are the material space (T-pose) positions of the tetrahedral mesh (which are the same for all poses, and thus not a function of θ). Finally, we define d_i(θ) = u_i^m(θ) − u_i^{m_o}." }, { "heading": "5 INVERSION AND ROBUSTNESS", "text": "Unfortunately, the deformed KDSM will generally contain both inverted and overlapping tetrahedra, both of which can cause a ground truth cloth vertex u_i^{GT}(θ) to be contained in more than one deformed tetrahedron, leading to multiple candidates for u_i^m(θ) and d_i(θ). Although physical simulation can be used to reverse some of these inverted elements Irving et al. (2004); Teran et al. (2005b) as was done in Lee et al. (2018; 2019), it is typically not feasible to remove all inverted tetrahedra. Additionally, overlapping tetrahedra occur quite frequently between the arm and the torso, especially because the KDSM needs to be thick enough to ensure that it contains the cloth as it deforms.\nBefore resolving which parent tetrahedron each vertex with multiple potential parents should be embedded into, we first robustly assemble a list of all such candidate parent tetrahedra as follows.
Given a deformed tetrahedral mesh v(θ) in pose θ, we create a bounding box hierarchy acceleration structure Hahn (1988); Webb & Gigante (1992); Barequet et al. (1996); Gottschalk et al. (1996); Lin & Gottschalk (1998) for the tetrahedral mesh built from a slightly thickened bounding box around each tetrahedron. Then given a ground truth cloth vertex, u_i^{GT}(θ), we robustly find all tetrahedra containing (or almost containing) it using a minimum barycentric weight of −ε with ε > 0. We prune this list to remove tetrahedra that may be subject to numerical precision errors that could cause a vertex to erroneously be identified as inside multiple or no tetrahedra. This is done by first sorting the tetrahedra on the list based on their largest minimum barycentric weight, i.e. preferring tetrahedra the vertex is deeper inside. Starting with the first tetrahedron on the sorted list, we identify the face across from the vertex with the smallest barycentric weight and prune all of that face's vertex neighbors (and thus face/edge neighbors too) from the remainder of the list. Then, the next (non-deleted) tetrahedron on the list is considered, and the process is repeated, etc.\nMethod 1: Any of the parent tetrahedra that remain on the list may be chosen to obtain training examples with zero error as compared to the ground truth, although different choices lead to higher/lower variance in d(θ) and thus higher/lower demands on the neural network. To establish a baseline, we first take the naive approach of randomly choosing u_i^m(θ) when multiple candidates exist. This can lead to high variance in d(θ) and subsequent ringing artifacts during inference. See Figure 5.\nMethod 2: Aiming for lower variance in the training data, we leverage the method of Jin et al. (2020) where UV texture space and normal direction offsets from the skinned body surface are calculated for each pose θ in the training examples. These same offsets can be used in any pose, since the UVN coordinate system is still defined (albeit deformed) in every pose. Thus, we utilize these UVN offsets in our material space (T-pose) in order to define u^m(θ) and subsequently d(θ). In particular, given the shrinkwrapped cloth in the T-pose, we apply UVN offsets corresponding to pose θ. Although this results in lower variance than that obtained from Method 1, the resulting d(θ) do not exactly recover the ground truth cloth u^{GT}(θ). See Figure 6.\nFigure 6: (a) shows the result obtained using Method 2 to compute u^m(θ) in material space (the T-pose) for a pose θ. (b) shows the result obtained using this embedding to compute u(θ) as compared to the ground truth u^{GT}(θ) (c). Although the variance in u^m(θ) and d(θ) is lower than that obtained using Method 1, the training examples now contain errors (shown with a heat map) when compared to the ground truth.\nHybrid Method: When a vertex has only one candidate parent tetrahedron, Method 1 is used. When there is more than one candidate parent tetrahedron, we choose the parent that gives an embedding closest to the result of Method 2 (in the T-pose) as long as the disagreement is below a threshold (1 cm). As shown (for a particular training example) in Figure 7a, this can leave a number of potentially high variance vertices undefined. Aiming for smoothness, we use the Poisson morph from Cong et al. (2015) to morph from the low variance results of Method 2 to the partially-defined cloth mesh shown in Figure 7a, utilizing the already defined/valid vertices as Dirichlet boundary conditions.
See Figure 7b. Although smooth, the resulting predictions may contain significant errors, and thus we only validate those that are within a threshold (1 cm) of the results of Method 2. See Figure 7c. The Poisson equation morph guarantees smoothness, while only utilizing the morphed vertices close to the results of Method 2 limits errors (as compared to the ground truth) to some degree. This process is repeated until no newly morphed vertices are within the threshold (1 cm). At that point, the remaining vertices are assigned their morphed values despite any errors they might contain. See Figure 7d.\n6 EXPERIMENTS\nDataset Generation: We use the cloth dataset from Jin et al. (2020), which consists of T-shirt meshes corresponding to about 10,000 poses for a particular body Wu et al. (2020a). For each pose, the cloth was simulated on the scanned body, taking into account gravity, elastic and damping forces, and collision, contact and friction forces. We applied an 80-10-10 split to obtain training, validation, and test datasets, respectively. Table 1 compares the maximum L2 and L∞ norms as compared to the ground truth for each of the three methods used to generate training examples. While Method 1 minimizes cloth vertex errors, the resulting d(θ) contains high variance. Method 2 has significant vertex errors, but significantly lower variance in d(θ). We leverage the advantages of both using the hybrid method.\nNetwork Training: We adapt the network architecture from Jin et al. (2020) for learning the displacements d(θ), i.e. by storing the displacements d(θ) as pixel-based cloth images for the front and back sides of the T-shirt. Given joint transformation matrices of shape 1 × 1 × 90 for pose θ, the network applies transpose convolution, batch normalization, and ReLU activation layers. The output of the network is 128 × 128 × 6, where the first three dimensions represent the predicted displacements for the front side of the T-shirt, and the last three dimensions represent those for the back side. We train with an L2 loss on the difference between the ground truth displacements d(θ) and network predictions d̂(θ), using the Adam optimizer Kingma & Ba (2014) with a 10^−3 learning rate in PyTorch Paszke et al. (2017).\nNetwork Inference: From the network output d̂(θ), we define û^m(θ) = u^{m_o} + d̂(θ), which is then embedded into the material space (T-pose) tetrahedral mesh and subsequently skinned to world space to obtain the cloth mesh prediction û(θ). Table 2 summarizes the network inference results on the test dataset (not used in training). While all three methods detailed in Section 5 outperform the method proposed in Jin et al. (2020), the hybrid method achieved the lowest average vertex error and standard deviation. Figure 8 shows histograms of the average vertex error over all examples in the test dataset for the hybrid method and Jin et al. (2020). Note that the mean error of Jin et al. (2020) is five standard deviations above the mean of the hybrid method. Table 3 shows the errors in volume enclosed by the cloth (after capping the neck/sleeves/torso).\nThere are significant visual improvements as well, see e.g. Figure 9. In addition, we evaluate the hybrid method network on a motion capture sequence from cmu and compare the inferred cloth to the results in Jin et al. (2020). The hybrid method is able to achieve greater temporal consistency; see the supplemental video.
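The inference pipeline just described can be summarized in the following sketch. The names are hypothetical, a naive containing-tetrahedron search is used for brevity (the paper uses a bounding box hierarchy), and skin_fn stands in for the LBS evaluation of the four parent-tetrahedron vertices; this is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

def barycentric_in_tet(p, tet):
    """Barycentric coordinates of point p with respect to a tetrahedron tet (4, 3)."""
    T = (tet[1:] - tet[0]).T                  # 3x3 matrix of edge vectors
    beta = np.linalg.solve(T, p - tet[0])     # weights of vertices 1..3
    return np.concatenate(([1.0 - beta.sum()], beta))

def infer_cloth(u_mo, d_hat, tet_verts_rest, tets, skin_fn):
    """u_i(theta) = sum_k lambda_ik(theta) v_k(theta) for re-embedded cloth vertices.

    u_mo:           (N, 3) material-space rest cloth positions u^{m_o}
    d_hat:          (N, 3) network-predicted displacements d_hat(theta)
    tet_verts_rest: (K, 3) T-pose tetrahedral mesh vertices
    tets:           (M, 4) tetrahedron vertex indices
    skin_fn:        maps a 4-index tetrahedron to its (4, 3) skinned vertex positions
    """
    u_m = u_mo + d_hat                        # plastically deformed material-space positions
    u = np.empty_like(u_mo)
    for i, p in enumerate(u_m):
        for tet in tets:                      # naive search; a BVH accelerates this in practice
            lam = barycentric_in_tet(p, tet_verts_rest[tet])
            if (lam >= -1e-8).all():          # p lies (numerically) inside this tetrahedron
                u[i] = lam @ skin_fn(tet)     # skin only the 4 parent vertices and blend
                break
    return u
```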
To demonstrate the efficacy of our approach in conjunction with other methods, we apply texture sliding from Wu et al. (2020b) and the physical post process from Geng et al. (2020) to the results of the hybrid method network predictions, see Figure 10." }, { "heading": "7 DISCUSSION", "text": "In this paper, we presented a framework for learning cloth deformation using a volumetric parameterization of the air surrounding the body. This parameterization was implicitly defined via a tetrahedral mesh that was skinned to follow the body as it animates, i.e. the KDSM. A neural network was used to predict offsets in material space (the T-pose) such that the result well matched the ground truth after skinning the KDSM. The cloth predicted using the hybrid method detailed in Section 5 exhibits half the error of the prior state of the art; in fact, the mean error from Jin et al. (2020) is five standard deviations above the mean resulting from our hybrid approach. Our results demonstrate that the KDSM is a promising foundation for learning virtual cloth and potentially for hair and solid/fluid interactions as well. Moreover, the KDSM should prove useful for treating cloth collisions, multiple garments, and interactions with external physics.\nThe KDSM intrinsically provides a more robust parameterization of three-dimensional space, since it contains a true extra degree of freedom as compared to the degenerate co-dimension one body surface. In particular, embedding cloth into a tetrahedral mesh has stability guarantees that do not exist when computing offsets from the body surface. See Figure 11. We believe that the significant decrease in network prediction errors is at least partially attributable to increased stability from using a volumetric parameterization." }, { "heading": "A MODIFIED BODY SHAPES AND CLOTHING SIZES", "text": "We demonstrate that our network trained to infer cloth on a particular body, e.g. from Wu et al. (2020a), can be used for other body parameterizations and body shapes without retraining. Using the trained hybrid method network (see Section 6), the inferred T-shirt for a given pose is transferred to the SMPL body model Loper et al. (2015) as follows. First, we generate a skinned KDSM for the SMPL body as described in Section 3. Next, we transfer the T-pose cloth mesh to the SMPL body in the T-pose via quasistatic simulation. Then, for any skeletal pose, KDSM embedding offsets for the cloth on the SMPL body are inferred using the trained network. See Figure 13. The cloth can also be scaled to different sizes depending on user preference. See Figure 14." } ]
2020
null
SP:1480f9299a4918309d9d2b0f658fb0f863921387
[ "1.The authors propose an extension of the CE loss to reduce classification bias that occurs in present methods and datasets. They calculate Maximum Entropy (ME) for images on the entire training dataset and then calculate the reconstruction loss between this and the ME for convolutional kernels during training. Their experiments results show that minimizing this reconstruction loss along with CE speeds up convergence. " ]
Categorical Cross Entropy (CCE) is the most commonly used loss function in deep neural networks such as Convolutional Neural Networks (CNNs) for multi-class classification problems. CCE, however, is highly susceptible to noise: CNN models trained without accounting for the unique noise characteristics of the input data, or for noise introduced during model training, invariably suffer from overfitting, which affects model generalizability. The lack of generalizability becomes especially apparent in the context of ethnicity/racial image classification problems encountered in the domain of computer vision. One such problem is the unintended discriminatory racial bias that CNN models trained using CCE fail to adequately address. In other words, CNN models trained using CCE offer a skewed representation of classification performance, favoring lighter skin tones. In this paper, we propose and empirically validate a novel noise-robust extension to the existing CCE loss function called Maximum Categorical Cross-Entropy (MCCE), which utilizes CCE loss and a novel reconstruction loss, calculated using the Maximum Entropy (ME) measures of the convolutional kernel weights and the input training dataset. We compare MCCE-trained with CCE-trained models on two benchmarking datasets, colorFERET and UTKFace, using a Residual Network (ResNet) CNN architecture. MCCE-trained models reduce overfitting by 5.85% and 4.3% on the colorFERET and UTKFace datasets respectively. In cross-validation testing, MCCE-trained models outperform CCE-trained models by 8.8% and 25.16% on the colorFERET and UTKFace datasets respectively. MCCE addresses and mitigates the persistent problem of inadvertent racial bias for facial recognition problems in the domain of computer vision.
[]
[ { "authors": [ "Berkin Bilgic", "Itthi Chatnuntawech", "Audrey P Fan", "Kawin Setsompop", "Stephen F Cauley", "Lawrence L Wald", "Elfar Adalsteinsson" ], "title": "Fast image reconstruction with L2-regularization", "venue": "Journal of magnetic resonance imaging,", "year": 2014 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "L2 regularization for learning kernels", "venue": "arXiv preprint arXiv:1205.2653,", "year": 2012 }, { "authors": [ "Philip J Davis" ], "title": "Interpolation and approximation", "venue": "Courier Corporation,", "year": 1975 }, { "authors": [ "David L Donoho", "Iain M Johnstone", "Alan S Stern", "Jeffrey C Hoch" ], "title": "Does the maximum entropy method improve sensitivity", "venue": "Proceedings of the National Academy of Sciences,", "year": 1990 }, { "authors": [ "Siyao Fu", "Haibo He", "Zeng-Guang Hou" ], "title": "Learning race from face: A survey", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2014 }, { "authors": [ "Vaishali Ganganwar" ], "title": "An overview of classification algorithms for imbalanced datasets", "venue": "International Journal of Emerging Technology and Advanced Engineering,", "year": 2012 }, { "authors": [ "Aritra Ghosh", "Himanshu Kumar", "PS Sastry" ], "title": "Robust loss functions under label noise for deep neural networks", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep learning. book in preparation for mit press", "venue": "URL¡ http://www. deeplearningbook. org,", "year": 2016 }, { "authors": [ "Ralph VL Hartley" ], "title": "Transmission of information", "venue": "Bell Labs Technical Journal,", "year": 1928 }, { "authors": [ "Douglas M Hawkins" ], "title": "The problem of overfitting", "venue": "Journal of chemical information and computer sciences,", "year": 2004 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Anil K Jain" ], "title": "Fundamentals of digital image processing", "venue": "Englewood Cliffs, NJ: Prentice Hall,,", "year": 1989 }, { "authors": [ "R Johnson", "J Shore" ], "title": "Comments on and correction to” Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy”(Jan 80 26-37)[Corresp.", "venue": "IEEE transactions on Information Theory,", "year": 1983 }, { "authors": [ "James Macqueen", "Jacob Marschak" ], "title": "Partial knowledge, entropy, and estimation", "venue": "Proceedings of the National Academy of Sciences,", "year": 1975 }, { "authors": [ "Vitaly Maiorov" ], "title": "Approximation by neural networks and learning theory", "venue": "Journal of Complexity,", "year": 2006 }, { "authors": [ "P Jonathon Phillips", "Harry Wechsler", "Jeffery Huang", "Patrick J Rauss" ], "title": "The FERET database and evaluation procedure for face-recognition algorithms", "venue": "Image and vision computing,", "year": 1998 }, { "authors": [ "MI Reis", "Nilson Costa Roberty" ], "title": "Maximum entropy algorithms for image reconstruction from 
projections", "venue": "Inverse Problems,", "year": 1992 }, { "authors": [ "Ohad Shamir" ], "title": "Distribution-specific hardness of learning neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "John Shore", "Rodney Johnson" ], "title": "Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy", "venue": "IEEE Transactions on information theory,", "year": 1980 }, { "authors": [ "Connor Shorten", "Taghi M Khoshgoftaar" ], "title": "A survey on image data augmentation for deep learning", "venue": "Journal of Big Data,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Ribeiro", "Fabian Pedregosa", "Paul van" ], "title": "Mulbregt, and SciPy 1. 0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python", "venue": "Nature Methods,", "year": 2020 }, { "authors": [ "Stephen J. Wernecke" ], "title": "Maximum entropy image reconstruction", "venue": "IEEE Transactions on Computers,", "year": 1977 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Michael C Mozer", "Yoram Singer" ], "title": "Identity crisis: Memorization and generalization under extreme overparameterization", "venue": null, "year": 1902 }, { "authors": [ "Zhifei Zhang", "Yang Song", "Hairong Qi" ], "title": "Age progression/regression by conditional adversarial autoencoder", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Zhilu Zhang", "Mert Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Networks (CNNs) offer state-of-the-art results in computer vision tasks He et al. (2016); Szegedy et al. (2015); Simonyan & Zisserman (2014) but are susceptible to inherent noises in the input training data preempting overfitting on the input data during information propagation. When new data is presented, overfit models do not generalize well and offer significantly lower classification performance, exacerbating the problem of bias towards a specific subset of data. The fundamental learning theory behind CNNs is to approximate an underlying d-dimensional interpolated function f(X) ∈ Rd by using information from n number of d-dimensional input vectors X = {x1, x2, · · · , xn} where xi =< x1, x2, · · · , xd > and i, d ∈ Z>0 Maiorov (2006). The problem of approximation is theoretically non-linear and there is empirical evidence to support the assertion that CNNs simply memorize the input training data Zhang et al. (2016).\nOverfitting occurs when the internal parameters of a CNN model are finely tuned to the unique variances of the input training data that it perfectly models its characteristics Hawkins (2004). Misclassification occurs when overfit models are unable to distinguish between overlapping variances for different classes of images. Reducing overfitting is also difficult since establishing a theoretical understanding or analyzing the mechanisms of learning in CNNs for non-convex optimization problems such as image classification is generally not well understood Shamir (2018).\nA simple way to reduce overfitting is to train models using a very large number of images Shorten & Khoshgoftaar (2019), such as the ImageNet dataset consisting of millions of training images used for the purpose of natural image classification. While using big data solutions might mask the underlying problem of model overfitting, acquisition of clean/noise-free labeled data for supervised model training is challenging. The problem of data acquisition is compounded further by ethical, societal, and practical concerns when dealing with facial datasets, especially for the task of race or gender classification.\nAnother key challenge while creating datasets is the consideration that needs to be made on the distribution of data amongst the multiple classes along with the variability of data within an individual class. Unbalanced datasets where the data distribution of images is not equal for all the classes introduces bias during model training Ganganwar (2012). The only viable solution to rectify imbalanced datasets is to augment or supplement datasets with new images which as mentioned before is an ongoing challenge. To the best of our knowledge, there is no research/work undertaken to optimize data distribution of the convolutional kernel weights during model training. We hypothesize that balancing convolutional kernel data, during model training could aide in mitigating bias and increase classification performance through alleviating the severity of inherent noise.\nSome researchers attribute racial bias of CNN models to noises in the training data and associated labels proposing alternate loss functions like Mean Absolute Error (MAE) Ghosh et al. (2017) to commonly used loss functions like Categorical Cross Entropy (CCE), as explained in Section 2.1. 
MAE was proposed as a noise-robust alternative to mitigate the susceptibility of CNNs to noise, but as Zhang & Sabuncu (2018) assert, MAE is not applicable for complex natural image datasets like ImageNet, and as such it is not considered in this paper. The task of classifying race in human faces is established to be more complex than natural image classification because there exists a narrow range of possible variations in features between human faces of different races, especially when skin tone is not the major determining factor for racial identity Fu et al. (2014).\nIn this paper, we explore the problem of overfitting with respect to racial classification by assessing the train-test divergence to quantify the degree of generalizability, where a higher train-test divergence indicates a greater degree of model overfitting on the training data. We also propose a novel extension to the commonly used CCE loss function using Maximum Entropy (ME) Hartley (1928) measures, called Maximum Categorical Cross Entropy (MCCE). MCCE loss calculations are determined by taking into account the distribution of convolutional kernel weights during model training as well as the traditional CCE loss. Most related works explore model over-parameterization Zhang et al. (2019) or under-parameterization Soltanolkotabi et al. (2018) with unrealistic assumptions made about the distribution of input data; we do not make any such assumptions.\nThe contributions of this paper are as follows:\n• We propose a novel extension to the Categorical Cross Entropy (CCE) loss function using Maximum Entropy (ME) measures, known as Maximum Categorical Cross Entropy (MCCE) loss, to reduce model overfitting.\n• We empirically validate the MCCE loss function with respect to model overfitting using train-test divergence as a metric and evaluate generalizability across datasets by using cross-validation testing." }, { "heading": "2 BACKGROUND", "text": "Section 2.1 presents an understanding of how CCE loss is calculated. Sections 2.2 and ?? detail how kernel regularization and batch normalization influence CCE loss with their limitations. Section 2.3 provides the theoretical background of Maximum Entropy (ME) and methods to calculate ME along with estimating the reconstruction loss." }, { "heading": "2.1 CATEGORICAL CROSS-ENTROPY (CCE) LOSS", "text": "The most commonly used loss function is the Categorical Cross-Entropy (CCE) loss given in Equation (1), which is a measure of the difference between the probability distributions of one-hot encoded CNN computed class labels and ground truths. CNN classification uses a softmax function to calculate the required probability distributions Goodfellow et al. (2016).\nH(p, q) = −∑_{i=1}^{n} p(x_i) log q(x_i), where x_i ∈ X (1)\nIn Equation (1), q(x_i) and p(x_i) represent the probability distributions of the one-hot encoded CNN predicted class labels and ground truths respectively for an input data vector x_i. Given that CNN model training introduces noises during convolutional operations or information propagation, and that any inherent noise present in the input data can significantly affect model performance, a noise-robust alternative to CCE would help improve classification performance and mitigate bias. This is the reason why stochastic optimizers and gradient descent algorithms function using the framework of maximum likelihood estimation."
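As a concrete illustration of Equation (1), a minimal NumPy sketch of the softmax/CCE computation is given below; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def categorical_cross_entropy(scores, y_true):
    """Per-example CCE of Equation (1): H(p, q) = -sum_i p(x_i) log q(x_i).

    scores: (N, C) raw CNN class scores
    y_true: (N, C) one-hot ground-truth distributions p
    """
    shifted = scores - scores.max(axis=1, keepdims=True)                  # numerical stability
    log_q = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))  # log softmax
    return -(y_true * log_q).sum(axis=1)
```

For a one-hot p, this reduces to -log of the softmax probability assigned to the true class, which is exactly the form used in line 4 of Algorithm 1 below.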
}, { "heading": "2.2 KERNEL REGULARIZATION", "text": "The intuition behind regularization is that of Ockham’s razor to penalize complex models and to promote simpler models during training. Unlike empirical risk minimization which only considers loss minimization, regularization was proposed to minimize structural risk which considers both complexity and loss minimization. The most prominent and simple kernels that greatly minimize loss are selected Bilgic et al. (2014). Model complexity is represented in two ways, as a function of the total number of features with nonzero weights (L1) or as a function of all the weights of all the features in a model (L2). L2 regularization is most commonly used in computer vision tasks for CNN models such as ResNet. Model complexity can be quantified using the L2 regularization formula given in Equation (2), defined by using the sum of squares of all the feature weights as the regularization term Cortes et al. (2012).\n||ω||2 = ω21 + ω22 + ω23 + ω2n (2)\nIn Equation (2), the magnitude of the absolute value of the weight ω indicates complexity. Feature weights close to zero have no significant impact on model complexity, while large outlier weight values have a more pronounced impact on ω. The quantity of feature weights n determined using the number of trainable model parameters also contribute greatly to ω and model complexity. Furthermore, kernel regularization as it is implemented currently for CCE loss utilizes CNN computed label errors and does not take the data distribution of the convolutional kernels into account." }, { "heading": "2.3 MAXIMUM ENTROPY AND RECONSTRUCTION LOSS", "text": "The use of Maximum Entropy (ME) for applications such as convolutional kernel analysis is justified since ME is the only consistent way of selecting a single discrete data point from the set of input data vectors to best fit the regression curve, proven axiomatically in Shore & Johnson (1980); Johnson & Shore (1983).\nA method to approximate ME for digital images is through the use of distributed normalized histograms Gonzalez & Woods (2007); Jain (1989). The open-source SciKit-image processing library written in Python can be used to calculate the ME measures for images Virtanen et al. (2020).\nEntropy in images is related to the complexity contained in a given neighborhood, computed by using a circular disk with a radius of r. The disk is used to measure minute variations in local grayscale level distribution. The maximum entropy for an image depends on the number of gray levels, an 8 bit image has 256 gray levels (0-255) which has a theoretical maximum entropy of log2(28) = 8 bits per pixel. Changing the value of r can invariable produce higher or lower ME measure as illustrated in Figure 1. Similarly higher or lower ME values will be obtained while measuring convolutional kernel weights. A decrease in ME divergence can be observed in Figure 1 for r values of 5 and 50 relative to r values of 1 and 5. A significant difference in spatial/semantic information in the images can be seen with greater r values, which suggests loss in precision during approximation.\nME measures for color images require the computation on each of the three color channels, Red (R), Green (G) and Blue (B) i.e. RGB separately and averaging the result. The averaged ME measures for images in the colorFERET and UTKFace datasets are 2.09 and 2.25 bits per pixel respectively using an r value of 1. 
The amount of time taken to calculate the ME measures is insignificant as the ME calculation script can be executed in parallel on the CPU while CNN model training occurs on the GPU, as evidenced in the supplementary data uploaded. Solutions other than ME for image reproduction/reconstruction from noisy or incomplete measurements, such as the use of non-linear variations on Fourier transformations, fail when convolutional kernels are incorporated Donoho et al. (1990). Furthermore, ME reconstruction has been shown to provide superior noise suppression while mostly preserving de-emphasized structural noise near the baseline (relative to high signal information) Donoho et al. (1990).\nAccurate reconstructions can be approximated using a 1D projection of any underlying function, which is reduced to g(X) ∈ R^d such that x_i ∈ X Reis & Roberty (1992). As discussed in Section 1, the underlying functional representation of the input dataset is f(X); the difference between the true representation f(X) and the ME reconstruction approximation g(X) is the reconstruction loss for the input dataset. Results presented in Reis & Roberty (1992) indicate that reconstructions using accurate and noisy data had insignificantly small variations compared to the original, attesting to the noise-robust ability of using ME measures for reconstruction. This noise-averse characteristic of ME is especially important for race classification, as the lighting or ISO parameters of the input images can significantly affect the performance of CNN models. Reconstruction loss is described as a convolutional kernel data loss, whereas CCE can be characterized as a class label loss." }, { "heading": "3 MAXIMUM CATEGORICAL CROSS-ENTROPY (MCCE)", "text": "The classification of data in CNNs primarily depends on the convolutional kernels represented by their weights. Optimization of kernel weights using a loss function is performed to ensure a closer approximation to the underlying function f(X) is achieved. As discussed in Section 2.1, CCE is a measure of the difference between two probability distributions, the ground truth and the CNN computed label for a class C. The drawback of CCE is that it only considers class label errors and does not account for the distribution of the convolutional kernel weights. The estimation of kernel weight probability distributions is critical in knowing the state of model training and learning capacities, which could enhance classification performance, and MCCE is proposed to rectify this limitation.\nThe Maximum Categorical Cross-Entropy (MCCE) loss function monitors the data distribution of convolutional kernel weights using ME measures along with the traditional CCE loss and penalizes models which are overly complex. Apriori knowledge of the entropic distribution of the input data can be computed using ME measures, which is used as a baseline to monitor convolutional kernel weight distributions and penalize models with greater divergences. It is well understood that maximizing entropy measures using even partial information (such as from convolutional kernel weights) can enhance the estimation of probability distributions Macqueen & Marschak (1975), which are used extensively to calculate CCE loss.\nThe main criterion for producing a high quality reconstruction approximation is the incorporation of two-dimensional convolutions, which is traditionally a computational burden Wernecke et al. (1977).
CNN models implicitly use two-dimensional convolutions to produce feature maps; the computational overheads are therefore eliminated, making the computation of MCCE loss very efficient. Furthermore, using MCCE loss, an L1 difference can be calculated between the reconstruction approximation g(X) and the ground truth f(X) (the CCE error).

Reconstruction error can be calculated using the a priori determined ME of the input dataset and the reconstruction approximation g(X). Monitoring the divergence between the CCE error and the reconstruction error indicates the degree to which the convolutional kernel weights and predicted class labels have deviated from f(X), and can be used to determine whether the convolutional kernels are stuck in a global minimum/maximum, which would indicate that model learning has saturated." }, { "heading": "3.1 ALGORITHM", "text": "The pseudo-code for MCCE loss is presented as Algorithm 1; the 1D linear interpolation it requires for the reconstruction loss is described in Section 3.2.

Algorithm 1 Maximum Categorical Cross Entropy (MCCE)
1: Input: One-hot encoded ground truth (ytrue) and CNN predicted (ypred) class labels, a priori ME of training images in the dataset (i.e. ME(X)).
2: Output: Probabilistic logarithmic loss of ypred with the ground truth ytrue.
3: Initialize: Λ ← ME(X), µ ← ME(ω)
4: γ = −log(e^(s_p) / Σ_j^C e^(s_j)) {CCE loss, where s_p is the CNN score and s_j the ground truth for the class C}
5: κ = Λ − µ {Convolutional Reconstruction (CR) loss}
6: κ = Interpolation(κ, (0,Λ), (0,1)) {1D linear interpolation to output κ between 0 and 1 rather than in the range of 0 to Λ}
7: ∆ = γ + κ {Maximum loss = CCE loss + CR loss}
8: return ∆" }, { "heading": "3.2 1D LINEAR INTERPOLATION", "text": "A one-dimensional interpolation of the reconstruction error/loss is required because the MCCE loss is an extension of CCE loss, which outputs values between 0 and 1. A linear interpolant is the straight line between two known points given by their coordinates (a0, b0) and (a1, b1) Davis (1975). For any value i in the interval (a0, a1), the value of j along the straight line can be calculated using the equation of slopes given in Equation (3):

(j − b0)/(i − a0) = (b1 − b0)/(a1 − a0), or j = (b0(a1 − i) + b1(i − a0))/(a1 − a0) (3)
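A minimal Keras/TensorFlow sketch of Algorithm 1 follows. Note that the interpolation of Equation (3) with (a0, b0) = (0, 0) and (a1, b1) = (Λ, 1) reduces to κ/Λ. How ME(ω) is estimated from the kernel weights is not spelled out in the text, so the histogram-based estimator below is our assumption, as is recomputing it outside the training graph (e.g., in a per-batch callback).

```python
import numpy as np
import tensorflow as tf

def kernel_me(model, bins=256):
    # Assumed histogram-based estimate of ME(omega) over all Conv2D kernels.
    w = np.concatenate([layer.kernel.numpy().ravel() for layer in model.layers
                        if isinstance(layer, tf.keras.layers.Conv2D)])
    p, _ = np.histogram(w, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def mcce_loss(y_true, y_pred, data_me, weight_me):
    gamma = tf.keras.losses.categorical_crossentropy(y_true, y_pred)  # CCE loss
    kappa = data_me - weight_me  # Convolutional Reconstruction (CR) loss
    kappa = kappa / data_me      # Eq. (3) interpolation from (0, Lambda) to (0, 1)
    return gamma + kappa         # maximum loss = CCE + CR
```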
" }, { "heading": "4 EXPERIMENTATION", "text": "Experimentation revolved around quantitatively measuring train-test divergence to determine the degree of overfitting in CNN models trained with the traditional CCE loss and with our novel MCCE loss, using all of the techniques discussed in Section 2. Cross-validation tests, performed by interchanging the datasets, were also employed to ensure that CNN models trained using MCCE loss consistently generalize better. No modifications were made to our CNN model training regime compared to the original implementation presented in He et al. (2016), apart from using different testing hardware and software frameworks (Keras with a TensorFlow backend)." }, { "heading": "4.1 DATASETS", "text": "To determine the effect of racial bias and the efficacy of our novel MCCE loss function, we select a balanced dataset (UTKFace Zhang et al. (2017)), where each class of race/ethnicity has an equal number of images, and an unbalanced dataset (colorFERET Phillips et al. (1998)), where the distribution of data across the classes is unequal.

The colorFERET dataset contains 11,338 semi-controlled color images of 512×768 pixel size with 13 different poses from 994 test subjects. Due to our limited computing infrastructure, the images needed to be downsampled to 96×96 pixel resolution using cubic interpolation. The original dataset contains nine classes (Asian, Asian-Southern, Asian-Middle-Eastern, Black-or-African-American, White, Hispanic, Native-American, Other and Pacific-Islander). Due to the very limited number of test subjects and images for four of the nine classes, the dataset was reduced to five classes (Asian, Asian-Middle-Eastern, Black-or-African-American, White, Hispanic) containing a total of 11,172 images.

The original UTKFace dataset contains 23,708 in-the-wild color images of 200×200 pixel size with five ethnic classes (White, Black, Asian, Indian and Others) across all age groups. Only the OECD definition of the working-age population (15-64), consisting of 18,095 images, is considered, since within this range the facial variations are not severe enough to cause unexpected errors like misclassification or underfitting. The images used in our experimentation were likewise downsampled to 96×96 pixel resolution using cubic interpolation to accommodate our limited computing infrastructure." }, { "heading": "4.2 EXPERIMENTAL SETUP", "text": "All experiments presented in this paper were carried out on a single RTX 2080ti with 11GB of VRAM, generously provided by InfuseAI Limited (New Zealand). All models were trained from scratch, with the datasets randomly shuffled when reading from storage into memory and a 20% allocation of the shuffled dataset reserved for testing. The training data was again randomly shuffled during model training to mitigate any variability in the input data. This process was repeated for all three model training instances."
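Concretely, the preprocessing and splitting just described could look like the following minimal sketch, assuming OpenCV for the cubic downsampling and scikit-learn for the shuffled 20% test split; the function name and seed handling are our own.

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

def prepare_dataset(images, labels, seed):
    # Downsample each image to 96x96 using cubic interpolation (Section 4.1),
    # then shuffle and reserve 20% of the data for testing (Section 4.2).
    small = np.stack([cv2.resize(img, (96, 96), interpolation=cv2.INTER_CUBIC)
                      for img in images])
    return train_test_split(small, labels, test_size=0.2,
                            shuffle=True, random_state=seed)
```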
}, { "heading": "4.3 RESULTS", "text": "The results in Table 1 present the averaged classification performance from the three model training instances and the best test accuracy, using an early stopping patience of 10 monitored on the training accuracy. Table 1 also presents the cross-validation results and f scores to highlight bias in trained models on new data. MCCE trained models outperform CCE trained models in all of the measured criteria. All links to the datasets and experimental scripts used are available online (uploaded as supplementary material)." }, { "heading": "5 DISCUSSION", "text": "Analyzing the data presented in Table 1, we can clearly identify the effectiveness of MCCE in model training with respect to overfitting and generalizability. Classification bias is evaluated by examining the weighted f score determined using the confusion matrix. Models trained using MCCE outperform standard CCE models by almost two times. Perhaps more concerning is the fact that CCE trained models barely improve on the random chance of 20% (five classes) on the balanced UTKFace cross-validation test. Although training models using MCCE mitigates overfitting, as indicated by the relatively lower train-test difference, MCCE trained models still overfit to a limited degree. This tendency toward a lower degree of overfitting holds even for the cross-validation results, where MCCE models achieve a greater weighted f score and test accuracy.

Figures 3 and 4 illustrate the confusion matrices for the unbalanced colorFERET and balanced UTKFace datasets respectively. The MCCE algorithm provided enhanced classification performance for both datasets. The CCE loss was especially vulnerable to unbalanced datasets, suggesting that implicit biases with respect to the training set significantly corrupt the final results. Similar patterns of implicit bias can be observed for the MCCE loss, however they are not as pronounced as for CCE trained models, demonstrating the improved resilience of the MCCE algorithm.

MCCE trained models had a relatively higher loss of 1.36 and 1.5, compared to 0.35 and 0.38, for the colorFERET and UTKFace datasets respectively, suggesting that a greater degree of L2 kernel regularization is being employed by the MCCE algorithm. The higher loss indicates that the MCCE trained models have not fully converged, and greater improvements can be achieved with manual hyper-parameter fine-tuning. Furthermore, MCCE trained models generally converge faster, taking an average of 176 and 204 epochs compared to 199 and 268 epochs for the colorFERET and UTKFace datasets respectively. Faster convergence along with higher losses suggests an enhanced learning capacity of the MCCE models, which can be improved by exploring additional techniques that aid convergence, such as learning rate adjustments.

Figure 2 illustrates the convolutional kernel weight ME measures during model training for both datasets, which do not reach the 2.09 and 2.25 bits per pixel measures for the colorFERET and UTKFace datasets respectively. Examining Figure 2 highlights the MCCE model training process, where large divergences from the maximum are quickly penalized and corrected by weight adjustments during the back-propagation step. In Figure 5, we visualize the loss curves for the CCE and MCCE loss functions during model training. The MCCE and CCE loss functions generally follow the same curve relative to each other but differ in their final convergence; a similar pattern can be observed in Figure ?? for model training." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this paper, we proposed a novel extension to the commonly used Categorical Cross Entropy (CCE) loss function, known as Maximum Categorical Cross Entropy (MCCE). While CCE evaluates the probability distributions of the CNN predicted and ground truth class labels, MCCE extends this evaluation to include the entropic distribution of convolutional kernel weights during model training. MCCE provides a robust, noise-averse method of calculating model loss, since partial knowledge of the entropic distribution of the input data is determined a priori and large divergences from the maximum are penalized during model training.

MCCE loss takes into account both the label loss and the convolutional kernel weight distribution (reconstruction) loss, penalizing model training if either of these distributions diverges greatly from the optimum. MCCE loss has been empirically validated to reduce overfitting by 5.85% and 4.3% using a ResNet architecture on the colorFERET and UTKFace datasets respectively. Furthermore, MCCE has been shown to improve the generalizability of trained models in cross-validation testing, by 8.8% using the colorFERET-trained models on UTKFace and by 25.16% using the UTKFace-trained models on colorFERET. The knowledge of the entropic distribution of convolutional kernel weights during model training can be used to determine the state of convergence of the model. This state determination can be used to adjust other model parameters like learning rates, which we reserve for future work." } ]
2020
MAXIMUM CATEGORICAL CROSS ENTROPY (MCCE): A NOISE-ROBUST ALTERNATIVE LOSS FUNCTION TO MITIGATE RACIAL BIAS IN CONVOLUTIONAL NEURAL NETWORKS
SP:c6868fac7481cb241d9c5735f9184de9be9b72aa
[ "The paper presents a new interactive environment which is both text-based and contains visual simulation which are aligned. The authors also propose a first agent architecture which uses the visual observations as well as the text-based (named BUTLER). The authors tested the generalization capabilities of the proposed BUTLER architecture compared to a seq2seq transformer model." ]
Given a simple request like Put a washed apple in the kitchen fridge, humans can reason in purely abstract terms by imagining action sequences and scoring their likelihood of success, prototypicality, and efficiency, all without moving a muscle. Once we see the kitchen in question, we can update our abstract plans to fit the scene. Embodied agents require the same abilities, but existing work does not yet provide the infrastructure necessary for both reasoning abstractly and executing concretely. We address this limitation by introducing ALFWorld, a simulator that enables agents to learn abstract, text-based policies in TextWorld (Côté et al., 2018) and then execute goals from the ALFRED benchmark (Shridhar et al., 2020) in a rich visual environment. ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions. In turn, as we demonstrate empirically, this fosters better agent generalization than training only in the visually grounded environment. BUTLER’s simple, modular design factors the problem to allow researchers to focus on models for improving every piece of the pipeline (language understanding, planning, navigation, and visual scene understanding).
[ { "affiliations": [], "name": "Mohit Shridhar" }, { "affiliations": [], "name": "Xingdi Yuan" }, { "affiliations": [], "name": "Marc-Alexandre Côté" }, { "affiliations": [], "name": "Yonatan Bisk" }, { "affiliations": [], "name": "Adam Trischler" }, { "affiliations": [], "name": "Matthew Hausknecht" } ]
[ { "authors": [ "A. Adhikari", "X. Yuan", "Côté", "M.-A.", "M. Zelinka", "Rondeau", "M.-A.", "R. Laroche", "P. Poupart", "J. Tang", "A. Trischler", "W.L. Hamilton" ], "title": "Learning dynamic belief graphs to generalize on text-based games", "venue": "Neural Information Processing Systems (NeurIPS).", "year": 2020 }, { "authors": [ "P. Ammanabrolu", "M. Hausknecht" ], "title": "Graph constrained reinforcement learning for natural language action spaces", "venue": "International Conference on Learning Representations.", "year": 2020 }, { "authors": [ "P. Anderson", "Q. Wu", "D. Teney", "J. Bruce", "M. Johnson", "N. Sünderhauf", "I. Reid", "S. Gould", "A. van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "P. Anderson", "Q. Wu", "D. Teney", "J. Bruce", "M. Johnson", "N. Sünderhauf", "I. Reid", "S. Gould", "A. van den Hengel" ], "title": "Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2018 }, { "authors": [ "S. Antol", "A. Agrawal", "J. Lu", "M. Mitchell", "D. Batra", "C.L. Zitnick", "D. Parikh" ], "title": "VQA: Visual Question Answering", "venue": "International Conference on Computer Vision (ICCV).", "year": 2015 }, { "authors": [ "M. Asai", "A. Fukunaga" ], "title": "Classical planning in deep latent space: Bridging the subsymbolicsymbolic boundary", "venue": "arXiv preprint arXiv:1705.00154.", "year": 2017 }, { "authors": [ "L.J. Ba", "J.R. Kiros", "G.E. Hinton" ], "title": "Layer normalization", "venue": "CoRR, abs/1607.06450.", "year": 2016 }, { "authors": [ "B. Banks", "C. Wingfield", "L. Connell" ], "title": "Linguistic distributional knowledge and sensorimotor grounding both contribute to semantic category production", "venue": null, "year": 2020 }, { "authors": [ "Y. Bisk", "A. Holtzman", "J. Thomason", "J. Andreas", "Y. Bengio", "J. Chai", "M. Lapata", "A. Lazaridou", "J. May", "A. Nisnevich", "N. Pinto", "J. Turian" ], "title": "Experience Grounds Language", "venue": "Empirical Methods in Natural Language Processing.", "year": 2020 }, { "authors": [ "M. Chevalier-Boisvert", "D. Bahdanau", "S. Lahlou", "L. Willems", "C. Saharia", "T.H. Nguyen", "Y. Bengio" ], "title": "BabyAI: First steps towards grounded language learning with a human in the loop", "venue": "International Conference on Learning Representations.", "year": 2019 }, { "authors": [ "K. Cho", "B. van Merriënboer", "C. Gulcehre", "D. Bahdanau", "F. Bougares", "H. Schwenk", "Y. Bengio" ], "title": "Learning phrase representations using RNN encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "year": 2014 }, { "authors": [ "Côté", "M.-A.", "A. Kádár", "X. Yuan", "B. Kybartas", "T. Barnes", "E. Fine", "J. Moore", "R.Y. Tao", "M. Hausknecht", "L.E. Asri", "M. Adada", "W. Tay", "A. Trischler" ], "title": "Textworld: A learning environment for text-based games", "venue": "CoRR, abs/1806.11532.", "year": 2018 }, { "authors": [ "A. Das", "S. Datta", "G. Gkioxari", "S. Lee", "D. Parikh", "D. 
Batra" ], "title": "Embodied Question Answering", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "year": 2018 }, { "authors": [ "G. Dove" ], "title": "Thinking in words: language as an embodied medium of thought", "venue": "Topics in cognitive science, 6(3):371–389.", "year": 2014 }, { "authors": [ "S. Gehrmann", "Y. Deng", "A. Rush" ], "title": "Bottom-up abstractive summarization", "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.", "year": 2018 }, { "authors": [ "D. Gordon", "A. Kembhavi", "M. Rastegari", "J. Redmon", "D. Fox", "A. Farhadi" ], "title": "Iqa: Visual question answering in interactive environments", "venue": "Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on.", "year": 2018 }, { "authors": [ "C. Gulcehre", "S. Ahn", "R. Nallapati", "B. Zhou", "Y. Bengio" ], "title": "Pointing the unknown words", "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", "year": 2016 }, { "authors": [ "D. Ha", "J. Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "Advances in Neural Information Processing Systems 31.", "year": 2018 }, { "authors": [ "M. Hausknecht", "P. Stone" ], "title": "Deep recurrent q-learning for partially observable mdps", "venue": "arXiv preprint arXiv:1507.06527.", "year": 2015 }, { "authors": [ "M.J. Hausknecht", "P. Ammanabrolu", "Côté", "M.-A.", "X. Yuan" ], "title": "Interactive fiction games: A colossal adventure", "venue": "AAAI.", "year": 2020 }, { "authors": [ "K. He", "G. Gkioxari", "P. Dollár", "R. Girshick" ], "title": "Mask r-cnn", "venue": "Proceedings of the IEEE international conference on computer vision.", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition.", "year": 2016 }, { "authors": [ "M. Helmert" ], "title": "The Fast Downward planning system", "venue": "Journal of Artificial Intelligence Research.", "year": 2006 }, { "authors": [ "H. Hu", "D. Yarats", "Q. Gong", "Y. Tian", "M. Lewis" ], "title": "Hierarchical decision making by generating and following natural language instructions", "venue": "Advances in Neural Information Processing Systems.", "year": 2019 }, { "authors": [ "R. Hu", "M. Rohrbach", "J. Andreas", "T. Darrell", "K. Saenko" ], "title": "Modeling relationships in referential expressions with compositional modular networks", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", "year": 2017 }, { "authors": [ "Y. Jiang", "S.S. Gu", "K.P. Murphy", "C. Finn" ], "title": "Language as an abstraction for hierarchical deep reinforcement learning", "venue": "Advances in Neural Information Processing Systems.", "year": 2019 }, { "authors": [ "J. Johnson", "B. Hariharan", "L. van der Maaten", "L. Fei-Fei", "C.L. Zitnick", "R. Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": null, "year": 2017 }, { "authors": [ "J. Johnson", "A. Karpathy", "L. Fei-Fei" ], "title": "Densecap: Fully convolutional localization networks for dense captioning", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition.", "year": 2016 }, { "authors": [ "L.P. Kaelbling", "T. 
Lozano-Pérez" ], "title": "Hierarchical task and motion planning in the now", "venue": "2011 IEEE International Conference on Robotics and Automation, pages 1470–1477. IEEE.", "year": 2011 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980.", "year": 2014 }, { "authors": [ "E. Kolve", "R. Mottaghi", "W. Han", "E. VanderBilt", "L. Weihs", "A. Herrasti", "D. Gordon", "Y. Zhu", "A. Gupta", "A. Farhadi" ], "title": "Ai2-thor: An interactive 3d environment for visual ai", "venue": "arXiv preprint arXiv:1712.05474.", "year": 2017 }, { "authors": [ "G. Konidaris", "L.P. Kaelbling", "T. Lozano-Perez" ], "title": "From skills to symbols: Learning symbolic representations for abstract high-level planning", "venue": "Journal of Artificial Intelligence Research, 61:215–289.", "year": 2018 }, { "authors": [ "T.D. Kulkarni", "W.F. Whitney", "P. Kohli", "J. Tenenbaum" ], "title": "Deep convolutional inverse graphics network", "venue": "Advances in neural information processing systems.", "year": 2015 }, { "authors": [ "H. Küttler", "N. Nardelli", "A.H. Miller", "R. Raileanu", "M. Selvatici", "E. Grefenstette", "T. Rocktäschel" ], "title": "The nethack learning environment", "venue": null, "year": 2020 }, { "authors": [ "Lin", "T.-Y.", "M. Maire", "S. Belongie", "J. Hays", "P. Perona", "D. Ramanan", "P. Dollár", "C.L. Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "European conference on computer vision.", "year": 2014 }, { "authors": [ "J. Lu", "D. Batra", "D. Parikh", "S. Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "Advances in Neural Information Processing Systems.", "year": 2019 }, { "authors": [ "M. MacMahon", "B. Stankiewicz", "B. Kuipers" ], "title": "Walk the talk: Connecting language, knowledge, and action in route instructions", "venue": "Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-2006).", "year": 2006 }, { "authors": [ "A. Marzoev", "S. Madden", "M.F. Kaashoek", "M. Cafarella", "J. Andreas" ], "title": "Unnatural language processing: Bridging the gap between synthetic and natural language data", "venue": "arXiv preprint arXiv:2004.13645.", "year": 2020 }, { "authors": [ "D. McDermott", "M. Ghallab", "A. Howe", "C. Knoblock", "A. Ram", "M. Veloso", "D. Weld", "D. Wilkins" ], "title": "Pddl-the planning domain definition language", "venue": null, "year": 1998 }, { "authors": [ "K. Narasimhan", "R. Barzilay", "T. Jaakkola" ], "title": "Grounding language for transfer in deep reinforcement learning", "venue": "JAIR, 63(1):849–874.", "year": 2018 }, { "authors": [ "O. Press", "L. Wolf" ], "title": "Using the output embedding to improve language models", "venue": "arXiv preprint arXiv:1608.05859.", "year": 2016 }, { "authors": [ "Reddy", "D. R" ], "title": "Speech understanding systems: A summary of results of the five-year research effort", "venue": "Department of Computer Science. Camegie-Mell University, Pittsburgh, PA, 17.", "year": 1977 }, { "authors": [ "S. Ross", "G. Gordon", "D. Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics.", "year": 2011 }, { "authors": [ "V. Sanh", "L. Debut", "J. Chaumond", "T. 
Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": "arXiv preprint arXiv:1910.01108.", "year": 2019 }, { "authors": [ "E. Schwartz", "G. Tennenholtz", "C. Tessler", "S. Mannor" ], "title": "Language is power: Representing states using natural language in reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "S. Sharma", "L.E. Asri", "H. Schulz", "J. Zumer" ], "title": "Relevance of unsupervised metrics in taskoriented dialogue for evaluating natural language generation", "venue": "arXiv preprint arXiv:1706.09799.", "year": 2017 }, { "authors": [ "M. Shridhar", "J. Thomason", "D. Gordon", "Y. Bisk", "W. Han", "R. Mottaghi", "L. Zettlemoyer", "D. Fox" ], "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10740–10749.", "year": 2020 }, { "authors": [ "C. Sun", "A. Myers", "C. Vondrick", "K. Murphy", "C. Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "Proceedings of the IEEE International Conference on Computer Vision.", "year": 2019 }, { "authors": [ "K. Tang", "Y. Niu", "J. Huang", "J. Shi", "H. Zhang" ], "title": "Unbiased scene graph generation from biased training", "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.", "year": 2020 }, { "authors": [ "E. Todorov", "T. Erez", "Y. Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.", "year": 2012 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "Kaiser", "L. u.", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems 30.", "year": 2017 }, { "authors": [ "X. Wang", "Q. Huang", "A. Celikyilmaz", "J. Gao", "D. Shen", "Wang", "Y.-F.", "W.Y. Wang", "L. Zhang" ], "title": "Reinforced cross-modal matching and self-supervised imitation learning for visionlanguage navigation", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", "year": 2019 }, { "authors": [ "J. Wu", "E. Lu", "P. Kohli", "B. Freeman", "J. Tenenbaum" ], "title": "Learning to see physics via visual de-animation", "venue": "Advances in Neural Information Processing Systems.", "year": 2017 }, { "authors": [ "A.W. Yu", "D. Dohan", "Q. Le", "T. Luong", "R. Zhao", "K. Chen" ], "title": "Fast and accurate reading comprehension by combining self-attention and convolution", "venue": "International Conference on Learning Representations.", "year": 2018 }, { "authors": [ "L. Yu", "Z. Lin", "X. Shen", "J. Yang", "X. Lu", "M. Bansal", "T.L. Berg" ], "title": "Mattnet: Modular attention network for referring expression comprehension", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", "year": 2018 }, { "authors": [ "X. Yuan", "Côté", "M.-A.", "A. Sordoni", "R. Laroche", "Combes", "R.T. d.", "M. Hausknecht", "A. Trischler" ], "title": "Counting to explore and generalize in text-based games", "venue": "arXiv preprint arXiv:1806.11525.", "year": 2018 }, { "authors": [ "V. Zhong", "T. Rocktäschel", "E. Grefenstette" ], "title": "RTFM: Generalising to novel environment dynamics via reading", "venue": "ICLR.", "year": 2020 }, { "authors": [ "Y. Zhu", "D. Gordon", "E. Kolve", "D. Fox", "L. Fei-Fei", "A. 
Gupta", "R. Mottaghi", "A. Farhadi" ], "title": "Visual semantic planning using deep successor representations", "venue": "IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017.", "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION TextWorld Embodied\nWelcome!\nYou are in the middle of the room. Looking around you, you see a diningtable, a stove, a microwave, and a cabinet.\nYour task is to: Put a pan on the diningtable.\n> goto the cabinet\nYou arrive at the cabinet. The cabinet is closed.\n> open the cabinet\nThe cabinet is empty.\n> goto the stove\nYou arrive at the stove. Near the stove, you see a pan, a pot, a bread loaf, a lettuce, and a winebottle.\n> take the pan from the stove\nYou take the pan from the stove.\n> goto the diningtable\nYou arrive at the diningtable.\n> put the pan on the diningtable You put the pan on the diningtable.\nFigure 1: ALFWorld: Interactive aligned text and embodied worlds. An example with high-level text actions (left) and low-level physical actions (right).\nConsider helping a friend prepare dinner in an unfamiliar house: when your friend asks you to clean and slice an apple for an appetizer, how would you approach the task? Intuitively, one could reason abstractly: (1) find an apple (2) wash the apple in the sink (3) put the clean apple on the cutting board (4) find a knife (5) use the knife to slice the apple (6) put the slices in a bowl. Even in an unfamiliar setting, abstract reasoning can help accomplish the goal by leveraging semantic priors. Priors like locations of objects – apples are commonly found in the kitchen along with implements for cleaning and slicing, object affordances – a sink is useful for washing an apple unlike a refrigerator, pre-conditions – better to wash an apple before slicing it, rather than the converse. We hypothesize that, learning to solve tasks using abstract language, unconstrained by the particulars of the physical world, enables agents to complete embodied tasks in novel environments by leveraging the kinds of semantic priors that are exposed by abstraction and interaction.\nTo test this hypothesis, we have created the novel ALFWorld framework, the first interactive, parallel environment that aligns text descriptions and commands with physically embodied robotic simulation. We build ALFWorld by extending two prior works: TextWorld (Côté et al., 2018) - an engine for interactive text-based games, and ALFRED (Shridhar et al., 2020) - a large scale dataset for visionlanguage instruction following in embodied environments. ALFWorld provides two views of the same underlying world and two modes by which to interact with it: TextWorld, an abstract, text-based environment, generates textual observations of the world and responds to high-level text actions; ALFRED, the embodied simulator, renders the world in high-dimensional images and responds to low-level physical actions as from a robot (Figure 1).1 Unlike prior work on instruction following (MacMahon et al., 2006; Anderson et al., 2018a), which typically uses a static corpus of cross-modal expert demonstrations, we argue that aligned parallel environments like ALFWorld offer a distinct advantage: they allow agents to explore, interact, and learn in the abstract environment of language before encountering the complexities of the embodied environment.\nWhile fields such as robotic control use simulators like MuJoCo (Todorov et al., 2012) to provide infinite data through interaction, there has been no analogous mechanism – short of hiring a human around the clock – for providing linguistic feedback and annotations to an embodied agent. TextWorld addresses this discrepancy by providing programmatic and aligned linguistic signals during agent exploration. 
This facilitates the first work, to our knowledge, in which an embodied agent learns the meaning of complex multi-step policies, expressed in language, directly through interaction.

Empowered by the ALFWorld framework, we introduce BUTLER (Building Understanding in Textworld via Language for Embodied Reasoning), an agent that first learns to perform abstract tasks in TextWorld using Imitation Learning (IL) and then transfers the learned policies to embodied tasks in ALFRED. When operating in the embodied world, BUTLER leverages the abstract understanding gained from TextWorld to generate text-based actions; these serve as high-level subgoals that facilitate physical action generation by a low-level controller. Broadly, we find that BUTLER is capable of generalizing in a zero-shot manner from TextWorld to unseen embodied tasks and settings. Our results show that training first in the abstract text-based environment is not only 7× faster, but also yields better performance than training from scratch in the embodied world. These results lend credibility to the hypothesis that solving abstract language-based tasks can help build priors that enable agents to generalize to unfamiliar embodied environments.

Our contributions are as follows:
§ 2 ALFWorld environment: The first parallel interactive text-based and embodied environment.
§ 3 BUTLER architecture: An agent that learns high-level policies in language that transfer to low-level embodied executions, and whose modular components can be independently upgraded.
§ 4 Generalization: We demonstrate empirically that BUTLER, trained in the abstract text domain, generalizes better to unseen embodied settings than agents trained from corpora of demonstrations or from scratch in the embodied world." }, { "heading": "2 ALIGNING ALFRED AND TEXTWORLD", "text": "The ALFRED dataset (Shridhar et al., 2020), set in the THOR simulator (Kolve et al., 2017), is a benchmark for learning to complete embodied household tasks using natural language instructions and egocentric visual observations. As shown in Figure 1 (right), ALFRED tasks pose challenging interaction and navigation problems to an agent in a high-fidelity simulated environment. Tasks are annotated with a goal description that describes the objective (e.g., “put a pan on the dining table”). We consider both template-based and human-annotated goals; further details on goal specification can be found in Appendix H. Agents observe the world through high-dimensional pixel images and interact using low-level action primitives: MOVEAHEAD, ROTATELEFT/RIGHT, LOOKUP/DOWN, PICKUP, PUT, OPEN, CLOSE, and TOGGLEON/OFF.

Footnote 1: Throughout this work, for clarity of exposition, we use ALFRED to refer to both tasks and the grounded simulation environment, but rendering and physics are provided by THOR (Kolve et al., 2017).

The ALFRED dataset also includes crowdsourced language instructions like “turn around and walk over to the microwave” that explain how to complete a goal in a step-by-step manner. We depart from the ALFRED challenge by omitting these step-by-step instructions and focusing on the more difficult problem of using only goal descriptions specifying what needs to be achieved.

Our aligned ALFWorld framework adopts six ALFRED task-types (Table 1) of various difficulty levels (see Footnote 2). Tasks involve first finding a particular object, which often requires the agent to open and search receptacles like drawers or cabinets.
Subsequently, all tasks other than Pick & Place require some interaction with the object, such as heating (place the object in a microwave and start it) or cleaning (wash the object in a sink). To complete the task, the object must be placed in the designated location.

Within each task category there is significant variation: the embodied environment includes 120 rooms (30 kitchens, 30 bedrooms, 30 bathrooms, 30 living rooms), each dynamically populated with a set of portable objects (e.g., apple, mug) and static receptacles (e.g., microwave, fridge). For each task type, we construct a larger train set, as well as seen and unseen validation evaluation sets: (1) seen consists of known task instances {task-type, object, receptacle, room} in rooms seen during training, but with different instantiations of object locations, quantities, and visual appearances (e.g. two blue pencils on a shelf instead of three red pencils in a drawer seen in training). (2) unseen consists of new task instances with possibly known object-receptacle pairs, but always in unseen rooms with different receptacles and scene layouts than in training tasks.

The seen set is designed to measure in-distribution generalization, whereas the unseen set measures out-of-distribution generalization. The scenes in ALFRED are visually diverse, so even the same task instance can lead to very distinct tasks, e.g., involving differently colored apples, shaped statues, or textured cabinets. For this reason, purely vision-based agents such as the unimodal baselines in Section 5.2 often struggle to generalize to unseen environments and objects.

The TextWorld framework (Côté et al., 2018) procedurally generates text-based environments for training and evaluating language-based agents. In order to extend TextWorld to create text-based analogs of each ALFRED scene, we adopt a common latent structure representing the state of the simulated world. ALFWorld uses PDDL - Planning Domain Definition Language (McDermott et al., 1998) - to describe each scene from ALFRED and to construct an equivalent text game using the TextWorld engine. The dynamics of each game are defined by the PDDL domain (see Appendix C for additional details). Textual observations shown in Figure 1 are generated with templates sampled from a context-sensitive grammar designed for the ALFRED environments. For interaction, TextWorld environments use the following high-level actions:

goto {recep}
take {obj} from {recep}
put {obj} in/on {recep}
open {recep}
close {recep}
toggle {obj} {recep}
clean {obj} with {recep}
heat {obj} with {recep}
cool {obj} with {recep}

where {obj} and {recep} correspond to objects and receptacles. Note that heat, cool, clean, and goto are high-level actions that correspond to several low-level embodied actions.

ALFWorld, in summary, is a cross-modal framework featuring a diversity of embodied tasks with analogous text-based counterparts. Since both components are fully interactive, agents may be trained in either the language or embodied world and evaluated on held-out test tasks in either modality. We believe the equivalence between objects and interactions across modalities makes ALFWorld an ideal framework for studying language grounding and cross-modal learning.
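A sketch of a generic interaction loop over these high-level text actions is given below; the env.reset()/env.step() interface and its return values are illustrative assumptions rather than ALFWorld's actual API.

```python
COMMANDS = ["goto {recep}", "take {obj} from {recep}", "put {obj} in/on {recep}",
            "open {recep}", "close {recep}", "toggle {obj} {recep}",
            "clean {obj} with {recep}", "heat {obj} with {recep}",
            "cool {obj} with {recep}"]

def run_episode(env, agent, max_steps=50):
    # The agent reads the textual observation and goal, and replies with
    # an instance of one of the high-level commands above,
    # e.g. "take pan 1 from stove 1".
    obs, goal = env.reset()
    for _ in range(max_steps):
        action = agent.act(obs, goal)
        obs, done = env.step(action)  # textual feedback from the game engine
        if done:
            return True
    return False
```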
" }, { "heading": "3 INTRODUCING BUTLER: AN EMBODIED MULTI-TASK AGENT", "text": "We investigate learning in the abstract language modality before generalizing to the embodied setting. The BUTLER agent uses three components to span the language and embodied modalities: BUTLER::BRAIN – the abstract text agent, BUTLER::VISION – the language state estimator, and BUTLER::BODY – the low-level controller. An overview of BUTLER is shown in Figure 2 and each component is described below.

Footnote 2: To start with, we focus on a subset of the ALFRED dataset for training and evaluation that excludes tasks involving slicing objects or using portable containers (e.g., bowls).

3.1 BUTLER::BRAIN (TEXT AGENT) ∶ o0, ot, g → at

BUTLER::BRAIN is a novel text-based game agent that generates high-level text actions in a token-by-token fashion, akin to Natural Language Generation (NLG) approaches for dialogue (Sharma et al., 2017) and summarization (Gehrmann et al., 2018). An overview of the agent's architecture is shown in Figure 3. At game step t, the encoder takes the initial text observation o0, the current observation ot, and the goal description g as input and generates a context-aware representation of the current observable game state. The observation o0 explicitly lists all the navigable receptacles in the scene, and the goal g is sampled from a set of language templates (see Appendix H). Since the games are partially observable, the agent only has access to the observation describing the effects of its previous action and its present location. Therefore, we incorporate two memory mechanisms to imbue the agent with history: (1) a recurrent aggregator, adapted from Yuan et al. (2018), combines the encoded state with the recurrent state ht−1 from the previous game step; (2) an observation queue feeds in the k most recent, unique textual observations. The decoder generates an action sentence at token-by-token to interact with the game. The encoder and decoder are based on a Transformer Seq2Seq model with a pointer softmax mechanism (Gulcehre et al., 2016). We leverage pre-trained BERT embeddings (Sanh et al., 2019), and tie output embeddings with input embeddings (Press and Wolf, 2016). The agent is trained in an imitation learning setting with DAgger (Ross et al., 2011) using expert demonstrations. See Appendix A for complete details.

When solving a task, an agent might get stuck at certain states due to various failures (e.g., the action is grammatically incorrect, or uses a wrong object name). The observation for a failed action does not contain any useful feedback, so a fully deterministic actor tends to repeatedly produce the same incorrect action. To address this problem, during evaluation in both TextWorld and ALFRED, BUTLER::BRAIN uses Beam Search (Reddy, 1977) to generate alternative action sentences in the event of a failed action, but otherwise greedily picks a sequence of best words for efficiency. Note that Beam Search is not used to optimize over embodied interactions as in prior work (Wang et al., 2019), but rather simply to improve the generated action sentence during failures.

3.2 BUTLER::VISION (STATE ESTIMATOR) ∶ vt → ot

At test time, agents in the embodied world must operate purely from visual input. To this end, BUTLER::VISION's language state estimator functions as a captioning module that translates visual observations vt into textual descriptions ot. Specifically, we use a pre-trained Mask R-CNN detector (He et al., 2017) to identify objects in the visual frame. The detector is trained separately in a supervised setting with random frames from ALFRED training scenes (see Appendix D). For each frame vt, the detector generates N detections {(c1, m1), (c2, m2), . . . , (cN, mN)}, where cn is the predicted object class and mn is a pixel-wise object mask. These detections are formatted into a sentence using a template, e.g., On table 1, you see a mug 1, a tomato 1, and a tomato 2. To handle multiple instances of objects, each object is associated with a class cn and a number ID, e.g., tomato 1. Commands goto, open, and examine generate a list of detections, whereas all other commands generate affirmative responses if the action succeeds, e.g., at: put mug 1 on desk 2 → ot+1: You put mug 1 on desk 2, and otherwise produce Nothing happens to indicate failure or no state change. See Appendix G for a full list of templates. While this work presents preliminary results with template-based descriptions, future work could generate more descriptive observations using pre-trained image-captioning models (Johnson et al., 2016), video-action captioning frameworks (Sun et al., 2019), or scene-graph parsers (Tang et al., 2020).
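A minimal sketch of this template-based captioning step follows; the function name and the instance-numbering logic are our own, but the output format matches the example above.

```python
from collections import Counter

def caption(receptacle, detected_classes):
    # Turn Mask R-CNN class predictions into a templated text observation,
    # numbering repeated instances: mug 1, tomato 1, tomato 2, ...
    counts = Counter()
    phrases = []
    for c in detected_classes:
        counts[c] += 1
        phrases.append(f"a {c} {counts[c]}")
    if not phrases:
        return f"On {receptacle}, you see nothing."
    if len(phrases) == 1:
        return f"On {receptacle}, you see {phrases[0]}."
    return f"On {receptacle}, you see {', '.join(phrases[:-1])}, and {phrases[-1]}."

# caption("table 1", ["mug", "tomato", "tomato"])
# -> "On table 1, you see a mug 1, a tomato 1, and a tomato 2."
```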
3.3 BUTLER::BODY (CONTROLLER) ∶ vt, at → {â1, â2, . . . , âL}

The controller translates a high-level text action at into a sequence of L low-level physical actions {â1, â2, . . . , âL} that are executable in the embodied environment. The controller handles two types of commands: manipulation and navigation. For manipulation actions, we use the ALFRED API to interact with the simulator by providing an API action and a pixel-wise mask based on the Mask R-CNN detections mn that were produced during state estimation. For navigation commands, each episode is initialized with a pre-built grid-map of the scene, where each receptacle instance is associated with a receptacle class and an interaction viewpoint (x, y, θ, φ), with x and y representing the 2D position, and θ and φ representing the agent's yaw rotation and camera tilt. The goto command invokes an A* planner to find the shortest path between two viewpoints. The planner outputs a sequence of L displacements in terms of motion primitives: MOVEAHEAD, ROTATERIGHT, ROTATELEFT, LOOKUP, and LOOKDOWN, which are executed in an open-loop fashion via the ALFRED API. We note that a given pre-built grid-map of receptacle locations is a strong prior assumption, but future work could incorporate existing models from the vision-language navigation literature (Anderson et al., 2018a; Wang et al., 2019) for map-free navigation." }, { "heading": "4 EXPERIMENTS", "text": "We design experiments to answer the following questions: (1) How important is an interactive language environment versus a static corpus? (2) Do policies learnt in TextWorld transfer to embodied environments? (3) Can policies generalize to human-annotated goals? (4) Does pre-training in an abstract textual environment enable better generalization in the embodied world?" }, { "heading": "4.1 IMPORTANCE OF INTERACTIVE LANGUAGE", "text": "The first question addresses our core hypothesis that training agents in interactive TextWorld environments leads to better generalization than training agents with a static linguistic corpus. To test this hypothesis, we use DAgger (Ross et al., 2011) to train the BUTLER::BRAIN agent in TextWorld and compare it against Seq2Seq, an identical agent trained with Behavior Cloning from an equivalently-sized corpus of expert demonstrations. The demonstrations come from the same expert policies, and we control the number of episodes to ensure a fair comparison. Table 2 presents results for agents trained in TextWorld and subsequently evaluated in embodied environments in a zero-shot manner.
The agents are trained independently on individual tasks and also jointly on all six task types. For each task category, we select the agent with the best evaluation performance in TextWorld (from 8 random seeds); this is done separately for each split: seen and unseen. These best-performing agents are then evaluated on the held-out seen and unseen embodied ALFRED tasks. For embodied evaluations, we also report goal-condition success rates, a metric proposed in ALFRED (Shridhar et al., 2020) to measure partial goal completion (see Footnote 3).

Footnote 3: For instance, the task “put a hot potato on the countertop” is composed of three goal-conditions: (1) heating some object, (2) putting a potato on the countertop, (3) heating a potato and putting it on the countertop. If the agent manages to put any potato on the countertop, then 1/3 = 0.33 goal-conditions are satisfied, and so on.

Comparing BUTLER to Seq2Seq, we see improved performance on all types of seen tasks and five of the seven types of unseen tasks, supporting the hypothesis that interactive TextWorld training is a key component in generalizing to unseen embodied tasks. Interactive language not only allows agents to explore and build an understanding of successful action patterns, but also to recover from mistakes. Through trial-and-error, the BUTLER agent learns task-guided heuristics, e.g., searching all the drawers in the kitchen to look for a knife. As Table 2 shows, these heuristics are subsequently more capable of generalizing to the embodied world. More details on TextWorld training and generalization performance can be found in Section 5.1." }, { "heading": "4.2 TRANSFERRING TO EMBODIED TASKS", "text": "Since TextWorld is an abstraction of the embodied world, transferring between modalities involves overcoming domain gaps that are present in the real world but not in TextWorld. For example, the physical size of objects and receptacles must be respected – while TextWorld will allow certain objects to be placed inside any receptacle, in the embodied world it might be impossible to put a larger object into a small receptacle (e.g. a large pot into a microwave).

Subsequently, a TextWorld-trained agent's ability to solve embodied tasks is hindered by these domain gaps. So, to study the transferability of the text agent in isolation, we introduce BUTLER-ORACLE in Table 2, an oracle variant of BUTLER which uses perfect state-estimation, object-detection, and navigation. Despite these advantages, we nevertheless observe a notable drop in performance from TextWorld to BUTLER-ORACLE. This performance gap results from the domain gaps described above, as well as misdetections from Mask R-CNN and navigation failures caused by collisions. Future work might address this issue by reducing the domain gap between the two environments, or by performing additional fine-tuning in the embodied setting.

The supplementary video contains qualitative examples of the BUTLER agent solving tasks in unseen environments. It showcases 3 successes and 1 failure of a TextWorld-only agent trained on All Tasks. In “put a watch in the safe”, the agent has never seen the ‘watch’-‘safe’ combination as a goal." }, { "heading": "4.3 GENERALIZING TO HUMAN-ANNOTATED GOALS", "text": "BUTLER is trained with templated language, but in realistic scenarios, goals are often posed with open-ended natural language.
In Table 2, we present Human Goals results of BUTLER evaluated on human-annotated ALFRED goals, which contain 66 unseen verbs (e.g., ‘wash’, ‘grab’, ‘chill’) and 189 unseen nouns (e.g., ‘rag’, ‘lotion’, ‘disc’; see Appendix H for the full list). Surprisingly, we find non-trivial goal-completion rates, indicating that certain categories of tasks, such as pick and place, are quite generalizable to human language. While these preliminary results with natural language are encouraging, we expect future work could augment the templated language with synthetic-to-real transfer methods (Marzoev et al., 2020) for better generalization." }, { "heading": "4.4 TO PRETRAIN OR NOT TO PRETRAIN IN TEXTWORLD?", "text": "Given the domain gap between TextWorld and the embodied world, why not eliminate this gap by training from scratch in the embodied world? To answer this question, we investigate three training strategies: (i) EMBODIED-ONLY: pure embodied training, (ii) TW-ONLY: pure TextWorld training followed by zero-shot embodied transfer, and (iii) HYBRID: training that switches between the two environments, with 75% probability for TextWorld and 25% for the embodied world. Table 3 presents success rates for these agents trained and evaluated on All Tasks. All evaluations were conducted with an oracle state-estimator and controller. For a fair comparison, each agent is trained for 50K episodes, and the training speed is recorded for each strategy. We report peak performance for each split.

Results indicate that TW-ONLY generalizes better to unseen environments, while EMBODIED-ONLY quickly overfits to seen environments (even with a perfect object detector and teleport navigator). We hypothesize that the abstract TextWorld environment allows the agent to focus on quickly learning tasks without having to deal with execution failures and expert failures caused by physical constraints inherent to embodied environments. TextWorld training is also 7× faster (see Footnote 4), since it does not require running a rendering or physics engine as in the embodied setting. See Section F for more quantitative evaluations of the benefits of training in TextWorld.

Footnote 4: For a fair comparison, all agents in Table 3 use a batch-size of 10. THOR instances use 100MB×batch-size of GPU memory for rendering, whereas TextWorld instances are CPU-only and are thus much easier to scale up." }, { "heading": "5 ABLATIONS", "text": "We conduct ablation studies to further investigate: (1) the generalization performance of BUTLER::BRAIN within TextWorld environments, (2) the ability of unimodal agents to learn directly through visual observations or action history, and (3) the importance of various hyper-parameters and modeling choices for the performance of BUTLER::BRAIN." }, { "heading": "5.1 GENERALIZATION WITHIN TEXTWORLD", "text": "We train and evaluate BUTLER::BRAIN in abstract TextWorld environments spanning the six tasks in Table 1, as well as All Tasks. Similar to the zero-shot results presented in Section 4.1, the All Tasks setting shows the extent to which a single policy can learn and generalize on the large set of 3,553 different tasks, but here without having to deal with failures from embodied execution.

We first experimented with training BUTLER::BRAIN through reinforcement learning (RL), where the agent is rewarded after completing a goal. Due to the infeasibility of using candidate commands or command templates, as discussed in Section I, the RL agent had to generate actions token-by-token.
Since the probability of randomly stumbling upon a grammatically correct and contextually valid action is very low (7.02e-44 for sequence length 10), the RL agent struggled to make any meaningful progress towards the tasks.

After concluding that current reinforcement learning approaches were not successful on our set of training tasks, we turned to DAgger (Ross et al., 2011), assisted by a rule-based expert (detailed in Appendix E). BUTLER::BRAIN is trained for 100K episodes using data collected by interacting with the set of training games.

Results in Table 4 show: (i) Training success rates vary from 16-60% depending on the category of tasks, illustrating the challenge of solving hundreds to thousands of training tasks within each category. (ii) Transferring from training to held-out test games typically reduces performance, with the unseen rooms leading to the largest performance drops. Notable exceptions include heat and cool tasks, where unseen performance exceeds training performance. (iii) Beam search is a key contributor to test performance; its ablation causes a performance drop of 21% on the seen split of All Tasks. (iv) Further ablating the DAgger strategy and directly training a Sequence-to-Sequence (Seq2Seq) model
Keeping the initial observation o0 facilitates the decoder to generate receptacle words more accurately for unseen tasks, but may be unnecessary in seen environments. The third column suggests that the recurrent component in our aggregator is helpful in making history-based decisions\nparticularly in seen environments where keeping track of object locations is useful. Finally, in the fourth column, we see that using more training games can lead to better generalizability in both seen and unseen settings. Fewer training games achieve high training scores by quickly overfitting, which lead to zero evaluation scores." }, { "heading": "6 RELATED WORK", "text": "The longstanding goal of grounding language learning in embodied settings (Bisk et al., 2020) has lead to substantial work on interactive environments. ALFWorld extends that work with fully-interactive aligned environments that parallel textual interactions with photo-realistic renderings and physical interactions.\nInteractive Text-Only Environments: We build on the work of text-based environments like TextWorld (Côté et al., 2018) and Jericho (Hausknecht et al., 2020). While these environment allow for textual interactions, they are not grounded in visual or physical modalities.\nVision and language: While substantial work exists on vision-language representation learning e.g., MAttNet (Yu et al., 2018b), CMN (Hu et al., 2017), VQA (Antol et al., 2015), CLEVR (Johnson et al., 2017), ViLBERT (Lu et al., 2019), they lack embodied or sequential decision making.\nEmbodied Language Learning: To address language learning in embodied domains, a number of interactive environments have been proposed: BabyAI (Chevalier-Boisvert et al., 2019), Room2Room (Anderson et al., 2018b), ALFRED (Shridhar et al., 2020), InteractiveQA (Gordon et al., 2018), EmbodiedQA (Das et al., 2018), and NetHack (Küttler et al., 2020). These environments use language to communicate instructions, goals, or queries to the agent, but not as a fully-interactive textual modality.\nLanguage for State and Action Representation: Others have used language for more than just goal-specification. Schwartz et al. (2019) use language as an intermediate state to learn policies in VizDoom. Similarly, Narasimhan et al. (2018) and Zhong et al. (2020) use language as an intermediate representation to transfer policies across different environments. Hu et al. (2019) use a natural language instructor to command a low-level executor, and Jiang et al. (2019) use language as an abstraction for hierarchical RL. However these works do not feature an interactive text environment for pre-training the agent in an abstract textual space. Zhu et al. (2017) use high-level commands similar to ALFWorld to solve tasks in THOR with IL and RL-finetuning methods, but the policy only generalizes to a small set of tasks due to the vision-based state representation. Using symbolic representations for state and action is also an inherent characteristic of works in task-and-motionplanning (Kaelbling and Lozano-Pérez, 2011; Konidaris et al., 2018) and symbolic planning (Asai and Fukunaga, 2017).\nWorld Models: The concept of using TextWorld as a “game engine” to represent the world is broadly related to inverse graphics (Kulkarni et al., 2015) and inverse dynamics (Wu et al., 2017) where abstract visual or physical models are used for reasoning and future predictions. 
Similarly, some results in cognitive science suggest that humans use language as a cheaper alternative to sensorimotor simulation (Banks et al., 2020; Dove, 2014)." }, { "heading": "7 CONCLUSION", "text": "We introduced ALFWorld, the first interactive text environment with aligned embodied worlds. ALFWorld allows agents to explore, interact, and learn abstract policies in a textual environment. Pre-training our novel BUTLER agent in TextWorld, we show zero-shot generalization to embodied tasks in the ALFRED dataset. The results indicate that reasoning in textual space allows for better generalization to unseen tasks and also faster training, compared to other modalities like vision.\nBUTLER is designed with modular components which can be upgraded in future work. Examples include the template-based state-estimator and the A* navigator, which could be replaced with learned modules, enabling end-to-end training of the full pipeline. Another avenue of future work is to learn “textual dynamics models” through environment interactions, akin to vision-based world models (Ha and Schmidhuber, 2018). Such models would facilitate construction of text-engines for new domains, without requiring access to symbolic state descriptions like PDDL. Overall, we are excited by the challenges posed by aligned text and embodied environments for better cross-modal learning." }, { "heading": "A DETAILS OF BUTLER::BRAIN", "text": "In this section, we use ot to denote the text observation at game step t, and g to denote the goal description provided by a game.\nWe use L to refer to a linear transformation, and Lf means it is followed by a non-linear activation function f. Brackets [⋅; ⋅] denote vector concatenation, and ⊙ denotes element-wise multiplication.\nA.1 OBSERVATION QUEUE\nAs mentioned in Section 3.1, we utilize an observation queue to cache the text observations that have been seen recently. Since the initial observation o0 describes the high-level layout of a room, including receptacles present in the current game, we keep it visible to BUTLER::BRAIN at all game steps, regardless of the length of the observation queue. Specifically, the observation queue has an extra space storing o0; at any game step, we first concatenate all cached observations in the queue, then prepend o0 to form the input to the encoder. We find this helpful because it facilitates the pointer softmax mechanism in the decoder (described below) by guiding it to point to receptacle words in the observation. An ablation study on this is provided in Section 5.\nA.2 ENCODER\nWe use a transformer-based encoder, which consists of an embedding layer and a transformer block (Vaswani et al., 2017). Specifically, embeddings are initialized by pre-trained 768-dimensional BERT embeddings (Sanh et al., 2019). The embeddings are fixed during training in all settings.\nThe transformer block consists of a stack of 5 convolutional layers, a self-attention layer, and a 2-layer MLP with a ReLU non-linear activation function in between. In the block, each convolutional layer has 64 filters, and each kernel’s size is 5. In the self-attention layer, we use a block hidden size H of 64, as well as a single-head attention mechanism. Layernorm (Ba et al., 2016) is applied after each component inside the block. Following standard transformer training, we add positional encodings into each block’s input.\nAt every game step t, we use the same encoder to process the text observation ot and the goal description g. The resulting representations are $h_{o_t} \in \mathbb{R}^{L_{o_t} \times H}$ and $h_g \in \mathbb{R}^{L_g \times H}$, where $L_{o_t}$ is the number of tokens in ot, $L_g$ denotes the number of tokens in g, and $H = 64$ is the hidden size.
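To make the encoder block above concrete, here is a minimal PyTorch-style sketch. The layer sizes follow the text (5 convolutional layers with 64 filters of kernel size 5, single-head self-attention, a 2-layer MLP with ReLU, and Layernorm after each component), but the class name and the 768-to-64 projection bridging the frozen BERT embeddings to the block hidden size are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class TextEncoderBlock(nn.Module):
    """Sketch of the A.2 transformer block: 5 convs -> self-attention -> 2-layer MLP."""
    def __init__(self, hidden=64, n_convs=5, kernel=5):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2) for _ in range(n_convs)])
        self.conv_norms = nn.ModuleList([nn.LayerNorm(hidden) for _ in range(n_convs)])
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.attn_norm = nn.LayerNorm(hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.mlp_norm = nn.LayerNorm(hidden)

    def forward(self, x):  # x: (batch, seq_len, hidden), positional encodings already added
        for conv, norm in zip(self.convs, self.conv_norms):
            x = norm(conv(x.transpose(1, 2)).transpose(1, 2))  # convolve along the sequence
        attended, _ = self.attn(x, x, x)                       # single-head self-attention
        x = self.attn_norm(attended)
        return self.mlp_norm(self.mlp(x))

# Usage: project frozen 768-d BERT embeddings down to H = 64, then encode.
project = nn.Linear(768, 64)
encoder = TextEncoderBlock()
tokens = torch.randn(2, 30, 768)        # stand-in for BERT-embedded observation tokens
h_o = encoder(project(tokens))          # -> (2, 30, 64)
```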
A.3 AGGREGATOR\nWe adopt the context-query attention mechanism from the question answering literature (Yu et al., 2018a) to aggregate the two representations $h_{o_t}$ and $h_g$.\nSpecifically, a tri-linear similarity function is used to compute the similarity between each token in $h_{o_t}$ and each token in $h_g$. The similarity between the i-th token in $h_o$ and the j-th token in $h_g$ is thus computed by (omitting the game step t for simplicity):\n$\mathrm{Sim}(i,j) = W(h_{o_i}, h_{g_j}, h_{o_i} \odot h_{g_j})$, (1)\nwhere W is a trainable parameter in the tri-linear function. By applying the above computation for each $h_o$ and $h_g$ pair, we get a similarity matrix $S \in \mathbb{R}^{L_o \times L_g}$.\nBy computing the softmax of the similarity matrix S along both dimensions (number of tokens in goal description $L_g$ and number of tokens in observation $L_o$), we get $S_g$ and $S_o$, respectively. The two representations are then aggregated by:\n$h_{og} = [h_o; P; h_o \odot P; h_o \odot Q]$, $P = S_g h_g^{\top}$, $Q = S_g S_o^{\top} h_o^{\top}$, (2)\nwhere $h_{og} \in \mathbb{R}^{L_o \times 4H}$ is the aggregated observation representation.\nNext, a linear transformation projects the aggregated representations to a space with size H = 64:\n$h_{og} = L^{\tanh}(h_{og})$. (3)\nTo incorporate history, we use a recurrent neural network. Specifically, we use a GRU (Cho et al., 2014):\n$h_{\mathrm{RNN}} = \mathrm{Mean}(h_{og})$, $h_t = \mathrm{GRU}(h_{\mathrm{RNN}}, h_{t-1})$, (4)\nin which the mean pooling is performed along the dimension of the number of tokens, i.e., $h_{\mathrm{RNN}} \in \mathbb{R}^{H}$, and $h_{t-1}$ is the output of the GRU cell at game step t − 1.\nA.4 DECODER\nOur decoder consists of an embedding layer, a transformer block, and a pointer softmax mechanism (Gulcehre et al., 2016). We first obtain the source representation by concatenating $h_{og}$ and $h_t$, resulting in $h_{\mathrm{src}} \in \mathbb{R}^{L_o \times 2H}$.\nSimilar to the encoder, the embedding layer is frozen after initializing it with pre-trained BERT embeddings. The transformer block consists of two attention layers and a 3-layer MLP with ReLU non-linear activation functions in between. The first attention layer computes the self-attention of the input embeddings $h_{\mathrm{self}}$ as a contextual encoding for the target tokens. The second attention layer then computes the attention $\alpha^i_{\mathrm{src}} \in \mathbb{R}^{L_o}$ between the source representation $h_{\mathrm{src}}$ and the i-th token in $h_{\mathrm{self}}$. The i-th target token is consequently represented by the weighted sum of $h_{\mathrm{src}}$, with the weights $\alpha^i_{\mathrm{src}}$. This generates a source information-aware target representation $h'_{\mathrm{tgt}} \in \mathbb{R}^{L_{\mathrm{tgt}} \times H}$, where $L_{\mathrm{tgt}}$ denotes the number of tokens in the target sequence. Next, $h'_{\mathrm{tgt}}$ is fed into the 3-layer MLP with ReLU activation functions in between, resulting in $h_{\mathrm{tgt}} \in \mathbb{R}^{L_{\mathrm{tgt}} \times H}$. The block hidden size of this transformer is H = 64. Taking $h_{\mathrm{tgt}}$ as input, a linear layer with tanh activation projects the target representation into the same space as the embeddings (with dimensionality 768); then the pre-trained embedding matrix E generates the output logits (Press and Wolf, 2016), where the output size is the same as the vocabulary size. The resulting logits are then normalized by a softmax to generate a probability distribution over all tokens in the vocabulary:\n$p_a(y_i) = \mathrm{Softmax}(E\, L^{\tanh}(h_{\mathrm{tgt}}))$, (5)\nin which $p_a(y_i)$ is the generation (abstractive) probability distribution. We employ the pointer softmax (Gulcehre et al., 2016) mechanism to switch between generating a token $y_i$ (from the vocabulary) and pointing (to a token in the source text). Specifically, the pointer softmax module computes a scalar switch $s_i$ at each generation time-step i and uses it to interpolate the abstractive distribution $p_a(y_i)$ over the vocabulary (Equation 5) and the extractive distribution $p_x(y_i) = \alpha^i_{\mathrm{src}}$ over the source text tokens:\n$p(y_i) = s_i \cdot p_a(y_i) + (1 - s_i) \cdot p_x(y_i)$, (6)\nwhere $s_i$ is conditioned on both the attention-weighted source representation $\sum_j \alpha^{i,j}_{\mathrm{src}} \cdot h^j_{\mathrm{src}}$ and the decoder state $h^i_{\mathrm{tgt}}$:\n$s_i = L_1^{\mathrm{sigmoid}}\big(\tanh\big(L_2\big(\textstyle\sum_j \alpha^{i,j}_{\mathrm{src}} \cdot h^j_{\mathrm{src}}\big) + L_3(h^i_{\mathrm{tgt}})\big)\big)$. (7)\nHere, $L_1 \in \mathbb{R}^{H \times 1}$, $L_2 \in \mathbb{R}^{2H \times H}$ and $L_3 \in \mathbb{R}^{H \times H}$ are linear layers, and H = 64.
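The pointer-softmax interpolation of Equations (6)-(7) can be sketched as follows; this is our own simplified rendering (the class name and the scatter-based copy distribution are assumptions), taking the abstractive distribution, the source attention weights, and the source token ids as precomputed inputs.

```python
import torch
import torch.nn as nn

class PointerSoftmax(nn.Module):
    """Sketch of Eqs. (6)-(7): mix generating from the vocabulary with copying a source token."""
    def __init__(self, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(hidden, 1)           # L_1 in Eq. (7)
        self.l2 = nn.Linear(2 * hidden, hidden)  # L_2 (h_src has width 2H)
        self.l3 = nn.Linear(hidden, hidden)      # L_3 (decoder-state side)

    def forward(self, p_vocab, alpha_src, h_src, h_tgt, src_token_ids):
        # p_vocab: (B, V) abstractive distribution from Eq. (5)
        # alpha_src: (B, L_src) attention over source tokens; h_src: (B, L_src, 2H)
        # h_tgt: (B, H) decoder state at this step; src_token_ids: (B, L_src) long
        ctx = torch.bmm(alpha_src.unsqueeze(1), h_src).squeeze(1)  # sum_j alpha_ij * h_src_j
        s = torch.sigmoid(self.l1(torch.tanh(self.l2(ctx) + self.l3(h_tgt))))  # switch s_i
        # place the extractive weights p_x on the vocabulary ids of the source tokens
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, src_token_ids, alpha_src)
        return s * p_vocab + (1.0 - s) * p_copy                    # Eq. (6)
```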
" }, { "heading": "B TRAINING AND IMPLEMENTATION DETAILS", "text": "In this section, we provide hyperparameters and other implementation details.\nFor all experiments, we use Adam (Kingma and Ba, 2014) as the optimizer. The learning rate is set to 0.001 with a clip gradient norm of 5.\nDuring training with DAgger, we use a batch size of 10 to collect transitions (tuples of {o0, ot, g, ât}) at each game step t, where ât is the ground-truth action provided by the rule-based expert (see Appendix E). We gather a sequence of transitions from each game episode and push each sequence into a replay buffer, which has a capacity of 500K episodes. We set the maximum number of steps per episode to be 50. If the agent uses up this budget, the game episode is forced to terminate. We linearly anneal the fraction of the expert’s assistance from 100% to 1% across a window of 50K episodes.\nThe agent is updated after every 5 steps of data collection. We sample a batch of 64 data points from the replay buffer. In the setting with the recurrent aggregator, every sampled data point is a sequence of 4 consecutive transitions. Following the training strategy used in the recurrent DQN literature (Hausknecht and Stone, 2015; Yuan et al., 2018), we use the first 2 transitions to estimate the recurrent states and the last 2 transitions for updating the model parameters.\nBUTLER::BRAIN learns to generate actions token-by-token, where we set the maximum token length to be 20. The decoder stops generation either when it generates a special end-of-sentence token [EOS] or when it hits the token length limit.\nWhen using the beam search heuristic to recover from failed actions (see Figure 5), we use a beam width of 10 and take the top-5 ranked outputs as candidates. We iterate through the candidates in rank order until one of them succeeds. This heuristic is not always guaranteed to succeed; however, we find it helpful in most cases. Note that, for efficiency, we do not employ beam search when we evaluate during the training process, e.g., in the seen and unseen curves shown in Figure 4. We take the best performing checkpoints and then apply this heuristic during evaluation and report the resulting scores in tables (e.g., Table 2).\nBy default, unless mentioned otherwise (ablations), we use all available training games in each of the task types. We use an observation queue length of 5 and use a recurrent aggregator. The model is trained with DAgger, and during evaluation, we apply the beam search heuristic to produce the reported scores. All experiment settings in TextWorld are run with 8 random seeds. All text agents are trained for 50,000 episodes.
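The collection-and-update cycle described above can be summarized in pseudocode. The sketch below is our own schematic — the environment, agent, expert, and replay objects are placeholders — wiring together the stated hyperparameters: updates every 5 steps on batches of 64, a 50-step episode budget, and expert assistance annealed linearly from 100% to 1% over 50K episodes.

```python
import random

def expert_fraction(episode, start=1.0, end=0.01, anneal_episodes=50_000):
    """Linear annealing of the probability of following the rule-based expert."""
    t = min(episode / anneal_episodes, 1.0)
    return start + t * (end - start)

def train_dagger(env, agent, expert, replay, episodes, max_steps=50,
                 update_every=5, batch=64):
    for ep in range(episodes):
        frac = expert_fraction(ep)
        o0, goal = env.reset()                      # initial observation and goal description
        obs, trajectory = o0, []
        for t in range(max_steps):
            a_expert = expert.act(obs)              # ground-truth action from the rule-based expert
            action = a_expert if random.random() < frac else agent.act(o0, obs, goal)
            trajectory.append((o0, obs, goal, a_expert))
            obs, done = env.step(action)
            if t % update_every == 0 and len(replay) >= batch:
                agent.update(replay.sample(batch))  # sequences of 4 transitions: the first 2
            if done:                                # warm up the GRU state, the last 2
                break                               # provide the parameter gradients
        replay.push(trajectory)                     # buffer capacity: 500K episodes
```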
}, { "heading": "C TEXTWORLD ENGINE", "text": "Internally, the TextWorld Engine is divided into two main components: a planner and text generator.\nPlanner TextWorld Engine uses Fast Downward (Helmert, 2006), a domain-independent classical planning system to maintain and update the current state of the game. A state is represented by a set of predicates which define the relations between the entities (objects, player, room, etc.) present in the game. A state can be modified by applying production rules corresponding to the actions listed in Table 6. All variables, predicates, and rules are defined using the PDDL language.\nFor instance, here is a simple state representing a player standing next to a microwave which is closed and contains a mug:\nst = at(player,microwave)⊗ in(mug,microwave) ⊗ closed(microwave)⊗ openable(microwave),\nwhere the symbol ⊗ is the linear logic multiplicative conjunction operator. Given that state, a valid action could be open microwave, which would essentially transform the state by replacing closed(microwave) with open(microwave).\nText generator The other component of the TextWorld Engine, the text generator, uses a contextsensitive grammar designed for the ALFRED environments. The grammar consists of text templates similar to those listed in Table 6. When needed, the engine will sample a template given some context,\ni.e., the current state and the last action. Then, the template gets realized using the predicates found in the current state." }, { "heading": "D MASK R-CNN DETECTOR", "text": "We use a Mask R-CNN detector (He et al., 2017) pre-trained on MSCOCO (Lin et al., 2014) and fine-tune it with additional labels from ALFRED training scenes. To generate additional labels, we replay the expert demonstrations from ALFRED and record ground-truth image and instance segmentation pairs from the simulator (THOR) after completing each high-level action e.g., goto, pickup etc. We generate a dataset of 50K images, and fine-tune the detector for 4 epochs with a batch size of 8 and a learning rate of 5e-4. The detector recognizes 73 object classes where each class could vary up to 1-10 instances. Since demonstrations in the kitchen are often longer as they involve complex sequences like heating, cleaning etc., the labels are slightly skewed towards kitchen objects. To counter this, we balance the number of images sampled from each room (kitchen, bedroom, livingroom, bathroom) so the distribution of object categories is uniform across the dataset." }, { "heading": "E RULE-BASED EXPERT", "text": "To train text agents in an imitation learning (IL) setting, we use a rule-based expert for supervision. A given task is decomposed into sequence of subgoals (e.g., for heat & place: find the object, pick the object, find the microwave, heat the object with the microwave, find the receptacle, place the object in the receptacle), and a closed-loop controller tries to sequentially execute these goals. We note that while designing rule-based experts for ALFWorld is relatively straightforward, experts operating directly in embodied settings like the PDDL planner used in ALFRED are prone to failures due to physical infeasibilities and non-deterministic behavior in physics-based environments." }, { "heading": "F BENEFITS OF TRAINING IN TEXTWORLD OVER EMBODIED WORLD", "text": "Pre-training in TextWorld offers several benefits over directly training in embodied environments. Figure 6 presents the performance of an expert (that agents are trained to imitate) across various environments. 
The abstract textual space leads to higher goal success rates resulting from successful navigation and manipulation subroutines. TextWorld agents also do not suffer from object misdetections and slow execution speed." }, { "heading": "G OBSERVATION TEMPLATES", "text": "The following templates are used by the state-estimator to generate textual observations ot. The object IDs {obj id} correspond to Mask R-CNN objects detection or ground-truth instance IDs. The receptacle IDs {recep id} are based on the receptacles listed in the initial observation o0. Failed actions and actions without any state-changes result in Nothing happens." }, { "heading": "H GOAL DESCRIPTIONS", "text": "H.1 TEMPLATED GOALS\nThe goal instructions for training games are generated with following templates. Here obj, recep, lamp refer to object, receptacle, and lamp classes, respectively, that pertain to a particular task. For each task, the two corresponding templates are sampled with equal probability.\nH.2 HUMAN ANNOTATED GOALS\nThe human goal descriptions used during evaluation contain 66 unseen verbs and 189 unseen nouns with respect to the templated goal instructions used during training.\nUnseen Verbs: acquire, arrange, can, carry, chill, choose, cleaning, clear, cook, cooked, cooled, dispose, done, drop, end, fill, filled, frying, garbage, gather, go, grab, handled, heated, heating, hold, holding, inspect, knock, left, lit, lock, microwave, microwaved, move, moving, pick, picking, place, placed, placing, putting, read, relocate, remove, retrieve, return, rinse, serve, set, soak, stand, standing, store, take, taken, throw, transfer, turn, turning, use, using, walk, warm, wash, washed.\nUnseen Nouns: alarm, area, back, baisin, bar, bars, base, basin, bathroom, beat, bed, bedroom, bedside, bench, bin, books, bottle, bottles, bottom, box, boxes, bureau, burner, butter, can, canteen, card, cardboard, cards, cars, cds, cell, chair, chcair, chest, chill, cistern, cleaning, clock, clocks, coffee, container, containers, control, controllers, controls, cooker, corner, couch, count, counter, cover, cream, credit, cupboard, dining, disc, discs, dishwasher, disks, dispenser, door, drawers, dresser, edge, end, floor, food, foot, freezer, game, garbage, gas, glass, glasses, gold, grey, hand, head, holder, ice, inside, island, item, items, jars, keys, kitchen, knifes, knives, laddle, lamp, lap, left, lid, light, loaf, location, lotion, machine, magazine, maker, math, metal, microwaves, move, nail, newsletters, newspapers, night, nightstand, object, ottoman, oven, pans, paper, papers, pepper, phone, piece, pieces, pillows, place, polish, pot, pullout, pump, rack, rag, recycling, refrigerator, remote, remotes, right, rinse, roll, rolls, room, safe, salt, scoop, seat, sets, shaker, shakers, shelves, side, sink, sinks, skillet, soap, soaps, sofa, space, spatulas, sponge, spoon, spot, spout, spray, stand, stool, stove, supplies, table, tale, tank, television, textbooks, time, tissue, tissues, toaster, top, towel, trash, tray, tv, vanity, vases, vault, vegetable, wall, wash, washcloth, watches, water, window, wine." }, { "heading": "I ACTION CANDIDATES VS ACTION GENERATION", "text": "BUTLER::BRAIN generates actions in a token-by-token fashion. Prior text-based agents typically use a list of candidate commands from the game engine (Adhikari et al., 2020) or populate a list of command templates (Ammanabrolu and Hausknecht, 2020). 
We initially trained our agents with candidate commands from the TextWorld Engine, but they quickly overfit without learning affordances, commonsense, or pre-conditions, and had zero performance on embodied transfer. In the embodied setting, without access to a TextWorld Engine, it is difficult to generate candidate actions unless a set of heuristics is handcrafted with strong priors and commonsense knowledge. We also experimented with populating a list of command templates, but found this to be infeasible as some scenarios involved 1000s of populated actions per game step." }, { "heading": "J ALFRED TASK DESCRIPTIONS", "text": "The following descriptions describe the processes involved in each of the six task-types:\n• Pick & Place (e.g., “put a plate on the coffee table”) - the agent must find an object of the desired type, pick it up, find the correct location to place it, and put it down there.\n• Examine in Light (e.g., “examine a book under the lamp”) - the agent must find an object of the desired type, then locate and turn on a light source with the desired object in-hand.\n• Clean & Place (e.g., “clean the knife and put in the drawer”) - the agent must find an object of the desired type, pick it up, go to a sink or a basin, wash the object by turning on the faucet, then find the correct location to place it, and put it down there.\n• Heat & Place (e.g., “heat a mug and put on the coffee table”) - the agent must find an object of the desired type, pick it up, go to a microwave, heat the object by turning on the microwave, then find the correct location to place it, and put it down there.\n• Cool & Place (e.g., “put a cool bottle on the countertop”) - the agent must find an object of the desired type, pick it up, go to a fridge, put the object inside the fridge and cool it, then find the correct location to place it, and put it down there.\n• Pick Two & Place (e.g., “put two pencils in the drawer”) - the agent must find an object of the desired type, pick it up, find the correct location to place it, put it down there, then look for another object of the desired type, pick it up, return to the previous location, and put it down there with the other object." }, { "heading": "K ALFWORLD TEXT GAME EXAMPLES", "text": "Three examples of ALFWorld text game walkthroughs:\nYou are in the middle of a room. Looking quickly around you, you see a towelholder 1, a toilet 1, a bathtubbasin 1, a drawer 4, a handtowelholder 2, a drawer 6, a drawer 1, a countertop 1, a sinkbasin 1, a drawer 2, a drawer 3, a toiletpaperhanger 1, a drawer 5, a handtowelholder 1, a towelholder 2, a sinkbasin 2, and a garbagecan 1.\nYour task is to: put a clean cloth in bathtubbasin.\n> go to drawer 5 You arrive at loc 9. The drawer 5 is closed. > open drawer 5 You open the drawer 5. The drawer 5 is open. In it, you see a candle 1. > close drawer 5 You close the drawer 5. > go to countertop 1 You arrive at loc 25. On the countertop 1, you see a cloth 2, a soapbottle 1, a cloth 1, and a candle 3. > take cloth 1 from countertop 1 You pick up the cloth 1 from the countertop 1. > go to sinkbasin 1 You arrive at loc 20. On the sinkbasin 1, you see nothing. > clean cloth 1 with sinkbasin 1 You clean the cloth 1 using the sinkbasin 1. > go to bathtubbasin 1 You arrive at loc 14. On the bathtubbasin 1, you see a soapbar 1. > put cloth 1 in/on bathtubbasin 1 You won!\nYou are in the middle of a room.
Looking quickly around you, you see a drawer 15, a drawer 19, a drawer 7, a armchair 1, a drawer 3, a drawer 16, a drawer 2, a drawer 17, a sidetable 3, a sidetable 5, a drawer 4, a drawer 20, a sofa 1, a drawer 13, a drawer 12, a drawer 9, a drawer 21, a sidetable 4, a sidetable 2, a drawer 5, a drawer 11, a drawer 1, a garbagecan 1, a dresser 1, a cabinet 1, a drawer 8, a drawer 18, a sidetable 1, a drawer 14, a drawer 6, and a drawer 10.\nYour task is to: find two remotecontrol and put them in armchair.\n> go to sidetable 2 You arrive at loc 34. On the sidetable 2, you see a remotecontrol 1. > take remotecontrol 1 from sidetable 2 You pick up the remotecontrol 1 from the sidetable 2. > go to armchair 1 You arrive at loc 1. On the armchair 1, you see nothing. > put remotecontrol 1 in/on armchair 1 You put the remotecontrol 1 in/on the armchair 1. > go to dresser 1 You arrive at loc 18. On the dresser 1, you see a creditcard 1, a television 1, a laptop 1, a book 1, a statue 2, and a cellphone 2. > go to sidetable 5 You arrive at loc 17. On the sidetable 5, you see a box 1, and a creditcard 2. > go to sofa 1 You arrive at loc 2. On the sofa 1, you see a newspaper 1, a pillow 1, and a remotecontrol 2. > take remotecontrol 2 from sofa 1 You pick up the remotecontrol 2 from the sofa 1. > go to armchair 1 You arrive at loc 1. On the armchair 1, you see a remotecontrol 1. > put remotecontrol 2 in/on armchair 1 You won!\nYou are in the middle of a room. Looking quickly around you, you see a safe 1, a shelf 4, a drawer 2, a bed 1, a drawer 1, a shelf 5, a shelf 2, a sidetable 2, a shelf 3, a drawer 3, a shelf 1, a sidetable 1, a desk 1, and a garbagecan 1.\nYour task is to: examine the alarmclock with the desklamp.\n> go to desk 1 You arrive at loc 8. On the desk 1, you see a pen 1, a bowl 1, a alarmclock 2, a pencil 2, a pencil 3, a creditcard 3, a book 1, a alarmclock 3, a keychain 3, and a book 2. > take alarmclock 2 from desk 1 You pick up the alarmclock 2 from the desk 1. > go to sidetable 2 You arrive at loc 1. On the sidetable 2, you see a desklamp 1, and a alarmclock 1. > use desklamp 1 You won!" } ]
2021
null
SP:4f7eaeae0559362f0caf13406b20914c120de74b
[ "This paper proposes a training method for classification, with the goal of training with less data. The proposal is to train an auxiliary classifier at the same time. The auxiliary classifier and the main classifier share the early layers. The auxiliary classifier is a binary classifier that discriminates training data versus background/noise data. The proposed method is evaluated on image and speech classification tasks." ]
Deep learning models owe their success, at large, to the availability of a large amount of annotated data. They try to extract features from the data that contain useful information needed to improve their performance on target applications. Most works focus on directly optimizing the target loss functions to improve the accuracy by allowing the model to implicitly learn representations from the data. There has not been much work on using background/noise data to estimate the statistics of in-domain data to improve the feature representation of deep neural networks. In this paper, we probe this direction by deriving a relationship between the estimation of unknown parameters of the probability density function (pdf) of input data and classification accuracy. Using this relationship, we show that having a better estimate of the unknown parameters using background and in-domain data provides better features which leads to better accuracy. Based on this result, we introduce a simple but effective detection booster training (DBT) method that applies a detection loss function on the early layers of a neural network to discriminate in-domain data points from noise/background data, to improve the classifier accuracy. The background/noise data comes from the same family of pdfs of input data but with different parameter sets (e.g., mean, variance). In addition, we also show that our proposed DBT method improves the accuracy even with limited labeled in-domain training samples as compared to normal training. We conduct experiments on face recognition, image classification, and speaker classification problems and show that our method achieves superior performance over strong baselines across various datasets and model architectures.
[]
[ { "authors": [ "Richard O Duda", "Peter E Hart", "David G Stork" ], "title": "Pattern classification", "venue": null, "year": 2012 }, { "authors": [ "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS, pp. 2672–2680,", "year": 2014 }, { "authors": [ "405 Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "A. Rabinovich" ], "title": "Going deeper with convolutions", "venue": "CVPR, pp", "year": 2015 }, { "authors": [ "Christophe Veaux", "Junichi Yamagishi", "Kirsten MacDonald" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei Efros" ], "title": "Colorful image colorization", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "Deep learning models owe their success at large, to the availability of a large1 amount of annotated data. They try to extract features from the data that contain2 useful information needed to improve their performance on target applications.3 Most works focus on directly optimizing the target loss functions to improve the4 accuracy by allowing the model to implicitly learn representations from the data.5 There has not been much work on using background/noise data to estimate the6 statistics of in-domain data to improve the feature representation of deep neural7 networks. In this paper, we probe this direction by deriving a relationship between8 the estimation of unknown parameters of the probability density function (pdf)9 of input data and classification accuracy. Using this relationship, we show that10 having a better estimate of the unknown parameters using background and in-11 domain data provides better features which leads to better accuracy. Based on12 this result, we introduce a simple but effective detection booster training (DBT)13 method that applies a detection loss function on the early layers of a neural network14 to discriminate in-domain data points from noise/background data, to improve15 the classifier accuracy. The background/noise data comes from the same family16 of pdfs of input data but with different parameter sets (e.g., mean, variance). In17 addition, we also show that our proposed DBT method improves the accuracy even18 with limited labeled in-domain training samples as compared to normal training.19 We conduct experiments on face recognition, image classification, and speaker20 classification problems and show that our method achieves superior performance21 over strong baselines across various datasets and model architectures.22\n1 INTRODUCTION23 Modern pattern recognition systems achieve outstanding accuracies on a vast domain of challenging24 computer vision, natural language, and speech recognition benchmarks (Russakovsky et al. (2015);25 Lin et al. (2014); Everingham et al. (2015); Panayotov et al. (2015)). The success of deep learning26 approaches relies on the availability of a large amount of annotated data and on extracting useful27 features from them for different applications. Learning rich feature representations from the available28 data is a challenging problem in deep learning. A related line of work includes learning deep latent29 space embedding through deep generative models (Kingma & Welling (2014); Goodfellow et al.30 (2014); Berthelot et al. (2019) or using self-supervised learning methods (Noroozi & Favaro (2016);31 Gidaris et al. (2018); Zhang et al. (2016b)) or through transfer learning approaches (Yosinski et al.32 (2014); Oquab et al. (2014); Razavian et al. (2014)).33\nIn this paper, we propose to use a different approach to improve the feature representations of deep34 neural nets and eventually improve their accuracy by estimating the unknown parameters of the35 probability density function (pdf) of input data. Parameter estimation or Point estimation methods36 are well studied in the field of statistical inference (Lehmann & Casella (1998)). The insights from37 the theory of point estimation can help us to develop better deep model architectures for improving38 the model’s performance. We make use of this theory to derive a correlation between the estimation39 of unknown parameters of pdf and classifier outputs. 
However, directly estimating the unknown pdf parameters for practical problems such as image classification is not feasible, since it can sum up to millions of parameters. In order to overcome this bottleneck, we assume that the input data points are sampled from a family of pdfs instead of a single pdf and propose to use a detection-based training approach to better estimate the unknowns using in-domain and background/noise data. One alternative is that we can use generative models for this task; however, they mimic the general distribution of training data conditioned on random latent vectors and hence cannot be directly applied for estimating the unknown parameters of a family of pdfs. Our proposed detection method involves a binary class discriminator that separates the target data points from noise or background data. The noise or background data is assumed to come from the same family of distributions as the in-domain data but with different moments (please refer to the appendix for more details about the family of distributions and its extension to a general structure). In image classification, this typically represents the background patches from input data that fall under the same distribution family. In the speech domain, it can be random noise or the silence intervals in speech data. Collecting such background data to improve the feature representations is much simpler compared to using labeled training data, since it is time-consuming and expensive to collect labeled data. Since the background patches in images or noise in speech signals are used for binary classification in our method, we refer to such data as the noise of an auxiliary binary classification problem, denoted by the auxiliary binary classification (ABC)-noise dataset. An advantage of using ABC-noise data during training is that it can implicitly add robustness to deep neural networks against background or noisy data.\nSince ABC-noise data can be collected in large quantities for free and using that data in our approach improves the classification benchmarks, we investigate whether this data can act as a substitute for labeled data. We conduct empirical analysis and show that using only a fraction of labeled training data together with ABC-noise data in our DBT method indeed improves the accuracy as compared to normal training.\nTo summarize, our contributions are threefold. First, we present a detailed theoretical analysis on the relation between the estimation of unknown parameters of the pdf of data and classification outputs. Second, based on the theoretical analysis, we present a simple booster training method to improve classification accuracy, which also doubles up as an augmented training method when only limited labeled data is available. Third, we consistently achieve improved performances over strong baselines on face recognition, image classification, and speaker recognition problems using our proposed method, showing its generalization across different domains and model architectures.\n2 RELATED WORK\nNotations and Preliminary: In this paper, vectors, matrices, functions, and sets are denoted by bold lowercase, bold uppercase, lowercase, and calligraphic characters, respectively. Consider a data point denoted by $x$. We assume that $x$ belongs to a family of probability density functions (pdfs) defined as $\mathcal{P} = \{p(x, \theta), \theta \in \Theta\}$, where $\Theta$ is the possible set of parameters of the pdf. In general, $\theta$ is a real vector in higher dimensions.
For example, in a mixture of Gaussians, $\theta$ is a vector containing the component weights, the component means, and the component covariance matrices. In this paper, we assume that $\theta$ is an unknown deterministic vector (there are other approaches, such as Bayesian ones, that consider $\theta$ as a random vector). In general, although the structure of the family of pdfs is itself unknown, defining a family of pdfs such as $\mathcal{P}$ can help us to develop theorems and use those results to derive a new method. For the family of distributions $\mathcal{P}$, we can define the following classification problem:\n$\{\, C_1: \theta \in \Theta_1,\; C_2: \theta \in \Theta_2,\; \cdots,\; C_n: \theta \in \Theta_n \,\}$ (1)\nwhere the set of $\Theta_i$'s is a partition of $\Theta$. The notation of (1) means that class $C_i$ deals with a set of data points whose pdf is $p(x, \theta_i)$, where $\theta_i \in \Theta_i$. A wide range of classification problems can be defined using (1), e.g., (Lehmann & Casella, 2006, Chapter 3) and (Duda et al., 2012, Chapter 4). The problem of estimating $\theta$ comes under the category of parametric estimation or point estimation (Lehmann & Casella (1998)). Estimating the unknown parameters of a given pdf $p(x, \theta)$ has been extensively studied in the field of point estimation methods (Lindgren (2017); Lee et al. (2018); Lehmann & Casella (2006)). An important estimator in this field is the minimum variance unbiased estimator, and it is governed by the Cramer-Rao bound. The Cramer-Rao bound provides the lower bound of the variance of an unbiased estimator (Bobrovsky et al. (1987)). Let the estimation of $\theta$ be denoted by $\hat{\theta}$, and assume that $\hat{\theta}$ is an unbiased estimator, i.e., $E(\hat{\theta}) = \theta$. Its covariance matrix, denoted by $\Sigma_{\hat{\theta}}$, satisfies $\Sigma_{\hat{\theta}} - I^{-1}(\theta) \succeq 0$, where $A \succeq 0$ implies that $A$ is a non-negative definite matrix (Lehmann & Casella, 1998, Chapter 5) and $I(\theta) := -E(\partial^2 \log(p(x, \theta)) / \partial \theta^2)$ is called the Fisher information matrix. For an arbitrary differentiable function $g(\cdot)$, an efficient estimator of $g(\theta)$ is an unbiased estimator whose covariance matrix equals $I_g^{-1}(\theta)$, where $I_g(\theta)$ is the Fisher information matrix of $g(\theta)$, i.e., the efficient estimator achieves the lowest possible variance among all unbiased estimators.
This can be seen as an informal competition112 mechanism similar in spirit to the formal competition used in the adversarial networks game. In113 Bachman et al. (2019), a feature selection is proposed by maximizing the mutual information of the114 difference between features extracted from multiple views of a shared context. In that work, it is115 shown that the best results is given by using a mutual information bound based on NCE. The key116 difference between our method and NCE is that, we do not construct a generative model for noise.117 Instead of estimating the pdf of noise in NCE, we estimate the parameters of pdf of in-domain dataset118 using an auxiliary class that has many common parameters in its pdf. Moreover, we show that the119 estimation of that parameters are sufficient statistic for a classifier. We assume that the noise dataset is120 not pure and it has some similarity with the in-domain dataset, where it can help the feature selection121 layers to select relevant (in-domain) features, e.g., see Fig. 3. Further, in our approach, we do not122 construct the pdf of noise or in-domain data, instead we estimate its parameters directly, which is123 more efficient in terms of training, computation and also dimensionality reduction.124\nAuxiliary classifiers were introduced in inception networks (Szegedy et al. (2015)) and used in (Lee125 et al. (2015); S. et al. (2016)) for training very deep networks to prevent vanishing gradient problems.126 Further, auxiliary classifiers were also proposed for early exit schemes (Teerapittayanon et al. (2016))127 and self-distillation methods (Zhang et al. (2019a;b)). Such auxiliary classifiers tackle different128 problems by predicting the same target as the final classification layer. In contrast, our proposed DBT129 method involves auxiliary binary classifiers that detect noise, interference, and/or background data130 from in-domain data points for improving the target classification accuracy.131\n3 ESTIMATION OF PARAMETERS OF PDF AND CLASSIFICATION132\nFor (1), we define a deterministic discriminative function of Θi, denoted by ti(·) such that the133 following conditions are satisfied:134 • ti(·) maps Θ to real numbers such that ti(θ) > 0, if θ ∈ Θi and ti(θ) ≤ 0 for θ /∈ Θi.135 • ti(·) is a differentiable function almost everywhere and ∫ Θ |ti(θ)|dµl(θ) <∞, where µl denotes136 the Lebesgue measure.137 The following theorem shows the relationship of ti(·) and the log-likelihood ratio of class Ci versus138 other classes. The proofs of Theorems 1, 2 and 3 are provided in the appendix.139\nTheorem 1 Assume that the pdf p(x,θ) is differentiable with respect to θ almost everywhere. If the140 efficient minimum variance and unbiased estimation of a deterministic discriminative function of Θi141 exists, then the log likelihood ratio of class i against the rest of classes is an increasing function of142 the minimum variance and unbiased estimation of Θi.143\nDirectly from this theorem, it follows that the optimal classifier using the maximum likelihood for (1)144 is given as follows d(x) = arg maxi∈{1,··· ,n} ki(t̂i(x)), where ki’s are some increasing functions and145 ti(·)’s are the deterministic discriminative function of Θi’s such that the efficient minimum variance146 and unbiased estimation for them exists. Based on this result, a set of minimum variance and unbiased147 estimation of deterministic discriminative functions of Θi’s leads us to the maximum likelihood148 classifier. 
One approach is to directly estimate the deterministic discriminative functions, instead of149 maximizing the likelihood function. However, finding deterministic discriminative functions that150 have efficient minimum variance and unbiased estimation may not be feasible in practical problems,151\nespecially when the dimension of θ increases. Theorems 2 and 3 study the same relationship between152 the estimation of unknown parameters and the accuracy of classifiers for sub-optimal estimators and153 classifiers.154\nTheorem 2 Consider the output of two classifiers for the ith class as follows: rj(x) = i if hj(x) > τ155 and rj(x) = other classes if hj(x) < τ , where j ∈ {1, 2}. where hj(x) is the estimation of a156 deterministic discriminative function and τ is a classification threshold. Assume that the cumulative157 distribution function of hj(x)’s have bounded inflection points, and also, the probability of true158 positive of rj(x) is an increasing function of d(θ), which is the deterministic discriminative function159 of class i, for all i. Further assume that for each τ the probability of false positive of r1(x) is less160 than the probability of false positive of r2(x) and the probability of true positive of r1(x) is greater161 than the probability of true positive of r2(x). Then, there exists a hmin such that for all d(θ) > hmin162 and all θ we have Pr(|h1(x)− d(θ)| < ) > Pr(|h2(x)− d(θ)| < ).163\nTheorem 2 shows that a better classifier leads to a better estimation of d(θ). In the next theorem, we164 show the dual property of this result.165\nTheorem 3 Let Θm be a Borel set with positive Lebesgue measure in (1) for all m ∈ {1, · · · , n}.166 Assume that r1(·) and r2(·) are given as follows r1(x) = m, if θ̂1 ∈ Θm and r2(x) = m, if θ̂2 ∈ Θm.167 Also, assume that Pr(‖θ̂1 − θ‖ ≤ ) ≥ Pr(‖θ̂2 − θ‖ ≤ ), for all θ ∈ Θ = ∪nm=1Θm and > 0,168 then the probability of classification error r1(·) is less than r2(·) where θ̂1 and θ̂2 are two different169 estimators of θ ∈ Θ = ∪M−1m=0 Θm.170\nTheorem 3 proves that a more accurate estimator leads to a classifier that has a lower probability171 of classification error. From Theorem 1, we can infer that a sufficient statistic for developing the172 maximum likelihood classification is t̂i(x), which is the efficient minimum variance and unbiased173 estimation of the deterministic discriminative functions of Θi’s denoted by ti(θ). In other words, the174 maximum likelihood classifier is a function of x only via the efficient minimum variance and unbiased175 estimation ti(θ). We can estimate ti(θ) by replacing the estimation θ in ti(·), i.e., t̂i(θ) ≈ ti(θ̂),176 where θ̂ is a function of x. From the above theorems, we conclude that improving the estimation177 of unknown parameters of pdf of data can improve the accuracy of the classifier. On the other side,178 having a good classifier means having a good estimator of unknowns of the pdf of input data. In179 many practical problems, the optimal maximum likelihood classifier may not be achievable, but the180 likelihood function of the classifier provides an optimal bound of the probability of error. In such181 cases, we can improve the accuracy of sub-optimal classifiers and that is the main focus of this paper.182 Fig. 1 illustrates the proposed theorems visually.183\n4 PROPOSED METHOD: DETECTION BOOSTER TRAINING (DBT)184 In this section, we propose the detection booster training (DBT) method based on the achieved185 theorems in the previous section to improve the accuracy of deep networks. 
Specifically, we divide186 a deep model into two parts - early and later layers. We apply a detector (detection here means187 detecting a target pattern from noise/background) on the early layers of the neural network in order188\nto improve the estimation of unknown parameters of the family of pdf (based on Theorem 2). A189 better estimation of unknown parameters corresponds to better feature representations in the early190 layers and these features are input to the rest of the layers to construct the deterministic discriminative191 functions (DDF) useful for the in-domain data classification (based on Theorem 3).192\nA general schema for dividing a deep model into two sub-models namely PEF (parameter estimator193 functions) and DDF is depicted in Figure 2. The early layers of the model estimate the unknown194 parameters of pdf of data while the later layers construct the discriminative functions essential for195 classification. Based on this scheme, we formally define the three main components of DBT as196 follows:197 • parameter estimator functions (PEF): The sub-network from input layer to the kth layer, where k is198 a hyperparameter in the DBT approach.199 • auxiliary binary classification (ABC): Some additional layers are attached to the end of PEF,200 mapping the output of the kth layer to a one-dimensional vector.201 • deterministic discriminative functions (DDF): The sub-network from kth layer to the output of the202 model. The output of model is a vector equal to the length of the number of classes n.203\nFrom Theorem 2, we showed that unknown parameter estimation can be improved using a detection204 approach. During training, we apply a binary classification on the early layers (PEF) of the model to205 improve the estimation of unknown parameters of pdf and subsequently provide rich feature vectors206 for DDF. We define the auxiliary binary classification problem (ABC problem) as follows:207 • Class 1 (alternative hypothesis) of ABC problem denoted byH1 is set of all data points of classes208 of C1 to Cn, i.e. θ ∈ ∪ni=1Θi.209 • Class 0 (null hypothesis) of ABC problem denoted by H0 is a dataset of data points from same210 distribution p(x,θ) but θ /∈ ∪ni=1Θi. We also define the dataset of Class 0 of ABC as ABC-noise211 dataset, i.e., the ABC is given by the following hypothesis testing problem: H1 : θ ∈ ∪ni=1Θi versus212 H0 : θ /∈ ∪ni=1Θi. In many practical problems, the noise, background or interference data related to213 the in-domain dataset have same type of probability distribution but different pdf parameters. Hence,214 using that dataset is a cheap and adept choice for the null hypothesis of ABC.215\nThe Auxiliary Binary Classification problem influences only the PEF and ABC units while the main216 classification problem with n classes updates the parameters of both PEF and DDF using in-domain217 data. Since the auxiliary classifier is only used during training, the inference model (IM) consists of218 only PEF and DDF and hence, there is no additional computation cost during inference. We formulate219 the aforementioned method in the following notations and loss functions. Assume that x is a data220 point that belongs to Class Ci, i ∈ {1, · · · , n} or Class H0 of ABC. Here, we define two type of221 labels denoted by lABC and lMC, where the subscription \"MC\" stands for multi-classes. So, if x222 belongs to class Ci, then lABC = 1 and lMC = i− 1, else if x is a ABC-noise data point, lABC = 0223 and lMC is None. 
Therefore, the loss function is defined as:224\nLtot = LABC(QABC(QPEF(x)), lABC) + λlABCLMC(QDDF(QPEF(x)), lMC), (2)\nwhere QPEF, QABC and QDDF are the functions of PEF, ABC and DDF blocks, respectively. We225 set the hyperparameter λ = 1 to balance the two loss terms. It is seen that, the second term of the226 total loss is zero if lABC = 0. LABC and LMC are selected based on the problem definition and227 datasets. For classification, a simple selection for them can be binary cross-entropy and cross-entropy,228 respectively. For a given task and deep neural network, the choice of k and LABC influences the229 feature representation of early layers differently and consequently the accuracy of the model. We230 provide empirical studies in the next section to verify the same.231\n5 EXPERIMENTAL STUDY OF DBT232 FACE RECOGNITION233 We conduct experiments on face recognition benchmarks and show that the DBT method learns rich234 features essential for face recognition. We also discover an important observation that current state-235 of-the-art (SOTA) face recognition models are very sensitive to non-face data, in particular, animal236 faces. Fig. 4 shows a few examples of misidentified faces and their corresponding animal distractors237 from the IJB-B dataset using the ArcFace (Deng et al. (2019)) model. We show that our DBT method238 not only improves the verification accuracy but also implicitly tackles this robustness issue of current239 models against non-face data. Implementation details are provided in the appendix.240\nWe consider the PEF discussed in Section 4 to be the first three layers of the model and DDF to be241 the rest of layers. Ablation studies on the choice of PEF and DDF are provided in the supplementary242 material. We define LMC in (2) as the SOTA ArcFace loss function proposed in (Deng et al. (2019)).243 The ABC-noise is a non-face dataset containing 500K images that we collected from background244 patches of MS1MV2 (Guo et al. (2016)) (More details in Appendix). We experimented with two245 different loss functions for LABC. For the first one, since popular face recognition models (Deng et al.246 (2019); Wang et al. (2018)) use normalized output features and compute the losses on a hypersphere,247 we select LABC as follows. Let pf ∈ Rd and pnf ∈ Rd denote the prototypes for faces and non-248 faces, respectively. Following (Mettes et al. (2019)), we constrain the face/non-face prototypes on249 diametrically opposite directions i.e cos(θpfpnf ) = −1 and normalize the output feature vectors for250 faces and non-faces such that ‖pfi‖ = ‖pnfi‖ = 1. We then define the LABC as,251\nLABC = − 1\nN N∑ i=1 log ( es(cos(m1θyi+m2)−m3) es(cos(m1θyi+m2)−m3) + es cos θ2 ) + 1 N N∑ i=1 (−1− |pfi .pnfi |) 2, (3)\nwhere θyi and θ2 correspond to the angles between the weights and the features for face and non-face252 labels, respectively; m1,m2,m3 are the angular margins; s denotes the radius of the hypersphere. For253 the second choice, we use simple binary cross entropy for LABC. Table 1 shows that the verification254 accuracy on LFW (Huang et al. (2007)) using (3) is 0.16% higher than simple cross entropy loss. This255 also shows that choosing a task-specific LABC is essential in obtaining more accurate results. 
We use Eqn. (3) as the default for $L_{ABC}$ in all our face recognition experiments, unless otherwise stated. Table 3 compares the verification accuracy of our method versus the current SOTA method ArcFace on five different test sets: LFW (Huang et al. (2007)), CPLFW (Zheng & Deng (2018)), CALFW (Zheng et al. (2017)), CFP-FP (Sengupta et al. (2016)) and AgeDb-30 (Moschoglou et al. (2017)). For the LFW test set, we follow the unrestricted-with-labeled-outside-data protocol to report the performance. We trained ResNet-50 and ResNet-100 using the ArcFace and DBT approaches on CASIA (small) and MS1MV2 (large) datasets, respectively. The results show that the DBT method outperforms ArcFace on all datasets. Table 7 shows the angle statistics of the trained ArcFace and DBT models on the LFW dataset. Min. Inter and Inter refer to the mean of the minimum angles and the mean of all angles between the template embedding features of different classes (the mean of the embedding features of all images for each class), respectively. Intra refers to the mean of angles between $x_i$ and the template embedding feature for each class. From Table 7, we infer that DBT extracts better face features and hence reduces the intra-class variations. Directly from Tables 3 and 7, we infer that, first, DBT consistently improves the accuracy on all test sets. Second, learning better features in the early layers is crucial to obtaining rich face feature embeddings. Third, the achieved gain using DBT is more pronounced on models trained using the smaller (CASIA) dataset (it has fewer identities and images). This shows that DBT can address the issue of the lack of in-domain data using cheap ABC-noise data.\nWe also provide the results of training Inception-ResNet-V1 and ResNet-64 models using DBT on MS1MV2 to show the generalization capacity of the DBT method. For the Inception-ResNet-V1 and ResNet-64, the PEF is set to be the first six layers and the DDF is the rest of the model. We use the large margin cosine loss (LMCL) (Wang et al. (2018)) for $L_{MC}$ and cross entropy (CE) for $L_{ABC}$. Table 4 shows the verification accuracy on LFW for Inception-ResNet-V1 and ResNet-64 models trained on MS1MV2 with and without DBT. The results show that the DBT method is independent of model depth, architecture, or loss function and thereby consistently improves the accuracy compared to the baseline results. Table 4 also compares the DBT method with state-of-the-art methods on the LFW and YTF datasets. The DBT method notably improves the baselines, which are comparable to ArcFace and superior to all the other methods. We were not able to reproduce the results of the ArcFace paper using our Tensorflow implementation and dataset. We believe that using the original implementation and dataset from ArcFace will achieve superior results over the baselines on the benchmark datasets, as evident from the results of our implementation. Finally, we compare the results of ArcFace and DBT on IJB-B and IJB-C in Table 5. It is seen that DBT provides a notable boost on both IJB-B and IJB-C by a considerable margin. DBT improves the verification accuracy by as much as 1.94% on IJB-B and 2.57% on IJB-C at a $10^{-4}$ false alarm rate (FAR). We plot the receptive fields of the top ten maximally activated neurons of an intermediate layer of the face recognition model to visualize the features learned using the DBT method.
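The visualization can be reproduced with a standard gradient-based probe; the sketch below is our own approximation of such a procedure (not necessarily the authors' exact method): hook an intermediate layer, pick the k most strongly activated channels, and backpropagate each one's peak activation to localize the input region it responds to.

```python
import torch

def top_unit_receptive_fields(model, layer, images, k=10):
    """Rough input-saliency masks for the k most-activated channels of `layer`."""
    store = {}
    handle = layer.register_forward_hook(lambda mod, inp, out: store.update(feats=out))
    x = images.clone().requires_grad_(True)
    model.eval()
    model(x)
    handle.remove()
    feats = store["feats"]                                   # (B, C, H', W') feature maps
    peak = feats.detach().flatten(2).max(-1).values.mean(0)  # per-channel peak response
    masks = []
    for c in peak.topk(k).indices:                           # top-k maximally activated units
        x.grad = None
        feats[:, c].max().backward(retain_graph=True)        # gradient of the strongest activation
        masks.append(x.grad.abs().sum(1))                    # saliency over input pixels (B, H, W)
    return torch.stack(masks)                                # one mask per selected unit
```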
Fig. 3 shows that the receptive fields of layer 15 of the Inception-ResNet-V1 model trained using DBT attend to the regions of the eyes, nose, and mouth, as compared to insignificant regions in the normal training method. This shows that DBT learns more discriminative features essential to face recognition, corroborating our theoretical claims.\nTo show that current SOTA models are not robust to animal faces, we performed a 1:N identification experiment with approximately 3000 animal distractors on the IJB-B (Whitelam et al. (2017)) dataset. We trained the face recognition model with about 500K non-face data points, which contain 200 animal faces. This set is disjoint from the 3000 distractors used in the identification experiment. We collected the animal faces from web images using the MTCNN (Zhang et al. (2016a)) face detector; they are the false positives from the face detector. Table 2 shows the Rank-1 identification accuracy on the IJB-B dataset of ResNet-100 trained on MS1MV2 using the ArcFace loss (ResNet-100-AF) versus our DBT approach (ResNet-100-DBT). The third column of Table 2 denotes the accuracy on a hard subset of images (false positives from the ArcFace model) on the IJB-B dataset, denoted by H-set. The results of Table 2 show that current face recognition models are unable to discriminate out-of-distribution (non-face) images from face images. Our ResNet-100-DBT significantly (by as much as 21%) reduces the misidentification rate as compared to the ArcFace model, which shows that the DBT method inherently overcomes this issue while also improving face recognition accuracy.\nIMAGE CLASSIFICATION\nIn this section, we evaluate ResNet-110 and ResNext-101 models trained with and without DBT on the image classification problem using CIFAR-10, CIFAR-100, and ImageNet. We also show the power of DBT to compensate for a smaller in-domain training set. For all implementations, PEF is defined to be the first three layers and DDF is the rest of the model. $L_{ABC}$ and $L_{MC}$ are set to the cross-entropy loss. The ABC-noise is the same data used in the face recognition experiments. We follow the same training configurations as (He et al. (2016); Xie et al. (2017)).\nTo study the efficacy of the DBT method in augmenting smaller in-domain training datasets, we also trained ResNet-110 and ResNext-101 using partial training data on CIFAR-10 and CIFAR-100. We randomly selected a fraction of the training data to be our training set; e.g., k/5 of the dataset means that we only used k fifths of the total samples for training. From the first row of Table 8, we find that models trained with DBT show 0.59% and 0.35% improvement on CIFAR-10, and 0.62% and 1.45% improvement on CIFAR-100, over the baseline models for the ResNet-110 and ResNext-101 architectures, respectively. Furthermore, using partial training data with our DBT method achieves superior results (as high as 5.49% on ResNext (1/5) CIFAR-100) as compared to normal training. Table 6 shows the results on ImageNet. We see that DBT improves the Top-1 accuracy by 0.28%. This shows that the DBT method consistently improves the results on both small and large datasets.\nSPEAKER IDENTIFICATION\nWe consider the problem of speaker identification using the VGG-M (Chatfield et al. (2014)) model. We set PEF as the first two CNN layers and DDF as the remaining CNN layers. $L_{ABC}$ and $L_{MC}$ are defined to be the cross-entropy loss. The ABC-noise is generated from the silence intervals of VoxCeleb (Nagrani et al. (2017)) augmented with Gaussian noise with variance one. The input to the model is the short-time Fourier transform of the speech signals, computed with a Hamming sliding window of width 25 ms and step 10 ms.
The ABC-noise is generated from the silence intervals of VoxCeleb (Nagrani et al. (2017)), augmented with Gaussian noise with variance one. The input to the model is the short-time Fourier transform of the speech signals with a Hamming sliding window of width 25 ms and step 10 ms. Table 9 provides the accuracies of the VGG-M model trained with and without DBT on the VoxCeleb, Librispeech (Panayotov et al. (2015)), VCTK (Veaux et al. (2016)) and ELSDR (L. (2004)) datasets. Table 9 shows that the models trained using DBT significantly improve the accuracy (by as much as 5.62%) on all datasets. Implementation details are provided in the appendix.

MISCELLANEOUS EXPERIMENTS

In this section, we experiment with the naive way of using background data by considering non-faces as a separate class in the final classification layer. For face recognition, Table 11 shows the results of training with an additional background class on the MS1MV2 dataset with and without using DBT. ResNet+mod refers to a model trained with the ArcFace loss and n + 1 classes, where the additional class corresponds to non-faces. ResNet-DBT+mod refers to a model trained with both DBT and the additional non-face class. We find that adding the additional non-face class hurts the performance of the model, whereas ResNet-DBT+mod improves the results significantly relative to the ResNet+mod model. Since the non-face dataset is sampled from a wider range of distributions than faces, it has a larger range of unknown parameters, so its sufficient statistic should be larger than the sufficient statistics of the face data. Thus, when we restrict faces and non-faces to the surface of a hypersphere, the non-face data are more spread out on the surface compared with each of the face classes. We demonstrate this effect with a toy example in Fig. 6 in the appendix. We also conduct this experiment on CIFAR-10/CIFAR-100 and report the results in Table 10. We see that naively incorporating the background class is inferior to DBT, showing that DBT is an effective technique for utilizing background data to boost the performance of classification models.

6 CONCLUSION

In this paper, we presented a detailed theoretical analysis of the dual relationship between estimating the unknown pdf parameters and classification accuracy. Based on the theoretical study, we presented a new method called DBT that uses ABC-noise data to improve in-distribution classification accuracy. We showed that using ABC-noise data helps in better estimation of the unknown parameters of the pdf of the input data, and thereby improves the feature representations and consequently the accuracy on image classification, speaker classification, and face recognition benchmarks. It also augments the training data when only limited labeled data are available. We showed that the concept of DBT is generic and generalizes well across domains through extensive experiments using different model architectures and datasets. Our framework is complementary to existing training methods and hence can be easily integrated with current and possibly future classification methods to enhance accuracy. In summary, the proposed DBT method is a powerful technique that can augment limited training data and improve classification accuracy in deep neural networks.

REFERENCES

M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, et al.
TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

Philip Bachman, R. Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15535–15545, 2019.

T. L. Berg, A. C. Berg, J. Edwards, and D. A. Forsyth. Who's in the picture. NeurIPS, 2004.

David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. ICLR, 2019.

Ben-Zion Bobrovsky, E. Mayer-Wolf, and M. Zakai. Some classes of global Cramér–Rao bounds. The Annals of Statistics, pp. 1421–1438, 1987.

Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the details: Delving deep into convolutional nets. BMVC, 2014.

Ernesto Conte, Antonio De Maio, and Giuseppe Ricci. GLRT-based adaptive detection algorithms for range-spread targets. IEEE Transactions on Signal Processing, 49(7):1336–1348, 2001.

J. Deng, J. Guo, X. Niannan, and S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Computer Vision and Pattern Recognition (CVPR), 2019.

Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. John Wiley & Sons, 2012.

Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015.

F. F. Li, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In CVPR Workshop, pp. 178–178, 2004.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. ICLR, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. NeurIPS, pp. 2672–2680, 2014.

Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. ECCV, 9907:87–102, 2016.

Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Vision and Pattern Recognition, pp. 770–778, 2016.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report, 2007.

Aapo Hyvärinen. Survey on independent component analysis. 1999.

S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning, 37:448–456, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.

Feng L. Speaker recognition, informatics and mathematical modelling. Technical University of Denmark, DTU, 2004.

C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. Proceedings of Machine Learning Research (PMLR), 38:562–570, 2015.

Youngjo Lee, John A. Nelder, and Yudi Pawitan.
Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood, volume 153. CRC Press, 2018.

E. L. Lehmann and G. Casella. Theory of Point Estimation, 2nd ed., 1998.

Erich L. Lehmann and George Casella. Theory of Point Estimation. Springer Science & Business Media, 2006.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pp. 740–755. Springer, 2014.

Bernard Lindgren. Statistical Theory. Routledge, 2017.

B. Maze, J. Adams, J. A. Duncan, N. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, and P. Grother. IARPA Janus Benchmark-C: Face dataset and protocol. International Conference on Biometrics, pp. 158–165, 2018.

P. Mettes, E. van der Pol, and C. Snoek. Hyperspherical prototype networks. NeurIPS, 2019.

S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou. AgeDB: the first manually collected, in-the-wild age database. CVPR Workshop, 2(3):5, 2017.

Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. VoxCeleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612, 2017.

M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. ECCV, 2016.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. CVPR, pp. 1717–1724, 2014.

V. Panayotov, G. Chen, D. Povey, and S. Khudanpur. Librispeech: an ASR corpus based on public domain audio books. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210, 2015.

B. L. S. Prakasa Rao. Cramér–Rao type integral inequalities for estimators of functions of multidimensional parameter. Sankhyā: The Indian Journal of Statistics, Series A, pp. 53–73, 1992.

Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. CVPR Workshops, 2014.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F. F. Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. CVPR, 2016.

S. Sengupta, J. Chen, C. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs. Frontal to profile face verification in the wild. In Winter Conference on Applications of Computer Vision (WACV), pp. 1–9, 2016.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, 2014.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CVPR, pp. 1–9, 2015.

S. Teerapittayanon, B. McDanel, and H. T. Kung. BranchyNet: Fast inference via early exiting from deep neural networks. ICPR, 2016.

Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald, et al. Superseded-CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit. University of Edinburgh, The Centre for Speech Technology Research (CSTR), 2016.
H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu. CosFace: Large margin cosine loss for deep face recognition. CVPR, pp. 5265–5274, 2018.

C. Whitelam, E. Taborsky, A. Blanton, B. Maze, J. Adams, T. Miller, N. Kalka, A. K. Jain, J. A. Duncan, K. Allen, J. Cheney, and P. Grother. IARPA Janus Benchmark-B face dataset. CVPR Workshops, pp. 592–600, 2017.

L. Wolf, T. Hassner, and I. Maoz. Face recognition in unconstrained videos with matched background similarity. CVPR, pp. 529–534, 2011.

S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. CVPR, pp. 5987–5995, 2017.

D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv, abs/1411.7923, 2014.

J. Yosinski, J. Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? NIPS, 2014.

Ofer Zeitouni, Jacob Ziv, and Neri Merhav. When is the generalized likelihood ratio test optimal? IEEE Transactions on Information Theory, 38(5):1597–1602, 1992.

K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multi-task cascaded convolutional networks. Signal Processing Letters, 23(10):1499–1503, 2016a.

L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma. Be your own teacher: Improve the performance of convolutional neural networks via self distillation. ICCV, 2019a.

Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. SCAN: A scalable neural networks framework towards compact and efficient models. NeurIPS, 2019b.

Richard Zhang, Phillip Isola, and Alexei Efros. Colorful image colorization. ECCV, 2016b.

T. Zheng and W. Deng. Cross-Pose LFW: A database for studying cross-pose face recognition in unconstrained environments. Technical Report, Beijing University of Posts and Telecommunications, 2018.

T. Zheng, W. Deng, and J. Hu. Cross-Age LFW: A database for studying cross-age face recognition in unconstrained environments. arXiv, abs/1708.08197, 2017.

APPENDIX

IN-DOMAIN FAMILY OF PDFS AND THE EXTENDED FAMILY OF DISTRIBUTIONS

In this section, we discuss background/noise and in-domain data points and their corresponding distributions, to clarify the definition of these concepts in this paper. Consider a random vector denoted by s. Assume that the corresponding distribution is Gaussian with mean and variance given by α ≠ 0 and σ = 1, respectively. Now, assume that we observe x = s + n, where the pdf of n is assumed to be Gaussian with zero mean and variance σ_n^2; hence the pdf of x is Gaussian with mean α and variance 1 + σ_n^2. Here, n is the background or noise data, and the vector of unknowns is given by θ = [α, σ_n^2]. The in-domain family of pdfs for x is then given by P_x = {N(α, 1 + σ_n^2) | α ≠ 0, σ_n^2 > 0}. If we include the family of pdfs of n in P_x, then we can extend P_x as P = {N(α, 1 + σ_n^2) | α ∈ R, σ_n^2 > 0}. So P is the union of the family of pdfs of the in-domain data points and of the noise/background data. From estimation theory, we know that the sufficient statistics and the unknown parameters of P can also represent the sufficient statistics and the unknown parameters of P_x.
In other words, an estimate of α can help us detect whether the observed data point came from s + n or from n by comparing it with a threshold. Thus, estimating the unknown parameters of the family of pdfs using P can provide more information about the observed data, useful for tasks such as classification.

In general, we can assume that a generalized family of pdfs is given by the family of pdfs of the noise or background together with the family of pdfs of the in-domain data. Hence, estimating from the extended family of distributions can provide more information about the in-domain distribution. Let the pdf of the in-domain data points be given by p_x(x, [θ_s, θ_n]) and the pdf of the noise/background by p_n(x, θ_n); the extended pdf can then be represented by

h(p_n(x, θ_n), p_x(x, [θ_s, θ_n])),

where h is a function that combines the two pdfs in a general structure. A general family of distributions can thus be denoted as

P = { h(p_n(x, θ_n), p_x(x, [θ_s, θ_n])) | θ := [θ_s, θ_n] ∈ Θ_{s,n} },

where θ is defined as a new set of parameters in a higher dimension and Θ_{s,n} is the set of all possible [θ_s, θ_n] belonging to p_n and p_x. The extended family of pdfs provides more information about the nuisance parameters of the pdf of the in-domain data points. Inspired by this observation, we develop our detection booster training method using background/noise data. Figure 5 shows an example of background and in-domain data points.

PROOF OF THEOREM 1

Let t_i(·) denote a deterministic discriminative function of Θ_i. Since the efficient minimum-variance unbiased estimator of t_i(θ) exists, we have

∂ ln(p(x, θ)) / ∂t_i(θ) = I_{t_i(θ)} (t̂_i(x) − t_i(θ)),   (4)

where t̂_i(x) is the minimum-variance unbiased estimate of t_i(θ) using the data point x, and I_{t_i(θ)} is the Fisher information of t_i(θ), given by

I_{t_i(θ)} = (∂t_i(θ)/∂θ)^T I(θ) (∂t_i(θ)/∂θ) ≥ 0,

where ^T denotes the transpose and I(θ) is the Fisher information matrix of θ. Now we show that the log-likelihood ratio is an increasing function of t̂_i(x). Note that I_{t_i(θ)} ≥ 0 (Lehmann & Casella (2006)).

On the other hand, we have d ln(p(x, θ)) = Σ_j [∂ ln(p(x, θ))/∂θ_j] dθ_j, therefore

ln(p(x, θ)) + k(x) = Σ_j ∫ [∂ ln(p(x, θ))/∂θ_j] dθ_j = Σ_j ∫ [∂ ln(p(x, θ))/∂t_i(θ)] [∂t_i(θ)/∂θ_j] dθ_j = ∫ [∂ ln(p(x, θ))/∂t_i(θ)] Σ_j [∂t_i(θ)/∂θ_j] dθ_j = ∫ I_{t_i(θ)} (t̂_i(x) − t_i(θ)) Σ_j [∂t_i(θ)/∂θ_j] dθ_j = α(θ) t̂_i(x) − β(θ),   (5)

where the third equality follows from the third property of t_i(·) in its definition, the fourth equality follows by substituting (4), and k(x) is the constant of integration. The last equality is obtained by defining

α(θ) := ∫ I_{t_i(θ)} Σ_j [∂t_i(θ)/∂θ_j] dθ_j,   β(θ) := ∫ I_{t_i(θ)} t_i(θ) Σ_j [∂t_i(θ)/∂θ_j] dθ_j.   (6)

Thus dα(θ)/dt_i(θ) = I_{t_i(θ)} ≥ 0, i.e., α(θ) is increasing in t_i(θ). Since t_i is a deterministic discriminative function of Θ_i, for each j ≠ i and θ_i ∈ Θ_i and θ_j ∈ Θ_j we have t_i(θ_i) > t_i(θ_j), and therefore α(θ_i) ≥ α(θ_j), by the increasing property of α(θ) with respect to t_i(θ).

Using (5), the log-likelihood ratio of class i against the rest of the classes is LLR := ln(p(x, θ_i)) − ln(p(x, θ_j)), so LLR = (α(θ_i) − α(θ_j)) t̂_i(x) − (β(θ_i) − β(θ_j)). LLR depends on x only via t̂_i(x), and since for each j ≠ i, θ_i ∈ Θ_i and θ_j ∉ Θ_i, α(θ_i) − α(θ_j) > 0, LLR is increasing in t̂_i(x).
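To make the monotonicity claim of Theorem 1 concrete, the following is a minimal numerical sketch of our own (not part of the paper's experiments), assuming a Gaussian family with known variance, where the sample mean is the minimum-variance unbiased estimate of the mean. The names and parameter values here are illustrative.

```python
import numpy as np

# Two classes modelled as Gaussians with different means mu_i, mu_j (assumed values).
# Theorem 1 predicts the log-likelihood ratio is increasing in the sufficient
# statistic t_hat(x); for this family it is in fact affine in the sample mean.
mu_i, mu_j, sigma, n = 1.0, -1.0, 1.0, 10

def log_likelihood(x, mu):
    return -0.5 * np.sum((x - mu) ** 2) / sigma**2 - 0.5 * len(x) * np.log(2 * np.pi * sigma**2)

rng = np.random.default_rng(0)
t_hats, llrs = [], []
for _ in range(2000):
    x = rng.normal(rng.uniform(-2, 2), sigma, size=n)   # observation with a random true mean
    t_hats.append(x.mean())                             # sufficient statistic t_hat(x)
    llrs.append(log_likelihood(x, mu_i) - log_likelihood(x, mu_j))

# Sorting by t_hat, the LLR should be (numerically) non-decreasing.
order = np.argsort(t_hats)
assert np.all(np.diff(np.array(llrs)[order]) > -1e-8)
print("LLR is monotonically increasing in the sufficient statistic.")
```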
PROOF OF THEOREM 2

The probability of a true positive for class i under r_j is given by

P_tp,i,j = Pr_θ(h_j(x) > τ) = 1 − F_θ^j(τ),

where F_θ^j(·) denotes the cumulative distribution function (CDF) of h_j. Since the true positive probability of class i under r_1 is greater than that under r_2 for all τ, we have F_θ^1(τ) < F_θ^2(τ) for all τ. Now define u(τ, θ) := F_θ^2(τ) − F_θ^1(τ). Since the CDFs are increasing in τ and tend to 1, and the number of inflection points of these CDFs is bounded, there is an h_min such that, for τ > h_min, u(τ, θ) is a monotonically decreasing function of τ. Thus, for any θ that satisfies d(θ) > h_min, we have

u(d(θ) + ε, θ) < u(d(θ) − ε, θ).

Substituting u(h, θ) = F_θ^2(h) − F_θ^1(h) into the last inequality, we have

F_θ^2(d(θ) + ε) − F_θ^1(d(θ) + ε) < F_θ^2(d(θ) − ε) − F_θ^1(d(θ) − ε)   (7)
⇒ F_θ^2(d(θ) + ε) − F_θ^2(d(θ) − ε) < F_θ^1(d(θ) + ε) − F_θ^1(d(θ) − ε).   (8)

Based on the definition of the CDF, we then have

Pr_θ(|h_2(x) − d(θ)| < ε) = Pr_θ(d(θ) − ε < h_2(x) < d(θ) + ε) < Pr_θ(d(θ) − ε < h_1(x) < d(θ) + ε) = Pr_θ(|h_1(x) − d(θ)| < ε).   (9)

PROOF OF THEOREM 3

First, we prove the following claim.

Claim: For any open set, there exists a countable set of disjoint open balls whose union equals the original open set.

Proof of claim: Consider an open set O, and x_0 ∈ O such that B(x_0, r_0) ⊆ O and r_0 is the greatest possible radius among all open balls contained in O, where B(x_0, r_0) is the open ball of radius r_0 centred at x_0. Now define x_1 ∈ O − B̄(x_0, r_0), where B̄(x_0, r_0) is the closure of B(x_0, r_0), as the point admitting the greatest radius in O − B̄(x_0, r_0), and similarly x_i ∈ O − ∪_{k=0}^{i−1} B̄(x_k, r_k) such that B(x_i, r_i) has the greatest radius in O − ∪_{k=0}^{i−1} B̄(x_k, r_k). Then O = ∪_{k=0}^∞ B(x_k, r_k): if this equality did not hold, there would exist an open ball in O − ∪_{k=0}^∞ B(x_k, r_k), so another open ball with greatest radius would be added to ∪_{k=0}^∞ B(x_k, r_k), which contradicts the definition of ∪_{k=0}^∞ B(x_k, r_k). This proves the claim.

Now, we show that the true positive probability of r_1 is greater than that of r_2. Let Θ′_m be the set of interior points of Θ_m; then there exists a union of disjoint open balls such that Θ′_m = ∪_{k=0}^∞ B(x_k, r_k). From the assumptions in the theorem, we have Pr(‖θ̂_1 − θ‖ ≤ ε) ≥ Pr(‖θ̂_2 − θ‖ ≤ ε), and thus

Pr_θ(θ̂_1 ∈ B(x_k, r_k)) ≥ Pr_θ(θ̂_2 ∈ B(x_k, r_k)),

where θ ∈ Θ_m. Based on the claim, we have

Pr_θ(θ̂_1 ∈ Θ′_m) ≥ Pr_θ(θ̂_2 ∈ Θ′_m).   (10)

Moreover, based on the definition of r_i, the true positive probability of class m is given by

p_tp,i = Pr_θ(θ̂_i ∈ Θ_m) = Pr_θ(θ̂_i ∈ Θ′_m) + Pr_θ(θ̂_i ∈ Θ_m − Θ′_m),

for i = 1, 2. Additionally, from the Cauchy–Schwarz inequality, we have

Pr_θ(θ̂_i ∈ Θ_m − Θ′_m) ≤ µ_l(Θ_m − Θ′_m) = 0,

so p_tp,i = Pr_θ(θ̂_i ∈ Θ′_m), and from (10) the true positive probability of class m under r_1 is greater than that under r_2.

The error probability of r_j is given by p_er,j = 1 − Σ_{i=1}^n P_i P_tp,i,j, where P_i is the prior probability of class i. Therefore, p_er,1 ≤ p_er,2.

CONNECTING THE THEOREMS WITH THE PROPOSED METHOD

Fig. 6 shows the connection between the proposed theorems and the approach. In part 1, Theorem 2 connects the estimation of unknown parameters to the auxiliary classifier. In part 2, the learned features are passed to a decision making network (result of Theorem 2).
In part 3, Theorem 3 guarantees that the multi-class classifier outperforms other classifiers, because it uses the features from a better estimate of the unknown parameters of the pdf.

TOY EXAMPLE:

We demonstrate the effect of adding a background class to the original classifier with a toy example, visualized in Fig. 7. In this example, the input is a sequence of binary bits (+1 and −1) of length 3 in white Gaussian noise. The classifier is constructed using two fully connected layers with sigmoid activations, and the last layer is normalized onto the unit circle. As seen from Fig. 7, adding an additional noise class visibly reduces the feature separation between all the other classes.

IMPLEMENTATION DETAILS

FACE RECOGNITION

We use Tensorflow (Abadi et al. (2015)) to conduct all our experiments. We train with a batch size of 256 on two NVIDIA Tesla V100 (32G) GPUs. We train our models following the small (fewer than 1M training images) and large (more than 1M training images) protocol conventions. We use the CASIA-Webface (Yi et al. (2014)) dataset for the small protocol and the MS1MV2 dataset for the large protocol. We use ResNet-50 (He et al. (2016)) and ResNet-100 models for the small and large protocols, respectively. The PEF is selected as the first three layers. Following (Deng et al. (2019)), we apply BN (Ioffe & Szegedy (2015)) and dropout (Srivastava et al. (2014)) to the last feature map layer, followed by a fully connected layer and batch normalization, to obtain the 512-D embedding vector. We set the feature scale parameter s to 64 following (Wang et al. (2018); Deng et al. (2019)) and set the margin parameters (m1, m2, m3) to (1, 0.5, 0), respectively. For the small scale protocol, we start the learning rate at 0.01 and divide it by 10 at 40K, 80K, and 100K iterations; we train for 120K iterations. For the large scale protocol, we start the learning rate at 0.01 and divide it by 10 at 80K, 100K, and 200K iterations; we train for 240K iterations. We use the Momentum optimizer with momentum 0.9 and weight decay 5e-4. We use the feature centre of all images from a template, or all frames from a video, in order to report the results on the IJB-B, IJB-C and YTF datasets. For the ABC-noise data, we cropped background image patches from the MS1MV2 (Guo et al. (2016)) dataset and cropped hard examples from the Caltech-101 (F. F. Li et al. (2004)) dataset, plus a few open sourced images (animal faces), using the MTCNN (Zhang et al. (2016a)) face detector. We generated roughly 500K non-face images for training the ABCs.

SPEAKER IDENTIFICATION

L2 loss and dropout with a rate of 0.2 are applied during training for generalization. The ABC-noise is collected from the silence intervals of the VoxCeleb dataset, where an energy-based voice activity detection (VAD) is applied to detect the silence intervals. To augment the ABC-noise, Gaussian noise is added to the silence intervals. The batch size is set to 64 and the optimizer is ADAM with a learning rate of 0.001. The VoxCeleb dataset is trained for 11 epochs and the other datasets are trained for 6 epochs.
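The paper does not provide code for this ABC-noise pipeline; the following is a minimal sketch, under our own assumptions about frame length, energy threshold and helper names (extract_abc_noise is hypothetical), of how an energy-based VAD could extract silence frames and augment them with unit-variance Gaussian noise.

```python
import numpy as np

def extract_abc_noise(waveform, sr=16000, frame_ms=25, hop_ms=10,
                      energy_percentile=10, noise_std=1.0, seed=0):
    """Energy-based VAD: frames whose short-time energy falls below a
    percentile threshold are treated as silence; Gaussian noise with the
    given standard deviation is then added to form ABC-noise. The frame/hop
    sizes and percentile threshold are illustrative choices, not the paper's."""
    frame, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    starts = np.arange(0, len(waveform) - frame, hop)
    energies = np.array([np.sum(waveform[s:s + frame] ** 2) for s in starts])
    threshold = np.percentile(energies, energy_percentile)
    silent = [waveform[s:s + frame] for s, e in zip(starts, energies) if e < threshold]
    rng = np.random.default_rng(seed)
    return [seg + rng.normal(0.0, noise_std, size=seg.shape) for seg in silent]

# Example: a one-second speech-like signal with quiet gaps at either end.
wav = np.concatenate([np.zeros(4000), np.sin(np.linspace(0, 800, 8000)), np.zeros(4000)])
abc_noise = extract_abc_noise(wav)
print(f"{len(abc_noise)} silence frames extracted as ABC-noise")
```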
LFW AND YTF DATASETS

The LFW database contains annotations for 5171 faces in a set of 2845 images taken from the Faces in the Wild data set (Berg et al. (2004)). YouTubeFaces (Wolf et al. (2011)) contains 3,425 videos of 1,595 people. Following the standard convention, we report the results on 5000 video pairs using the unrestricted with labeled outside data protocol.

IJB-B AND IJB-C DATASETS

The IJB-B dataset contains 1,845 subjects with 21.8K still images and 55K frames from 7,011 videos. In total, there are 12,115 templates with 10,270 genuine matches and 8M impostor matches. The IJB-C dataset (Maze et al. (2018)) is a further extension of IJB-B, having 3,531 subjects with 31.3K still images and 117.5K frames from 11,779 videos. In total, there are 23,124 templates with 19,557 genuine matches and 15,639K impostor matches.
2020
DBT: A DETECTION BOOSTER TRAINING METHOD FOR IMPROVING THE ACCURACY OF CLASSIFIERS
SP:27aca0420a1a3fa6cc3fdcef19d0ffcc02345a3c
[ "The authors propose as extension of the successor-representation approach to Grid cells. The paper shows that this model can generate several experimentally observed properties of grid cells, and can be used in navigation of novel/mutable environments. Overall, the work should be of interest to any ICLR attendees who engage in research surrounding grid cells." ]
Knowing how the effects of directed actions generalise to new situations (e.g. moving North, South, East and West, or turning left, right, etc.) is key to rapid generalisation across new situations. Markovian tasks can be characterised by a state space and a transition matrix and recent work has proposed that neural grid codes provide an efficient representation of the state space, as eigenvectors of a transition matrix reflecting diffusion across states, that allows efficient prediction of future state distributions. Here we extend the eigenbasis prediction model, utilising tools from Fourier analysis, to prediction over arbitrary translation-invariant directed transition structures (i.e. displacement and diffusion), showing that a single set of eigenvectors can support predictions over arbitrary directed actions via action-specific eigenvalues. We show how to define a "sense of direction" to combine actions to reach a target state (ignoring task-specific deviations from translation-invariance), and demonstrate that adding the Fourier representations to a deep Q network aids policy learning in continuous control tasks. We show the equivalence between the generalised prediction framework and traditional models of grid cell firing driven by self-motion to perform path integration, either using oscillatory interference (via Fourier components as velocity-controlled oscillators) or continuous attractor networks (via analysis of the update dynamics). We thus provide a unifying framework for the role of the grid system in predictive planning, sense of direction and path integration: supporting generalisable inference over directed actions across different tasks.
[ { "affiliations": [], "name": "Changmin Yu" }, { "affiliations": [], "name": "Timothy E.J. Behrens" }, { "affiliations": [], "name": "Neil Burgess" } ]
[ { "authors": [ "M. Abadi", "A. Agarwal", "P. Barham", "E. Brevdo", "Z. Chen", "C. Citro", "G.S. Corrado", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "I. Goodfellow", "A. Harp", "G. Irving", "M. Isard", "Y. Jia", "R. Jozefowicz", "L. Kaiser", "M. Kudlur", "J. Levenberg", "D. Mané", "R. Monga", "S. Moore", "D. Murray", "C. Olah", "M. Schuster", "J. Shlens", "B. Steiner", "I. Sutskever", "K. Talwar", "P. Tucker", "V. Vanhoucke", "V. Vasudevan", "F. Viégas", "O. Vinyals", "P. Warden", "M. Wattenberg", "M. Wicke", "Y. Yu", "X. Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org", "venue": null, "year": 2015 }, { "authors": [ "A.B. Baram", "T.H. Muller", "J.C. Whittington", "T.E. Behrens" ], "title": "Intuitive planning: global navigation through cognitive maps based on grid-like codes", "venue": "bioRxiv, page 421461,", "year": 2018 }, { "authors": [ "A.G. Barto", "R.S. Sutton", "C.W. Anderson" ], "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "venue": "IEEE transactions on systems, man, and cybernetics, (5): 834–846,", "year": 1983 }, { "authors": [ "R.N. Bracewell" ], "title": "The Fourier transform and its applications, volume 31999", "venue": "McGraw-Hill New York,", "year": 1986 }, { "authors": [ "G. Brockman", "V. Cheung", "L. Pettersson", "J. Schneider", "J. Schulman", "J. Tang", "W. Zaremba" ], "title": "Openai gym", "venue": "arXiv preprint arXiv:1606.01540,", "year": 2016 }, { "authors": [ "Y. Burak", "I.R. Fiete" ], "title": "Accurate path integration in continuous attractor network models of grid cells", "venue": "PLoS computational biology, 5(2),", "year": 2009 }, { "authors": [ "C.P. Burgess", "N. Burgess" ], "title": "Controlling phase noise in oscillatory interference models of grid cell firing", "venue": "Journal of Neuroscience, 34(18):6224–6232,", "year": 2014 }, { "authors": [ "N. Burgess" ], "title": "Grid cells and theta as oscillatory interference: theory and predictions", "venue": "Hippocampus, 18(12):1157–1174,", "year": 2008 }, { "authors": [ "D. Bush", "N. Burgess" ], "title": "A hybrid oscillatory interference/continuous attractor network model of grid cell firing", "venue": "Journal of Neuroscience, 34(14):5065–5079,", "year": 2014 }, { "authors": [ "D. Bush", "N. Burgess" ], "title": "Advantages and detection of phase coding in the absence of rhythmicity", "venue": "Hippocampus,", "year": 2020 }, { "authors": [ "D. Bush", "C. Barry", "D. Manson", "N. Burgess" ], "title": "Using grid cells for navigation", "venue": "Neuron, 87(3): 507–520,", "year": 2015 }, { "authors": [ "J.R. Climer", "E.L. Newman", "H.M. E" ], "title": "Phase coding by grid cells in unconstrained environments: two-dimensional phase precession", "venue": "European Journal of Neuroscience,", "year": 2013 }, { "authors": [ "D.S. Corneil", "W. Gerstner" ], "title": "Attractor network dynamics enable preplay and rapid path planning in maze–like environments", "venue": "Advances in neural information processing systems, pages 1684–1692,", "year": 2015 }, { "authors": [ "P. Dayan" ], "title": "Improving generalization for temporal difference learning: The successor representation", "venue": "Neural Computation, 5(4):613–624,", "year": 1993 }, { "authors": [ "Y. Dordek", "D. Soudry", "R. Meir", "D. 
Derdikman" ], "title": "Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis", "venue": "Elife, 5:e10094,", "year": 2016 }, { "authors": [ "V. Edvardsen", "A. Bicanski", "N. Burgess" ], "title": "Navigating with grid and place cells in cluttered environments", "venue": "Hippocampus,", "year": 2019 }, { "authors": [ "T. Eliav", "M. Geva-Sagiv", "M.M. Yartsev", "A. Finkelstein", "A. Rubin", "L. Las", "N. Ulanovsky" ], "title": "Nonoscillatory phase coding and synchronization in the bat hippocampal formation", "venue": "Cell, 175 (4):1119–1130,", "year": 2018 }, { "authors": [ "M.C. Fuhs", "D.S. Touretzky" ], "title": "A spin glass model of path integration in rat medial entorhinal cortex", "venue": "Journal of Neuroscience, 26(16):4266–4276,", "year": 2006 }, { "authors": [ "R. Gao", "J. Xie", "S.-C. Zhu", "Y.N. Wu" ], "title": "A representational model of grid cells based on matrix lie algebras", "venue": "arXiv preprint arXiv:2006.10259,", "year": 2020 }, { "authors": [ "T. Hafting", "M. Fyhn", "S. Molden", "M.-B. Moser", "E.I. Moser" ], "title": "Microstructure of a spatial map in the entorhinal cortex", "venue": "Nature, 436(7052):801–806,", "year": 2005 }, { "authors": [ "T. Hafting", "M. Fyhn", "T. Bonnevie", "M.-B. Moser", "E.I. Moser" ], "title": "Hippocampus-independent phase precession in entorhinal grid cells", "venue": "Nature, 453(7199):1248–1252,", "year": 2008 }, { "authors": [ "M.E. Hasselmo" ], "title": "Grid cell mechanisms and function: contributions of entorhinal persistent spiking and phase resetting", "venue": "Hippocampus, 18(12):1213–1229,", "year": 2008 }, { "authors": [ "A. Jeewajee", "C. Barry", "V. Douchamps", "D. Manson", "C. Lever", "B. N" ], "title": "Theta phase precession of grid and place cell firing in open environments", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2012 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "R. Kondor", "S. Trivedi" ], "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "venue": "arXiv preprint arXiv:1802.03690,", "year": 2018 }, { "authors": [ "E. Kropff", "A. Treves" ], "title": "The emergence of grid cells: Intelligent design or just adaptation? Hippocampus", "venue": null, "year": 2008 }, { "authors": [ "J. Krupic", "N. Burgess", "J. O’Keefe" ], "title": "Neural representations of location composed of spatially periodic bands", "venue": null, "year": 2012 }, { "authors": [ "E.A. Maguire", "N. Burgess", "J.G. Donnett", "R.S. Frackowiak", "C.D. Frith", "J. O’Keefe" ], "title": "Knowing where and getting there: a human navigation", "venue": null, "year": 1998 }, { "authors": [ "S. Mahadevan", "M. Maggioni" ], "title": "Proto-value functions: A laplacian framework for learning representation and control in markov decision processes", "venue": "Journal of Machine Learning Research, 8:2169–2231,", "year": 2007 }, { "authors": [ "B.L. McNaughton", "F.P. Battaglia", "O. Jensen", "E.I. Moser", "M.-B. Moser" ], "title": "Path integration and the neural basis of the’cognitive map", "venue": "Nature Reviews Neuroscience, 7(8):663–678,", "year": 2006 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A. Graves", "I. Antonoglou", "D. Wierstra", "M. 
Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "J. O’Keefe", "J. Dostrovsky" ], "title": "The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat", "venue": "Brain research,", "year": 1971 }, { "authors": [ "B. Peng", "X. Li", "J. Gao", "J. Liu", "K.-F. Wong", "S.-Y. Su" ], "title": "Deep dyna-q: Integrating planning for task-completion dialogue policy learning", "venue": "arXiv preprint arXiv:1801.06176,", "year": 2018 }, { "authors": [ "P. Piray", "N.D. Daw" ], "title": "A common model explaining flexible decision making, grid fields and cognitive control", "venue": "bioRxiv, page 856849,", "year": 2019 }, { "authors": [ "S.E. Qasim", "I. Fried", "J. Jacobs" ], "title": "Phase precession in the human hippocampus and entorhinal cortex", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "K.S. Riedel" ], "title": "A sherman–morrison–woodbury identity for rank augmenting matrices with application to centering", "venue": "SIAM Journal on Matrix Analysis and Applications, 13(2):659–662,", "year": 1992 }, { "authors": [ "B. Sorscher", "G. Mel", "S. Ganguli", "S. Ocko" ], "title": "A unified theory for the origin of grid cells through the lens of pattern formation", "venue": "Advances in Neural Information Processing Systems, pages 10003–10013,", "year": 2019 }, { "authors": [ "K.L. Stachenfeld", "M.M. Botvinick", "S.J. Gershman" ], "title": "The hippocampus as a predictive map", "venue": "Nature neuroscience, 20(11):1643,", "year": 2017 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "E.C. Tolman" ], "title": "Cognitive maps in rats and men", "venue": "Psychological review, 55(4):189,", "year": 1948 }, { "authors": [ "L. Von Fersen", "C.D. Wynne", "J.D. Delius", "J.E. Staddon" ], "title": "Transitive inference formation in pigeons", "venue": "Journal of Experimental Psychology: Animal Behavior Processes, 17(3):334,", "year": 1991 }, { "authors": [ "A.C. Welday", "I.G. Shlifer", "M.L. Bloom", "K. Zhang", "H.T. Blair" ], "title": "Cosine directional tuning of theta cell burst frequencies: evidence for spatial coding by oscillatory interference", "venue": "Journal of Neuroscience, 31(45):16157–16176,", "year": 2011 }, { "authors": [ "J.C. Whittington", "T.H. Muller", "S. Mark", "G. Chen", "C. Barry", "N. Burgess", "T.E. Behrens" ], "title": "The tolman-eichenbaum machine: Unifying space and relational memory through generalization in the hippocampal formation", "venue": "Cell,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "A \"cognitive map\" encodes relations between objects and supports flexible planning (Tolman [40]), with hippocampal place cells and entorhinal cortical grid cells thought to instantiate such a map (O’Keefe and Dostrovsky [32]; Hafting et al. [20]). Each place cell fires when the animal is near a specific location, whereas each grid cell fires periodically when the animal enters a number of locations arranged in a triangular grid across the environment. Together, this system could support representation and flexible planning in state spaces where common transition structure is preserved across states and tasks, affording generalisation and inference, e.g., in spatial navigation where Euclidean transition rules are ubiquitous (Whittington et al. [43]).\nRecent work suggests that place cell firing provides a local representation of state occupancy, while grid cells comprise an eigenbasis of place cell firing covariance (Dordek et al. [15]; Stachenfeld et al. [38]; Sorscher et al. [37]; Kropff and Treves [26]). Accordingly, grid cell firing patterns could be learned as eigenvectors of a symmetric (diffusive) transition matrix over state space, providing a basis set enabling prediction of occupancy distributions over future states. This “intuitive planning\" operates by replacing multiplication of state representations by the transition matrix with multiplication of each basis vector by the corresponding eigenvalue (Baram et al. [2]; Corneil and Gerstner [13]). Thus a distribution over state space represented as a weighted sum of eigenvectors can be updated by re-weighting each eigenvector by its eigenvalue to predict future state occupancy.\n∗Please send any enquiries to: changmin.yu.19@ucl.ac.uk and n.burgess@ucl.ac.uk\nFast prediction and inference of the common effects of actions across different environments is important for survival. Intuitive planning, in its original form, supports such ability under a single transition structure, most often corresponding to symmetrical diffusion (Baram et al. [2]). Here we show that a single (Fourier) eigenbasis allows representation and prediction under the many different directed transition structures corresponding to different “translation invariant\" actions (whose effects are the same across states, such as moving North or South or left or right in an open environment), with predictions under different actions achieved by action-specific eigenvalues. We define a “sense of direction\" quantity, i.e., the optimal combinations of directed actions that most likely lead to the goal, based on the underlying translation-invariant transition structure (e.g., ignoring local obstacles). We then show how this method could be adapted to support planning in tasks that violate translation invariance (e.g. with local obstacles), and show how adding these Fourier representations to a deep RL network improves performance in a continuous control task.\nWe propose that the medial entorhinal grid cells support this planning function, as linear combinations of Fourier eigenvectors and therefore eigenvectors themselves, and show how traditional models of grid cells performing path integration are consistent with prediction under directed actions. Hence we demonstrate that the proposed spectral model acts as a unifying theoretical framework for understanding grid cell firing." 
}, { "heading": "2 “INTUITIVE PLANNING\" WITH A SINGLE TRANSITION STRUCTURE", "text": "Intuitive planning represents the occupancy distribution over the state space as a weighted sum of the eigenvectors of a single transition matrix (usually corresponding to symmetric diffusion), so that the effect of one step of the transition dynamics on the distribution can be predicted by reweighting each of the eigenvectors by the corresponding eigenvalue. And this generalises to calculating the cumulative effect of discounted future transitions (Baram et al. [2]).\nSpecifically, consider a transition matrix, T ∈ RN×N , Tss′ = P(st+1 = s′|st = s) where st encodes the state at time t and N is the number of states. Then, Tn is the n-step transition matrix, and has the same set of eigenvectors as T . Specifically, the eigendecomposition of T and Tn are: T = QΛQ−1, Tn = QΛnQ−1 (1) where each column of the matrix Q is an eigenvector of T and Λ = diag(σP (T )), where σP (T ) is the set of eigenvalues of T . Similarly, any polynomial in T , p(T ), shares the same set of eigenvectors as T and the set of eigenvalues σP (p(T )) = p(σP (T )). Hence:\n∞∑ k=0 (γT )k = (I − γT )−1 = Qdiag(w)Q−1, where w = { 1 1− γλ , for λ ∈ σP (T ) } (2)\nThe resolvent form (Eq. 2) is an infinite discounted summation of transitions, which under a policy and transition structure corresponding to diffusion, is equivalent to the successor representation (SR, Fig. 1E) with discounting factor γ (Dayan [14]; Stachenfeld et al. [38]). See Mahadevan and Maggioni [29] for a related spectral approach using Fourier decomposition of T for estimating the value function. The SR has been shown to be useful for navigation via gradient ascent of the future probability of occupying the target state, and has a linear relationship with the true underlying Euclidean distances in spatial tasks (hence \"intuitive planning\", see Fig. 1 and Fig. 2D-E).\nThe eigenvectors of the diffusion transition matrix generally show grid-like patterns, suggesting a close relationship to grid cells. However, intuitive planning is restricted to predictions over a\nsingle transition structure, hence cannot flexibly adjust its predictions corresponding to the effects of arbitrary directed actions (i.e., variable asymmetric transition structure), hence cannot support the presumed role of grid cells in path integration.Moreover, predictions over different directed actions would require different eigendecompositions, hence incurring high computational costs that undermines its biological plausibility. In Section 3 we unify the prediction and path integration approaches by exploiting translation invariant symmetries to generalise across actions, using a single common eigenbasis and cheaply calculated updates via action-dependent eigenvalues." }, { "heading": "3 FLEXIBLE PLANNING WITH DIRECTED TRANSITIONS", "text": "Updating state representations to predict the consequences of arbitrary directed actions is an important ability of mobile animals, known as path integration and thought to depend on grid cells (McNaughton et al. [30]). To generalise the intuitive planning scheme to simultaneously incorporate arbitrary directed transition structures, we consider the transition dynamics corresponding to translation (drift) and Gaussian diffusion with arbitrary variance (including 0, equivalent to plain translation). 
{ "heading": "3 FLEXIBLE PLANNING WITH DIRECTED TRANSITIONS", "text": "Updating state representations to predict the consequences of arbitrary directed actions is an important ability of mobile animals, known as path integration and thought to depend on grid cells (McNaughton et al. [30]). To generalise the intuitive planning scheme to simultaneously incorporate arbitrary directed transition structures, we consider transition dynamics corresponding to translation (drift) and Gaussian diffusion with arbitrary variance (including 0, equivalent to plain translation). Our assumption that the transition structure is translation invariant (implying periodic boundary conditions) leads to circulant transition matrices.

Consider a 2D rectangular environment with length L and width W where each state is a node of the unit square grid. The transition matrix can then be represented by T ∈ R^{LW×LW}, with each row the vectorisation (vec(·)) of the matrix of transition probabilities starting from the specified location, i.e., T[j, :] = vec[P(s_{t+1} | s_t = j)], where T is constructed by considering the 2D state space as a 1D vector and concatenating the rows (j = xL + y for (x, y) ∈ [0, W − 1] × [0, L − 1]), see Fig. 2A. The transition matrix is circulant due to the translation invariance of the transition structure (see Appendix Prop. A.1), and takes the following form:

T = \begin{pmatrix} T_0 & T_{LW-1} & \cdots & T_2 & T_1 \\ T_1 & T_0 & T_{LW-1} & \cdots & T_2 \\ \vdots & T_1 & T_0 & \ddots & \vdots \\ T_{LW-2} & \cdots & \ddots & \ddots & T_{LW-1} \\ T_{LW-1} & T_{LW-2} & \cdots & T_1 & T_0 \end{pmatrix}   (3)

The normalised eigenvectors of the circulant matrix T ∈ R^{N×N} (N = LW) are the vectors of powers of the N-th roots of unity (the Fourier modes):

q_k = (1/√N) [1, ω_k, ω_k^2, · · · , ω_k^{N−1}]^T   (4)

where ω_k = exp(2πik/N), for k = 0, . . . , N − 1, and i = √−1. Hence the matrix with these eigenvectors as its columns, F = (q_0, q_1, . . . , q_{N−1}), is just the (inverse) discrete Fourier transform matrix (Bracewell [4]), where F_{kj} = ω_k^j for 0 ≤ k, j ≤ N − 1. The Fourier modes projected back onto the L × W 2D spatial domain are plane waves, as shown in Fig. 2G, with the wavevector determined by the value of k that specifies the direction and spatial frequency of each plane wave (see Appendix B). We can immediately compute the corresponding eigenvalues for the eigenvectors in Eq. 4 (equivalent to taking the discrete Fourier transform (DFT) of the first row (or column) of T, see Bracewell [4]):

λ_m = Σ_{j=0}^{N−1} T_j ω_j^m,   for m = 0, . . . , N − 1   (5)

where {T_0, . . . , T_{N−1}} are the N unique elements that fully specify the circulant matrix T (Eq. 3). We can then utilise tools from Fourier analysis for efficient updating of the eigenvalues whilst leaving the universal eigenbasis unaffected. For a transition matrix T_v corresponding to an arbitrary action (translation velocity) v = (v_x, v_y), each row of T_v is again a circulant, but shifted, version of the corresponding row vector of the symmetric transition matrix corresponding to zero drift velocity, T_0. Specifically, the first rows of the two matrices are related as follows:

T_v(k) = T_0(k + v_x L + v_y),   for k = 0, . . . , N − 1   (6)

Given the eigenvalues of T_0, Λ_0 = [λ_0^0, λ_1^0, . . . , λ_{N−1}^0] ∈ C^N (via the DFT of the first row of T_0, Eq. 5), we can immediately derive the eigenvalues of T_v, Λ_v, via a one-step update based on the Fourier shift theorem (Bracewell [4]), without recomputing the eigendecomposition:

Λ_v[k] = exp(2πi(v_x L + v_y)k / N) Λ_0[k],   for k = 0, . . . , N − 1, for arbitrary v;

i.e., Λ_v = Φ_{δ(v)} ⊙ Λ_0,   Φ_{δ(v)} = [1, ω_{δ(v)}, ω_{δ(v)}^2, . . . , ω_{δ(v)}^{N−1}],   where δ(v) = v_x L + v_y   (7)

This allows path integration by reweighting the common set of eigenvectors at each timestep by the updated eigenvalues corresponding to the current drift velocity (Eq. 7). Note that, additionally, T_0 can include diffusion; thus reweighting by the eigenvalues of the diffusive transition matrix also allows tracking of increasing uncertainty.
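The following sketch (our own illustration, assuming a small L×W torus and a toy diffusion kernel) verifies numerically that the eigenvalues of the circulant transition matrix are the DFT of its first row (Eq. 5), and that the eigenvalue update of Eq. 7 reproduces the eigenvalues of the shifted (directed) transition kernel without any new eigendecomposition:

```python
import numpy as np

L, W = 6, 5
N = L * W

# First row of a translation-invariant (circulant) transition kernel T0:
# probability mass on staying and on the four neighbours of state 0.
row0 = np.zeros(N)
row0[0] = 0.2
for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    row0[(dx % W) * L + (dy % L)] += 0.2   # index j = x*L + y on the torus

# Eigenvalues of a circulant matrix are the DFT of its first row (Eq. 5).
lam0 = np.fft.fft(row0)

# A directed action with velocity v shifts the kernel by delta = vx*L + vy (Eq. 6);
# the Fourier shift theorem gives the new eigenvalues directly (Eq. 7).
vx, vy = 1, 2
delta = vx * L + vy
k = np.arange(N)
lam_v = np.exp(2j * np.pi * delta * k / N) * lam0

# Check against the eigenvalues of the explicitly shifted first row.
row_v = np.roll(row0, -delta)              # T_v(k) = T_0(k + delta)
assert np.allclose(np.fft.fft(row_v), lam_v)
print("Eq. 7 recovers the directed-action eigenvalues from the diffusion ones.")
```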
Utilising the fixed eigenbasis (Eq. 4) and the respective eigenvalues (Eq. 7) for arbitrary transition structures, we can make efficient predictions of the distribution of future state occupancy under arbitrary actions (see Figs. 2B-C). Adding translation to the translation-invariant transition matrix does not change the set of eigenvectors, allowing one set of eigenvectors (Fourier modes) to support prediction for actions in all directions (or plain diffusion); hence prediction of the effects of directed actions can be efficiently generalised across environments.

Sense of Direction. We define a “sense of direction”, θ*, as the angle of the transitions (or the linear combination of the available actions in a non-spatial setting) that maximises the future probability of reaching the target state given an initial state, as modelled by the SR matrix:

θ^* = \arg\max_θ Σ_j exp[2πi(x_G − x_0) · k_j] / (1 − γ D_j exp[2πi v_θ · k_j])   (8)

where γ is the discounting factor, D_j, j = 1, · · · , LW, are the eigenvalues of the symmetric diffusion transition matrix, k_j, j = 1, . . . , LW, are the wavevectors of the j-th Fourier components, x_0, x_G are the coordinates of the start and goal states, and v_θ = (v cos(θ), v sin(θ)) represents the velocity (with speed v and head direction θ). We see that the “sense of direction” supports generalisation of predictions of the effects of actions across all environments with the same translation-invariant transition structure, i.e., such predicted effects ignore any local deviations from translation invariance. See Appendix B for the derivation of Eq. 8. Note that here we assume the goal state s_G is known a priori, e.g., we consider a problem where the animal is navigating towards a previously visited location. The derived analytical expression for the sense of direction can be retrieved via a lookup table when the state space is small and discrete, whereas in large or continuous state spaces it can be computed either via optimisation algorithms or modelled by a non-linear function approximator that represents Eq. 8. See Bush et al. [11] for neural network approaches to finding goal directions from grid representations.

We thus propose a computational role for the neural grid codes: generating a “sense of direction” (capturing the transition structure of the state space, ignoring the obstacles and boundaries) that reflects a global sense of orientation and allows generalisation to completely new environments.
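Eq. 8 can be evaluated directly by a grid search over candidate headings. The sketch below is our own minimal illustration; the diffusion spectrum D and the wavevector set are assumed toy values, not quantities from the paper's experiments.

```python
import numpy as np

def sense_of_direction(x0, xG, ks, D, gamma=0.9, speed=1.0, n_angles=360):
    """Grid-search approximation of Eq. 8: return the heading theta maximising
    the (real part of the) discounted future occupancy of the goal state.
    ks: (J, 2) wavevectors of the Fourier modes; D: (J,) diffusion eigenvalues."""
    disp = np.asarray(xG) - np.asarray(x0)
    best_theta, best_score = 0.0, -np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        v = speed * np.array([np.cos(theta), np.sin(theta)])
        terms = np.exp(2j * np.pi * ks @ disp) / (1.0 - gamma * D * np.exp(2j * np.pi * ks @ v))
        if terms.sum().real > best_score:
            best_theta, best_score = theta, terms.sum().real
    return best_theta

# Toy usage on a 10x10 torus: all modes, with an assumed smooth diffusion spectrum.
Lg = 10
freqs = np.array([(kx, ky) for kx in range(Lg) for ky in range(Lg)])
ks = freqs / Lg
f_alias = np.minimum(freqs, Lg - freqs)           # aliased (symmetric) frequencies
D = 0.95 ** np.linalg.norm(f_alias, axis=1)       # assumed diffusion eigenvalues
theta = sense_of_direction(x0=(1, 1), xG=(4, 5), ks=ks, D=D)
print(f"sense of direction: {np.degrees(theta):.1f} degrees")
```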
Flexible Planning & Application Beyond Translation-Invariant Structures. The proposed model can be applied to flexible planning under arbitrary drift velocities, as demonstrated in Fig. 3 (A-E). An agent is trying to navigate towards a goal state in a windy grid world. Navigation is performed by following the ascending “gradient” of the SR for occupancy of the target state (the resolvent metric, Eq. 2). The SR computed from the transition matrix including the effects of diffusion and wind (Fig. 3B), based on our analysis (Eq. 7), leads straight to the target (Fig. 3C).

Given the analytical expression for the SR (Eq. 2), we can efficiently adjust the SR matrix to accommodate local changes in the state space, e.g., the insertion of a barrier, using the Woodbury inversion formula to update the parts of the SR matrix affected by the local obstacles (see Appendix A.3 for derivations [34]); in this case, the agent again correctly adjusts for the wind as well as taking the shortest path around the inserted wall (Fig. 3D-E).

We note, however, that the proposed model is also able to solve tasks without periodic boundary conditions, by considering the original task state space, S_0, as embedded in a larger, periodically bounded pseudo state space S_p, at least twice as large in each dimension as S_0 (Fig. 3F). We again follow the previous procedures, utilising the Fourier modes, this time computed on S_p, to perform predictions in S_0 (Fig. 3F-G), and the performance is unaffected. Note that under such a formulation the underlying transition structures can be applied to environments with both periodic and non-periodic boundary conditions, allowing sense-of-direction planning in either case.

Path Integration. We can also use our model for path integration (see also Section 4) in S_0, by taking velocity inputs (given any path in the grid world) to update the state occupancy distribution (Eq. 7). Path integration performance is strongly correlated with the degree of uncertainty (i.e., the diffusion strength caused by self-motion noise in addition to translations). This is indeed captured by our model (Fig. 3H), with perfect path integration up to 1000 time steps when the uncertainty is low (the discretisation of the state space means that uncertainty below 0.075 has no effect), and monotonically increasing path integration error when the uncertainty is higher." }, { "heading": "3.1 NEURAL IMPLEMENTATION FOR DEEP REINFORCEMENT LEARNING", "text": "Our proposed framework supports prediction and planning under arbitrary directed actions, and path integration. To further demonstrate its utility in non-spatial tasks, we propose GridCell-DQN (gc-DQN), a neural network implementation based on a modified version of the classic Deep-Q Network (DQN, Mnih et al. [31]). The architecture of gc-DQN is designed so that quantities corresponding to combinations of Fourier modes weighted by action-dependent values are explicitly available to the action-value computation, in addition to value estimates based on the state inputs alone. This should enable the network to predict future state occupancy and thus facilitate the learning of Q values. We evaluated the performance of gc-DQN on the CartPole task (Barto et al. [3]) and compared it with plain DQN.

We restrict our introduction of the gc-DQN architecture to its evaluation on the CartPole task. The state space of the CartPole task is 4-dimensional, corresponding to the cart position and velocity and the pole angle and angular velocity, and there are 2 possible actions (0 and 1, corresponding to cart movement left or right). The Q-values are learnt using the standard temporal-difference rule (Sutton and Barto [39]). The gc-DQN has 2 additional sub-networks (below the standard DQN in Fig. 4A): the first takes as inputs the n low-frequency Fourier modes over the state space after discretisation into 8^4 bins, and has n_actions = 2 outputs to allow a representation for each action (left or right) given the input Fourier modes; the second takes as inputs the state variables and action, and has 2n outputs which serve as action-dependent multipliers on the connection weights from the input layer of the first network. The second sub-network receives state as well as action inputs to mitigate the absence of translation invariance. The outputs of the first network and of the standard DQN are fully connected to an output layer to learn the updated Q values. We also evaluated a model-based version of gc-DQN based on deep Dyna agents (Peng et al. [33]).
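A minimal sketch of this wiring follows. Only the connectivity described above is taken from the text; the layer widths, activations, and the elementwise input gating (as a stand-in for multiplying the first sub-network's input weights) are our own assumptions.

```python
import torch
import torch.nn as nn

class GridCellDQN(nn.Module):
    """Illustrative gc-DQN head: a standard DQN stream on the raw state, plus
    (i) a sub-network reading fixed Fourier-mode features and (ii) a sub-network
    mapping (state, action) to 2n multipliers gating the first sub-network's inputs."""

    def __init__(self, state_dim=4, n_actions=2, n_modes=16, hidden=64):
        super().__init__()
        self.dqn = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # Sub-network 1: Fourier modes (real & imaginary parts) -> per-action outputs.
        self.fourier_in = nn.Linear(2 * n_modes, hidden, bias=False)
        self.fourier_out = nn.Linear(hidden, n_actions)
        # Sub-network 2: (state, action) -> 2n action-dependent multipliers.
        self.multiplier = nn.Sequential(nn.Linear(state_dim + n_actions, hidden),
                                        nn.ReLU(), nn.Linear(hidden, 2 * n_modes))
        self.q_head = nn.Linear(hidden + n_actions, n_actions)

    def forward(self, state, fourier_modes, action_onehot):
        m = self.multiplier(torch.cat([state, action_onehot], dim=-1))
        gated = torch.relu(self.fourier_in(fourier_modes * m))  # gate the input weights
        f = self.fourier_out(gated)
        return self.q_head(torch.cat([self.dqn(state), f], dim=-1))

# Toy forward pass (batch of 1), with random stand-ins for the Fourier features.
net = GridCellDQN()
q = net(torch.randn(1, 4), torch.randn(1, 32), torch.eye(2)[:1])
print(q.shape)  # torch.Size([1, 2])
```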
Preliminary results in Fig. 4(B) show that gc-DQN and deep gc-Dyna-Q greatly accelerate learning compared to the baseline agents, with a relatively minor increase in model complexity and computational cost. The results support our hypothesis that Fourier eigenvectors weighted by action-specific values can aid prediction (in this case, of future value). See Appendix E for details.

The focus of this paper is proposing a theoretical framework for state representation, prediction, planning and path integration via grid-like eigenvectors and action-dependent eigenvalues. The proposed gc-DQN is only a preliminary attempt towards a neural network implementation of the proposed approach (see also Mahadevan and Maggioni [29]); more rigorous study in this direction is left for future work." }, { "heading": "4 A UNIFYING FRAMEWORK FOR MODELS OF GRID CELL FIRING", "text": "Our focus so far has been on proposing a flexible and efficient extension of the prediction models of grid cells (Baram et al. [2]; Dordek et al. [15]; Stachenfeld et al. [38]) to arbitrary directed transitions. However, many other computational models of grid cells emphasise path integration rather than inputs from place cells, such as continuous attractor network (CAN) models, in which grid-like patterns emerge in recurrently connected networks performing path integration (Fuhs and Touretzky [18]; Burak and Fiete [6]; Corneil and Gerstner [13]), and oscillatory interference (OI) models, in which grid-like patterns reflect coincidence detection of velocity-dependent oscillatory inputs during path integration (Hasselmo [22]; Burgess [8]; Welday et al. [42]). Here we build upon the previous work of Sorscher et al. [37] unifying the prediction and path integration models of grid cell firing, to show the equivalence of our generalised prediction framework with the CAN models of grid cells; we additionally show the equivalence between the proposed model and oscillatory interference models in terms of their interpretations of path integration and theta phase precession." }, { "heading": "4.1 RELATION TO CONTINUOUS ATTRACTOR NETWORK MODELS OF GRID CELLS", "text": "One of the most prominent unifying analyses of different grid cell models (Sorscher et al. [37]) proves the equivalence between maximising a spatial representation objective function under the prediction models and the pattern formation dynamics of CAN models of path integration. Their analysis, however, does not include non-zero velocity inputs, corresponding to the asymmetric velocity-dependent connectivities in CANs which perform path integration (Fuhs and Touretzky [18]; Burak and Fiete [6]). We can explicitly address this equivalence using our framework, assuming that the grid cell firing rate, g, reflects a linear combination of selected Fourier modes (e.g., 6 Fourier modes at π/3 radian increments with the same frequencies):

g = Σ_{j=1}^G w_j f_j   (9)

where the f_j are the selected Fourier modes with weights w_j.

Note that our proposed model is at the theory level, rather than the implementation level, of CAN models; hence we do not assume any specific neural network structure here. Utilising the grid cell firing described by Eq. 9, followed by a similar analysis to that of Sorscher et al. [37], we show that, under non-zero velocities, the differential equations governing the dynamics of the CAN model updates are equivalent to the derivative of the Lagrangian underlying the optimisation problem of the prediction models (up to scaling factors). Hence we show that, under non-stationary transition dynamics, the CAN models and the prediction models should yield identical updates to grid cell firing (up to scaling factors). The complete proof can be found in Appendix B."
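To illustrate Eq. 9 (a self-contained sketch with an arbitrary spatial scale and zero phase offsets, both our own choices), summing six plane waves whose wavevectors are separated by π/3 and share the same magnitude produces a triangular grid-like rate map:

```python
import numpy as np

size, scale = 64, 0.15                 # map size (bins) and spatial frequency (assumed)
xx, yy = np.meshgrid(np.arange(size), np.arange(size))

# Six wavevectors at pi/3 increments with equal frequency (Eq. 9, with w_j = 1);
# three directions suffice for the pattern, the other three are their opposites.
g = np.zeros((size, size))
for a in np.arange(6) * np.pi / 3:
    k = 2 * np.pi * scale * np.array([np.cos(a), np.sin(a)])
    g += np.cos(k[0] * xx + k[1] * yy)  # real part of the plane-wave Fourier mode

g = np.maximum(g, 0)                    # rectify to obtain grid-cell-like firing fields
print(g.shape, round(g.max(), 2))       # peaks of g lie on a triangular lattice
```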
Hence we show that under non-stationary transition dynamics, the CAN models and the prediction models should yield identical updates to grid cell firing (up to scaling factors). The complete proof can be found in Appendix B." }, { "heading": "4.2 RELATION TO OSCILLATORY INTERFERENCE MODELS OF GRID CELLS", "text": "Another major class of computational models of grid cells is the oscillatory interference (OI) model; here we show the equivalence between the generalised prediction model and the OI models of grid cells by showing that they perform path integration via similar phase coding.\nIn OI models of grid cells, path integration is achieved via the phases of the \"velocity controlled oscillators\" (VCOs), which encode movement speed and direction by variations in burst firing frequency. The VCOs generate grid-like firing patterns via coincidence detection by the grid cells (Burgess [8]; Hasselmo [22]; Welday et al. [42]). The variation of frequency with velocity produces a phase code for displacement, enabling the modelled grid cells to perform path integration. Namely, VCOs change their frequencies relative to the baseline according to the projection of the current velocity, v(t), onto the VCO's \"preferred direction\", d:\nf_a(t) = f_b(t) + \beta\, v(t) \cdot d \qquad (10)\nwhere \beta is a positive constant, and f_b(t) is the baseline frequency (the 4-11 Hz EEG theta rhythm). It follows that VCOs perform linear path integration, since the relative phase between the VCO and the baseline at time t, \phi_{ab}(t) = \phi_a(t) - \phi_b(t), is proportional to the displacement in the preferred direction:\n\phi_{ab}(t) - \phi_{ab}(0) = \int_0^t 2\pi [f_a(\tau) - f_b(\tau)]\, d\tau = 2\pi\beta [x(t) - x(0)] \cdot d \qquad (11)\nwhere x(t) is the agent's location at time t. The interference of VCOs whose preferred directions differ by multiples of π/3 generates grid-like patterns, provides an explanation of \"theta phase precession\" in grid cells (in which firing phase relative to the theta rhythm encodes distance travelled through the firing field; Hafting et al. [21]; Burgess [8]), and complements the attractor dynamics given by symmetrical connections between grid cells (Bush and Burgess [9]). We note that the main experimental result held against OI models (that bats and humans do not have reliable theta frequency oscillations) has recently been resolved: the required phase coding (theta phase precession) can occur relative to a variable baseline frequency (Bush and Burgess [10]) and has now been found in both bats and humans (Eliav et al. [17]; Qasim et al. [35]).\nWe simulated the firing of a grid cell with 6 Fourier mode inputs (Fig. 5 A-B), each firing a spike at its complex phase in the current state, as a leaky integrate-and-fire neuron performing coincidence detection, using a real trajectory of a rat exploring a 50 cm × 50 cm box. At each time step (corresponding to one theta cycle), the phase of each Fourier mode is updated according to Eq. 7 given the current velocity, and the mode fires a spike at this phase if it is within the interval [−π/4, π/4] (modelling modulation by the baseline theta oscillation). The grid cell fires a spike at the current location if the integrated inputs reach a threshold. 
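To make this phase-coding account concrete, the following is a minimal NumPy sketch (ours, not the authors' released simulation code) of the mechanism just described: 6 Fourier modes whose phases are advanced by the projection of the velocity onto their wavevectors (Eq. 7 / Eq. 11), each spiking inside a theta-phase window, with a leaky grid cell firing on coincidences. The spatial frequency `f0` and the random-walk trajectory are illustrative assumptions; the decay rate (0.2) and firing threshold (2.95) follow Appendix E.

```python
import numpy as np

L = 50.0                                   # 50 cm x 50 cm box
f0 = 4.0 / L                               # spatial frequency of the modes (assumed)
angles = np.arange(6) * np.pi / 3          # preferred directions at pi/3 increments
K = f0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # wavevectors, shape (6, 2)

pos = np.array([L / 2, L / 2])
phases = 2 * np.pi * (K @ pos)             # phase of exp(2*pi*i k.x) at the start state
membrane, spikes = 0.0, []
rng = np.random.default_rng(0)

for t in range(5000):                      # one iteration ~ one theta cycle
    v = rng.normal(0, 1.0, size=2)         # toy stand-in for a real rat trajectory
    pos = (pos + v) % L                    # illustrative periodic box
    phases += 2 * np.pi * (K @ v)          # path integration: phase advance ~ k . displacement
    wrapped = np.angle(np.exp(1j * phases))
    vco_spikes = np.abs(wrapped) < np.pi / 4      # spike inside the [-pi/4, pi/4] window
    membrane = 0.8 * membrane + vco_spikes.sum()  # leaky integration (decay rate 0.2)
    if membrane > 2.95:                    # grid cell threshold from Appendix E
        spikes.append(pos.copy())          # grid cell spikes at the current location
        membrane = 0.0
```

Plotting the accumulated `spikes` positions would show the hexagonally arranged firing fields expected from coincidence detection of these six oscillators.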
Note that we could also simulate a set of grid cells, with different offsets (depending on the initial phases of the Fourier modes) and different scales and orientations (depending on the choice of Fourier modes), such that the grid cells, like the Fourier modes, comprise a basis for the state space, and do so on the basis of path integration (for which environmental inputs are only required to prevent error accumulation [8; 7]).\nGrid cells show \"precession\" in their firing phase relative to theta as the animal moves through the firing field (signalling distance travelled within the field; Hafting et al. [21]; Jeewajee et al. [23]; Climer et al. [12]). Our model captures this, like an OI model. The Fourier modes whose wavevectors are aligned with the current direction of translation advance in phase as the movement progresses. Phase precession results from assuming that the Fourier modes aligned with the movement direction are the dominant influence on grid cell firing (c.f. those aligned to the reverse direction). The baseline \"theta frequency\" corresponds to the mean rate of change of phase of all Fourier components and so could vary (e.g., for noise reduction, see Burgess and Burgess [7]; Burgess [8]), without precluding phase coding (Eliav et al. [17]; Bush and Burgess [10]). By simulating straight runs, we can see clear late-to-early phase precession (Fig. 5 C), as observed in grid cells.\nThus, the OI model and our model perform path integration or prediction in the same way: the phase of each VCO changes corresponding to the component of translation along the VCO's preferred direction, which is exactly analogous to the complex phases of the Fourier modes being updated to reflect transitions along their wavevectors (multiplication by the corresponding eigenvalues, Eq. 7)." }, { "heading": "5 DISCUSSION", "text": "Understanding how different actions affect the agent's state across environments is essential for generalisation. Existing models are capable of such prediction under a single fixed transition matrix, e.g. corresponding to symmetrical diffusion, by using eigenvectors of this transition matrix as a basis for representing state occupancy (Baram et al. [2]; Corneil and Gerstner [13]; Stachenfeld et al. [38]). Here we generalised these models to provide a mathematical framework for predicting the effects of specific actions, so long as their effects (and corresponding transition matrices) are translation invariant. This uses a common set of eigenvectors of all such matrices (Fourier modes of the state space) to represent state occupancy, so that the effects of actions correspond to multiplication by action-specific eigenvalues.\nThis model explains how grid cells (as superpositions of Fourier modes) could support prediction of the effects of actions across environments that share the same underlying transition structure (see also Whittington et al. [43]), and could, for example, perform sense-of-direction planning in new environments (i.e., finding combinations of actions that most likely lead to the target state by ignoring local obstacles). We assume that other (e.g., fronto-parietal) brain areas are responsible for detecting and avoiding obstacles while following the overall direction provided by the grid cells [16; 28]. However, topology-dependent modifications to grid firing patterns could be used to accommodate local deviations from translation-invariance (e.g. obstacles), utilising the Woodbury inversion formula; see Fig. 3E, Appendix A3 and Piray and Daw [34]. 
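The Woodbury-based obstacle correction mentioned above (stated formally in Appendix A, Prop. A.3, following Piray and Daw [34]) admits a short numerical sketch. The code below is ours, not the authors' implementation; `T0` and `T` are assumed to be row-stochastic transition matrices that differ only in the rows `J` adjacent to a newly inserted barrier, so the successor representation (SR) can be updated without recomputing a full matrix inverse.

```python
import numpy as np

def woodbury_sr_update(S0, T0, T, J, gamma=0.9):
    """Update SR S0 = (I - gamma*T0)^-1 to S = (I - gamma*T)^-1 via a low-rank correction."""
    R = gamma * (T[J, :] - T0[J, :])       # low-rank change in the J-th rows, |J| x N
    C = S0[:, J]                           # affected columns of the old SR, N x |J|
    correction = C @ np.linalg.solve(np.eye(len(J)) - R @ C, R @ S0)
    return S0 + correction

# Sanity check on a small random chain: the update matches a direct inverse.
rng = np.random.default_rng(1)
T0 = rng.random((20, 20)); T0 /= T0.sum(1, keepdims=True)
T = T0.copy(); J = [3, 4]
T[J, :] = rng.random((2, 20)); T[J, :] /= T[J, :].sum(1, keepdims=True)
S0 = np.linalg.inv(np.eye(20) - 0.9 * T0)
S = woodbury_sr_update(S0, T0, T, J)
assert np.allclose(S, np.linalg.inv(np.eye(20) - 0.9 * T))
```

The cost is dominated by solving a |J| × |J| system rather than inverting an N × N matrix, which is the practical point of the identity for local (low-rank) changes such as barriers.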
We also show that our framework corresponds to other computational models of grid cells based on path integration, and provide a functional explanation for theta phase precession.\nA number of questions and predictions are raised by the proposed model. If a basis of neurons with Fourier-mode-like firing patterns acts as inputs to cells in entorhinal cortex, then grid cell firing patterns are only a small proportion of the set of firing patterns that could be synthesised. This is consistent with the existence of periodic non-grid cells in entorhinal cortex that resemble combinations of small numbers of Fourier modes (Krupic et al. [27]). We predict the use of the same set of grid cells (superpositions of Fourier mode inputs) for indicating a \"sense of direction\" to goal locations across different environments after a single visit, i.e. showing generalisation across Euclidean environments. Finally, the proposed model predicts future state occupancy from the transition matrix; future work could also consider the reverse direction: inferring the translation between two locations given the phase codes for each (see Bush et al. [11]), as linked discriminative and generative models.\nThe current model applies to translation-invariant transition structures, and the use of the Fourier shift theorem to calculate eigenvalues also assumes a Euclidean state space. We demonstrated ways of generalising planning to bounded or locally non-translation-invariant transition structures in Section 3. We note that machine learning methods based on a similar premise (creating a single representation to support planning via multiple different actions) might work even when the transitions are not strictly translation invariant (e.g., family trees, see Whittington et al. [43]). Here we showed that giving standard DQN agents the ability to represent the current state as eigenvectors of the state space, weighted by state- and action-specific values, significantly improves learning of the CartPole task (Fig. 4), which is not strictly spatial or translation invariant. Hence our approach offers potential generalisation to non-spatial tasks, such as transitive inference (Von Fersen et al. [41], Appendix D). Future work will involve more rigorous study of the neural network implementation of the proposed grid cell model and its application to reinforcement learning. A future direction in generalising the current model to non-spatial tasks will be to consider Fourier analysis on groups of operators utilising group-theoretic knowledge (Kondor and Trivedi [25]; Gao et al. [19])." }, { "heading": "ACKNOWLEDGEMENTS", "text": "C.Y. is supported by a DeepMind PhD studentship offered by the UCL Centre for Artificial Intelligence. T.E.J.B and N.B. thank the Wellcome Trust for support. The authors would like to thank Daniel Bush, Talfan Evans, Kimberly Stachenfeld, Will de Cothi and Maneesh Sahani for helpful comments and discussions. The authors declare no competing financial interests." }, { "heading": "A SOME PROOFS IN SECTION 3", "text": "Proposition A.1 Given our assumption of periodic boundary conditions, the transition matrix, T ∈ C^{N×N} (Eq. 3), is indeed a circulant matrix.\nProof It is easy to see that the proposition holds trivially for transition matrices with only one-step translations but without Gaussian spread. Hence here we only show the proof for the case where the transition structure includes both Gaussian spread and one-step translations. 
Consider an arbitrary transition matrix T for a 2D rectangular environment with length L and width W, where the underlying transition velocity is v = (v_x, v_y), remembering that T is the LW × LW 2D matrix formed by concatenating rows from what would be the 4D matrix of transitions between all pairs of states in an L × W 2D state space. An arbitrary entry on the kth lower subdiagonal is T_{i,i−k} = P(x(t+1) = i−k | x(t) = i) for any suitable state i given k (i.e., i ≥ k). If the Gaussian spread is radially symmetric with constant variance across states, the value of T_{i,i−k} only depends on the distance between state i−k and the state i_v, where i_v is the translated state of state i given the effect of the velocity v. The states i−k and i_v are equivalent to the states ((i−k)//L, (i−k) mod L) and (i//L + v_x, i mod L + v_y) in the two-dimensional spatial domain respectively (where a//b denotes the integer part of a/b). Note that we need the velocity v ∈ [±L/2, ±W/2] so that the translation leaves the actual distance d unchanged. The distance between the state i−k and the expected state i_v in the 2D state space is then\nd = \sqrt{((i−k)//L − i//L − v_x)^2 + ((i−k) \bmod L − i \bmod L − v_y)^2} \qquad (12)\nFor any arbitrary i′ ≠ i such that i′ = i + δ, we can similarly compute the distance between state i′−k and its corresponding expected state (Gaussian centre) i′ + v. After some algebra, we have that the distance between states i−k and i+v equals the distance between states i′−k and i′+v:\nd′ = \sqrt{((i′−k)//L − i′//L − v_x)^2 + ((i′−k) \bmod L − i′ \bmod L − v_y)^2} = \sqrt{((i+δ−k)//L − (i+δ)//L − v_x)^2 + ((i+δ−k) \bmod L − (i+δ) \bmod L − v_y)^2} \qquad (13)\nNow if we look at the two square terms within the square root separately, we have\n((i+δ−k)//L − (i+δ)//L − v_x)^2 = ((i−k)//L + δ//L − i//L − δ//L − v_x)^2 = ((i−k)//L − i//L − v_x)^2 \qquad (14)\n((i+δ−k) \bmod L − (i+δ) \bmod L − v_y)^2 = (((i−k) \bmod L + δ \bmod L) \bmod L − (i \bmod L + δ \bmod L) \bmod L − v_y)^2 = ((i−k) \bmod L + δ \bmod L − i \bmod L − δ \bmod L − v_y)^2 = ((i−k) \bmod L − i \bmod L − v_y)^2 \qquad (15)\nThe second equality holds because (i−k) mod L + δ mod L ≤ L, since this is simply the x-position of state i+δ−k, which is never larger than L; hence ((i−k) mod L + δ mod L) mod L = (i−k) mod L + δ mod L. Similarly, (i mod L + δ mod L) mod L = i mod L + δ mod L. Hence we have\nd′ = \sqrt{((i′−k)//L − i′//L − v_x)^2 + ((i′−k) \bmod L − i′ \bmod L − v_y)^2} = d \qquad (16)\nHence all entries on the kth lower subdiagonal are identical, i.e. T_{i,i−k} = T_{i′,i′−k} for all 1 ≤ k ≤ LW − 1. By similar arguments, we can show that all entries on the kth upper subdiagonal are identical (for 1 ≤ k ≤ LW − 1) and equal the corresponding entries on the (LW − k)th lower subdiagonal. The fact that all the main diagonal entries are identical is immediate from the problem setting. Hence our target transition matrix is indeed a circulant matrix. (Note that in simulations the transition matrix will only be approximately circulant due to normalisation and numerical issues.)\nNow consider the (LW − k)th upper subdiagonal corresponding to the kth lower subdiagonal. By similar arguments, for any T_{i′′,i′′+k} = P(x(t+1) = i′′+k | x(t) = i′′) with suitable i′′ (i.e. i′′ + k ≤ LW), the distance between the state i′′+k and the expected next state i′′+v is the same as d, which is equivalent to T_{i′′,i′′+k} = T_{i,i−k}. Hence all entries on the (LW − k)th upper subdiagonal are identical and equal to the entries on the kth lower subdiagonal. This holds for arbitrary 1 ≤ k ≤ LW − 1.\nProposition A.2 For any circulant matrix T ∈ C^{N×N} as shown in Eq. 
3, its kth eigenvector takes the form:\nv_k = \frac{1}{\sqrt{N}} [1, \omega_k, \omega_k^2, \cdots, \omega_k^{N−1}]^T \qquad (17)\nwhere \omega_k = \exp(2\pi k i / N) is the kth Nth root of unity, and the set of eigenvalues equals the set of DFTs of an arbitrary row/column of T.\nProof Firstly, note that the product between the circulant matrix T and an arbitrary vector v is equivalent to a convolution:\nw = T \cdot v = \begin{pmatrix} T_0 & T_{N−1} & \cdots & T_2 & T_1 \\ T_1 & T_0 & T_{N−1} & \cdots & T_2 \\ \vdots & T_1 & T_0 & \ddots & \vdots \\ T_{N−2} & \cdots & \ddots & \ddots & T_{N−1} \\ T_{N−1} & T_{N−2} & \cdots & T_1 & T_0 \end{pmatrix} \cdot \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{N−1} \end{pmatrix} \qquad (18)\nAnd we immediately have that\nw_k = \sum_{j=0}^{N−1} T_{j−k} v_j \qquad (19)\nThis is true due to the periodicity of the entries given by the circulant structure. Then if we take the dot product of T and an arbitrary vector v_m of the form shown in Eq. 17, the lth entry of the output vector has the following form:\n\sum_{j=0}^{N−1} T_{j−l} \omega_j^m = \omega_l^m \sum_{j=0}^{N−1} T_{j−l} \omega_{j−l}^m \qquad (20)\nwhere the equality holds since \omega_j^m = \exp(\frac{2\pi i}{N} jm) = \exp(\frac{2\pi i}{N} (j−l)m) \exp(\frac{2\pi i}{N} lm) = \omega_l^m \omega_{j−l}^m. Note that the last sum in Eq. 20 is independent of the choice of l, since both T_j and \omega_j are periodic, hence any change in l simply rearranges the terms in the summation. Also, we have that \omega_l^m = \omega_m^l is the lth entry of the eigenvector v_m. Hence we have\nT v_m = \lambda_m v_m \qquad (21)\nwhere\n\lambda_m = \sum_{j=0}^{N−1} T_j \omega_j^m \qquad (22)\nfor m = 0, . . . , N − 1. Hence for an arbitrary N × N circulant matrix T, the eigenvalues take the form shown in Eq. 22 and the corresponding eigenvectors take the form shown in Eq. 17; that the eigenvalues are equivalent to the DFT of the first row of the circulant matrix follows immediately from Eq. 22 and the definition of the DFT (Bracewell [4]).\nThe phase changes in the eigenvalues, relative to the eigenvalues of the baseline symmetric transition matrix, predicted via the Fourier shift theorem (Eq. 7) under our formulation, perfectly capture the actual phase changes in the eigenvalues between the symmetric and asymmetric transition matrices caused by the one-step translations, as shown in Fig. 6A. However, when the transition dynamics are a combination of diffusion and one-step translations, the predicted phase changes in the eigenvalues will no longer perfectly match the actual phase changes observed, as shown in Fig. 6B; the oscillation is caused by the diffusion process. Namely, although the expected translation is indicated by the velocity, the actual translation spans a range of states depending on the width of the diffusion field.\nProposition A.3 The updated SR given the insertion of a barrier is\nS = S_0 − C(I + RC)^{−1} R S_0 \qquad (23)\nwhere S_0 and S are the initial and updated SR, R = γ(T_0[J, :] − T[J, :]) is the (discounted) change in the J-th rows of the transition matrix, and C = S_0[:, J] contains the J-th columns of S_0, where J is the index set of states adjacent to the inserted barrier.\nProof This derivation is inspired by Piray and Daw [34]. Given the definition of the SR, we have\nS = (I − γT)^{−1}, S_0 = (I − γT_0)^{−1} ∈ R^{N×N} \qquad (24)\nwhere N is the number of states. Given the insertion of a barrier, T and T_0 only differ in their j-th rows for j ∈ J, where J is the index set of states adjacent to the barrier. 
Hence we can write:\nR = γ(T_0[J, :] − T[J, :]) ∈ R^{|J|×N} \qquad (25)\nThen if we let E ∈ R^{N×|J|} be the matrix with zeros everywhere except for ones selecting the rows indexed by J, then by setting W = I − γT and W_0 = I − γT_0, we can write:\nW = W_0 + ER \qquad (26)\nThe Woodbury inversion formula is usually used in cases when we are trying to compute the inverse of a matrix given a low-dimensional perturbation (Riedel [36]):\n(A + UCV)^{−1} = A^{−1} − A^{−1}U(C^{−1} + V A^{−1}U)^{−1}V A^{−1} \qquad (27)\nHence by applying the Woodbury inversion formula, we have:\nW^{−1} = W_0^{−1} − W_0^{−1}E(I + R W_0^{−1}E)^{−1} R W_0^{−1} ⇒ S = S_0 − C(I + RC)^{−1} R S_0 \qquad (28)\nwhere C = S_0 E = S_0[:, J] contains the j-th columns of S_0 for j ∈ J.\nProposition A.4 The \"sense of direction\", θ^*, is given by the form shown in Eq. 8.\nProof Essentially, we wish to find the value of θ such that, under the drift velocity v_θ = (v cos(θ), v sin(θ)) and given the start and target states, s_0 and s_G, the future discounted occupancy of s_G starting from s_0 (i.e., W[s_0, s_G], where W is the SR matrix) is maximised. Under our formulation, W can be calculated as follows:\nW = F \, \mathrm{diag}(1/(1 − γΛ_{v_θ})) \, F^{−1} \qquad (29)\nwhere F is the DFT matrix (Eq. 4), and Λ_{v_θ} is the set of eigenvalues of the transition matrix given velocity v_θ. From our analysis based on the Fourier shift theorem (Eq. 7), for each λ_i^{v_θ} ∈ Λ_{v_θ}, we have that:\nλ_i^{v_θ} = D_i \, ω^{v_θ · k_i} \qquad (30)\nwhere D_i is the ith eigenvalue of the symmetric (baseline) diffusion transition matrix, and k_i is the wavevector for the ith Fourier mode. Then using linear algebra, we immediately arrive at the expression in Eq. 8." }, { "heading": "B SOME PROOFS IN SECTION 4", "text": "Proposition B.1 The equations governing the dynamics of the prediction model and the CAN model of path integration are equivalent.\nProof We show the proof under the single-cell formulation, which can be immediately generalised to the situation with multiple cells.\nWe firstly note that the prediction model can be mathematically characterised as minimising the following reconstruction objective function:\nE(g) = \|T − gw\|_F^2 \qquad (31)\nwhere g ∈ R^N represents the grid cell firing rates over the N states, and w ∈ R^{1×N} represents the linear readout weights. Following Sorscher et al. [37], we replace w by its optimal value given a fixed g, i.e., \hat{w} = (g^T g)^{−1} g^T T. Note that any scaling of g can be absorbed into a corresponding reversed scaling of \hat{w}, hence g is assumed to be of unit modulus (or the matrix G can be taken to be orthonormal in the multi-cell case). Additionally, following the non-negativity constraint proposed in Dordek et al. [15], the overall optimisation problem becomes\nmin E(g) = \|T − g\hat{w}\|_F^2, subject to g^T g = 1 and g_i ≥ 0 ∀i \qquad (32)\nHence we can immediately write down the Lagrangian as follows:\nL = g^T T_0 g − γ g^T g + µ \mathbf{1}^T g \qquad (33)\nwhere γ and µ are the multiplicative constants for the additive penalty terms corresponding to the constraints in Eq. 32. The derivative of the Lagrangian with respect to g then takes the following form:\n\frac{dg}{dt} = \begin{cases} −γg + Tg + µ, & g > 0 \\ −γg + σ(Tg + µ), & g = 0 \end{cases} \qquad (34)\nwhere σ(·) is the rectified linear function. Inserting the grid cell firing representation as a linear summation of the Fourier modes into Eq. 
34, we obtain the following:\n\frac{dg}{dt} = \begin{cases} −αg + γ(\sum_{j=1}^{G} λ_j w_j f_j) + µ, & g > 0 \\ −αg + σ(γ(\sum_{j=1}^{G} λ_j w_j f_j) + µ), & g = 0 \end{cases} \qquad (35)\nwhere the λ_j are the corresponding eigenvalues of the f_j with respect to the (symmetric) transition matrix, T_0.\nThe dynamics of the grid cells under the CAN models can be written as follows:\nτ \frac{dg}{dt} = −g + σ(Wg + b(v)) \qquad (36)\nwhere W is the recurrent connectivity matrix, and b(v) is the velocity-dependent feedforward input to the grid cell under the CAN model, which involves a constant baseline term and a velocity-dependent term (Burak and Fiete [6]).\nNow suppose the agent is moving under non-zero velocity, v. Given the grid cell firing represented by the linear summation of Fourier modes (Eq. 9), Eq. 35 can be written as follows:\n\frac{dg}{dt} = \begin{cases} −αg + \frac{2πi}{N} γ T_0 g + (\frac{2πi}{N} γ \sum_{j=1}^{G} λ_j w_j f_j (⟨v, ê_j⟩ − 1) + µ), & g > 0 \\ −αg + \frac{2πi}{N} γ T_0 g + σ(\frac{2πi}{N} γ \sum_{j=1}^{G} λ_j w_j f_j (⟨v, ê_j⟩ − 1) + µ), & g = 0 \end{cases} \qquad (37)\nwhere ê_j is the unit-norm wavevector of the Fourier mode f_j for all j.\nNow compare with Eq. 36 by setting\nτ = \frac{1}{α}, \quad W = \frac{2πi}{N} \frac{γ}{α} T_0, \quad b(v) = \left(\frac{2πi}{N} γ \sum_{j=1}^{G} λ_j w_j f_j (⟨v, ê_j⟩ − 1) + µ\right)/α \qquad (38)\nBy checking that, when g > 0, σ(\frac{2πi}{N} γ T_0 g + σ(\frac{2πi}{N} γ \sum_{j=1}^{G} λ_j w_j f_j (⟨v, ê_j⟩ − 1) + µ)) = \frac{2πi}{N} γ T_0 g + σ(\frac{2πi}{N} γ \sum_{j=1}^{G} λ_j w_j f_j (⟨v, ê_j⟩ − 1) + µ), we see that, under non-zero velocity inputs, by appropriately adjusting the additive velocity input term, b(v), the equations governing the dynamics of the normative and mechanistic models are equivalent.\nC 2D FOURIER MODES\nWe know that the Fourier basis vectors from Eq. 4 form plane waves as shown in Fig. 5. From standard Fourier analysis in 2D space, the 2D Fourier modes form an orthonormal basis, and take the following form:\nv_u[x] = \exp(2πi \, u · x) \qquad (39)\nwhere the 2D Fourier basis vectors are encoded by the position vectors u = (u_1/L, u_2/W) ∈ [0, 1] × [0, 1] (position vectors of each location in the L × W environment projected onto [0, 1] × [0, 1]). The direction of the encoding position vector u represents the direction of the plane wave, and the frequency of the plane wave is the norm of the unnormalised direction vector, \|u′\| (where u′ = u × (L, W) element-wise); note that u′ is also the wavevector of the plane wave. This is a slightly different formulation compared to the one given in Eq. 4, which considers the state space as a 1-dimensional flattened vector of the 2-dimensional environment, hence the Fourier basis vectors there are the corresponding 1-dimensional Fourier modes. Though both formulations give us the same set of Fourier basis vectors, under the definition in Eq. 39 we can easily track the frequency and direction of the plane wave formed from the 2D Fourier modes. The phase shift via the Fourier shift theorem (Eq. 7) applies equivalently to this 2D Fourier formulation.\nThe Fourier modes comprise a basis for representing any distribution over the task state space, so we can use a linearly weighted combination of Fourier modes to reconstruct any firing pattern, such as those observed in place cells (Welday et al. [42], Fig. 8). Note, however, that coincidence detection of a small number of oscillators with different frequencies will generate periodic patterns, e.g., grid cells, and more oscillators will be needed for those with more local firing fields such as place cells. 
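The reconstruction claim above is easy to check numerically. Below is a minimal sketch (ours; names and the Gaussian target are illustrative) that builds the 2D Fourier basis of Eq. 39 on an L × W grid, keeps only the n lowest-frequency modes, and fits a place-cell-like field by least squares, mirroring the top-100 principal-mode choice reported in the paper.

```python
import numpy as np

L = W = 50
xs, ys = np.meshgrid(np.arange(L), np.arange(W), indexing="ij")

# Modes v_u[x] = exp(2*pi*i u . x) with u = (u1/L, u2/W); sort by aliased frequency |u'|.
freqs = [(u1, u2) for u1 in range(L) for u2 in range(W)]
freqs.sort(key=lambda f: min(f[0], L - f[0]) ** 2 + min(f[1], W - f[1]) ** 2)
n = 100
basis = np.stack(
    [np.exp(2j * np.pi * (u1 * xs / L + u2 * ys / W)).ravel() for u1, u2 in freqs[:n]],
    axis=1,
)  # (L*W, n) matrix of the n lowest-frequency modes

# Target: a Gaussian "place field" centred in the box.
target = np.exp(-((xs - 25) ** 2 + (ys - 25) ** 2) / (2 * 4.0 ** 2)).ravel()

coef, *_ = np.linalg.lstsq(basis, target.astype(complex), rcond=None)
recon = (basis @ coef).real.reshape(L, W)   # approximate place field from n modes
print("max reconstruction error:", np.abs(recon - target.reshape(L, W)).max())
```

As the text notes, a handful of modes suffices for periodic (grid-like) patterns, while more localised fields such as this Gaussian need more modes for a tight fit; increasing `n` drives the printed error down.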
Note that the total number of Fourier modes equals the number of states in the environment (e.g., LW for the L × W rectangular environment on a square grid), and it could be infeasible and inefficient to compute and store a large number of such Fourier modes (or neurons with VCO-like firing patterns) in the brain. Hence here we only use the principal modes (taking the top n Fourier basis vectors in terms of the corresponding eigenvalues (frequencies)), which contain the majority of the information, with the number of principal modes depending on the desired reconstruction resolution. We utilised the top 100 principal Fourier modes for most of the simulations in the main text (see Fig. 7 for a typical fixed set of Fourier modes). Fig. 8 demonstrates that this small number of Fourier modes is able to reconstruct grid cell firing fields with various spacings and orientations, as well as place cell firing fields." }, { "heading": "D TRANSITIVE INFERENCE", "text": "In the main paper we argued that the same set of eigenvectors can be used to predict the future occupancy distribution given the transition matrix, both for symmetrical relations like diffusion between adjacent states and for directed transitions (e.g. moving N, S, E, W). Here we briefly discuss the generalisation of our model to non-spatial tasks.\nWe can apply our method to one-dimensional transitive inference tasks of this type: e.g., given A > B, B > C, and C > D, can we infer that A > D (Von Fersen et al. [41])? This would be like having a 1D track (and Fourier eigenvectors for 1-step transitions in both directions) corresponding to the actions \"greater\" or \"smaller\", and using \"intuitive planning\" to see whether the eigenvalues for \"greater\" will take you from A to D in the discounted future more likely than the eigenvalues for \"smaller\". In order to deal with the non-periodicity of the task, we simulate transitive inference in a small subset of the state space of the torus. As shown in Fig. 9, we see that our framework correctly predicts the transitive relationship between the chosen state x129 and states close to x129.\nDespite the simplicity of 1D transitive inference (Fig. 9), our model is still an advance on the original intuitive planning method in being able to predict the effects of both \"greater\" and \"smaller\" transitions with the same set of eigenvectors, rather than being restricted to prediction with one or the other alone." }, { "heading": "E SIMULATIONS", "text": "E.1 FURTHER DETAILS OF THE GC-DQN AGENT\nThe overall architecture can be found in the graphical illustration in fig. 4. At each timestep, the state values and a specific action value are fed into a neural network, for all possible values of actions (blue box in the bottom left of fig. 4), which outputs n · n_actions outputs, where n is the number of Fourier mode inputs to the grid cell network. The output can be considered as the specific updates to each Fourier mode corresponding to the action in the current location, like the phase shift in the Fourier shift theorem.\nThe inputs to the grid cell network are the first n Fourier modes, whose dimensions (D) are determined by the size of the state space. When the state variables are continuous, we compute an approximate size of the state space by discretising each state variable. The number of principal Fourier modes (n) is chosen arbitrarily, as long as the majority of the information can be reconstructed from the chosen set of Fourier modes. 
Higher values of n lead to finer details of the prediction, but also induce higher computational costs.\nAt each timestep, the n Fourier modes are fed as the input to the grid cell network (shown in the middle row of fig. 4(A)). Each action multiplier (an output from the state-action network) is multiplied with the corresponding column of the weight matrix between the input layer and the hidden layer of the grid cell network. The outputs of the hidden layer are then transposed and forward-propagated to the output layer of the second network. The computations of the grid cell network can be considered equivalent to using the Fourier modes to construct a weight value, for choosing each action at a given state, that aids navigation/planning.\nThe outputs from the grid cell network and the standard DQN agent are then combined to output a vector that acts as the values for each action and guides action choice in the current timestep.\nE.2 SIMULATION DETAILS\nAll simulations were implemented in Python. The simulation details for each task are as follows:\n\n• Fig. 1: The state space is assumed to be a 1D ring with 20 states, with the transition probabilities P(s_{t+1} = i + 1 | s_t = i) = P(s_{t+1} = i − 1 | s_t = i) = 0.5, and discounting factor γ = 0.9 for generating the resolvent (Eq. 2).\n\n• Fig. 2: Variance of each (Gaussian) firing field (representing the strength of diffusion) is 3; B: (0, 5) drift velocity with increasing diffusion (variance increases by 3 per step); C: (3, 3) drift velocity with 0 diffusion; E: The successor representation is computed using the Fourier modes and corresponding eigenvalues, with the discounting factor γ = 0.9.\n\n• Fig. 3: A: The wind effect causes a (0, 2) (2 units southward) displacement at each timestep; B, D: The successor representation is computed given a transition matrix that assumes the variance of each (Gaussian) firing field is 1.5, followed by directed actions under the wind effects (with 0 diffusion); the discounting factor is γ = 0.9; F, G: The optimal path is found by following the ascending values of the successor representation, without any wind effect. The SR is computed given a transition matrix that assumes the variance of each (Gaussian) firing field is 1.5, followed by directed actions; the discounting factor is γ = 0.9. All computations are done by working directly with the Fourier modes instead of the transition matrices.\n\n• Fig. 4: The environment is the CartPole task (Barto et al. [3]), and is simulated using the OpenAI gym environment (Brockman et al. [5]). The state value consists of 4 variables (cart position, cart velocity, pole angle, pole angular velocity); the action value is an integer taking values from {0, 1}, where 0 represents moving left and 1 represents moving right. For constructing the Fourier modes, we discretised each state variable into 8 bins, resulting in 8^4 = 4096 states, and we chose the top 50 low-frequency Fourier modes as the inputs to the grid cell network. The standard DQN agent consists of two fully connected hidden layers with standard ReLU activations, with 48 and 24 units, respectively. The target network is updated every 500 timesteps. The deep Dyna-Q agent is a simplified version of the model proposed in Peng et al. [33], with an additional 2-layer neural network learning the environmental transition dynamics, with 64 and 32 units in each hidden layer followed by ReLU activations. At each timestep, the learnt environment model is called to generate K imaginary trajectories that are used for model-based updates to the DQN agent. 
The number of model-based updates, K, is taken to be 2. The state-action network in the gc-DQN has one hidden layer, with 32 units followed by ReLU activation. The grid cell network has one hidden layer, with hidden size (n, A) followed by ReLU activation, where n represents the number of input Fourier modes, and A represents the number of possible actions. The deep gc-Dyna-Q has a similar architecture to the gc-DQN agent, but with an additional environment network that learns the transition dynamics of the environment and is used for model-based updates (with the same architecture as in the standard deep Dyna-Q agent). All models are learnt using the mean squared error loss function and the Adam optimiser (Kingma and Ba [24]) with learning rate 0.001 and no learning rate decay. The exploration strength, ε, is set to 0.8 at the start of each independent run, decreases by 0.05 at each episode, and is bounded below by 0.01. A total of 5 independent runs of 100 episodes are performed for each agent. Note that 100 episodes were simulated for each independent run due to limited time and computational resources, but the results show that this is sufficient for demonstrating the increase in performance of the gc-DQN agent compared to the baseline agents. We will, upon acceptance, show simulations with more episodes (up to the point where convergence of the baseline agents is observed) in the camera-ready version. All implementations are performed in the TensorFlow framework (Abadi et al. [1]).\n\n• Fig. 5: A: wavevectors of the chosen input Fourier modes: k_1 = (4/50, 1/50), k_2 = (1/50, 4/50), k_3 = (3/50, −3/50), k_4 = (−4/50, −1/50), k_5 = (−1/50, −4/50), k_6 = (−3/50, 3/50); B: real rat trajectory projected onto a 50 × 50 2D spatial domain, firing phase interval of the input Fourier modes: [−2.5π/12, 2.5π/12], integration time interval: 8, exponential decay rate: 0.2, grid cell firing threshold: 2.95, directional bias: within ±π/2 of the head direction (the range of the relative difference between the direction of the wavevector and the head direction, within which the Fourier modes are allowed to fire); C: running direction: arctan(1/3).\n\n• Fig. 9: Discounting factor: γ = 0.3; number of states: 26; number of effective transitive inference states: 10.\nThe Python-based implementations can be found at https://github.com/ucabcy/Prediction_and_Generalisation_over_Directed_Actions_by_Grid_Cells." } ]
2021
PREDICTION AND GENERALISATION OVER DIRECTED ACTIONS BY GRID CELLS
SP:6c57ec1533acf8cfcc2f8d9cdc8fe4d7acf9f77f
[ "This paper tackles a fair classification problem with an invisible demographic, a situation where the records who have some specific target labels and sensitive attributes are missing. In this setting, the authors introduce a disentangled representation learning framework to make the resultant classifier fair by taking advantage of the additional dataset, context dataset. They demonstrate by the empirical evaluations that the proposed disentangled representation learning algorithm success to mitigate unfair bias by utilizing the perfect dataset, a dataset in which the target label and sensitive attribute are independent. Usually, the perfect dataset is unavailable; hence, they introduce a method to convert the context dataset into the perfect dataset. The authors also show that even if the context dataset is not perfect, the presented method successes to mitigate an unfair bias." ]
In a statistical notion of algorithmic fairness, we partition individuals into groups based on some key demographic factors such as race and gender, and require that some statistics of a classifier be approximately equalized across those groups. Current approaches require complete annotations for demographic factors, or focus on an abstract worst-off group rather than demographic groups. In this paper, we consider the setting where the demographic factors are only partially available. For example, we have training examples for white-skinned and dark-skinned males, and white-skinned females, but we have zero examples for dark-skinned females. We could also have zero examples for females regardless of their skin colors. Without additional knowledge, it is impossible to directly control the discrepancy of the classifier’s statistics for those invisible groups. We develop a disentanglement algorithm that splits a representation of data into a component that captures the demographic factors and another component that is invariant to them based on a context dataset. The context dataset is much like the deployment dataset, it is unlabeled but it contains individuals from all demographics including the invisible. We cluster the context set, equalize the cluster size to form a “perfect batch”, and use it as a supervision signal for the disentanglement. We propose a new discriminator loss based on a learnable attention mechanism to distinguish a perfect batch from a non-perfect one. We evaluate our approach on standard classification benchmarks and show that it is indeed possible to protect invisible demographics.
[]
[ { "authors": [ "Ola Abualghaib", "Nora Groce", "Natalie Simeu", "Mark Carew", "Daniel Mont" ], "title": "Making visible the invisible: Why disability-disaggregated data is vital to “leave non-one behind", "venue": "Sustainability,", "year": 2019 }, { "authors": [ "Ashrya Agrawal", "Florian Pfisterer", "Bernd Bischl", "Jiahao Chen", "Srijan Sood", "Sameena Shah", "Francois Buet-Golfouse", "Bilal A Mateen", "Sebastian Vollmer" ], "title": "Debiasing classifiers: is reality at variance with expectation", "venue": null, "year": 2011 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Arturs Backurs", "Piotr Indyk", "Krzysztof Onak", "Baruch Schieber", "Ali Vakilian", "Tal Wagner" ], "title": "Scalable fair clustering", "venue": "arXiv preprint arXiv:1902.03519,", "year": 2019 }, { "authors": [ "Solon Barocas", "Moritz Hardt", "Arvind Narayanan" ], "title": "Fairness and Machine Learning", "venue": "fairmlbook.org,", "year": 2019 }, { "authors": [ "Maxime Bucher", "Tuan-Hung Vu", "Matthieu Cord", "Patrick Pérez" ], "title": "Zero-shot semantic segmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Flavio Chierichetti", "Ravi Kumar", "Silvio Lattanzi", "Sergei Vassilvitskii" ], "title": "Fair clustering through fairlets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alexandra Chouldechova" ], "title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", "venue": "Big data,", "year": 2017 }, { "authors": [ "Elliot Creager", "David Madras", "Jörn-Henrik Jacobsen", "Marissa A Weis", "Kevin Swersky", "Toniann Pitassi", "Richard Zemel" ], "title": "Flexibly fair representation learning by disentanglement", "venue": null, "year": 1906 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "arXiv preprint arXiv:1511.05897,", "year": 2015 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Kai Han", "Sylvestre-Alvise Rebuffi", "Sébastien Ehrhardt", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Automatically discovering and learning new visual categories with ranking statistics", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Tatsunori B. 
Hashimoto", "Megha Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness without demographics in repeated loss minimization", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jon Hendricks" ], "title": "Ageism: Looking across the margin in the new millennium", "venue": "Generations,", "year": 2005 }, { "authors": [ "Irina Higgins", "David Amos", "David Pfau", "Sébastien Racanière", "Loı̈c Matthey", "Danilo J. Rezende", "Alexander Lerchner" ], "title": "Towards a definition of disentangled", "venue": "representations. CoRR,", "year": 2018 }, { "authors": [ "Lingxiao Huang", "Shaofeng Jiang", "Nisheeth Vishnoi" ], "title": "Coresets for clustering with fairness constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mikella Hurley", "Julius Adebayo" ], "title": "Credit scoring in the era of big data", "venue": "Yale Journal of Law and Technology,", "year": 2017 }, { "authors": [ "Ayush Jaiswal", "Rex Yue Wu", "Wael Abd-Almageed", "Prem Natarajan" ], "title": "Unsupervised adversarial invariance", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ayush Jaiswal", "Rex Yue Wu", "Wael Abd-Almageed", "Prem Natarajan" ], "title": "Unsupervised adversarial invariance", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "F. Kamiran", "T. Calders" ], "title": "Data preprocessing techniques for classification without discrimination", "venue": "Knowledge and Information Systems,", "year": 2012 }, { "authors": [ "Faisal Kamiran", "Toon Calders" ], "title": "Data preprocessing techniques for classification without discrimination", "venue": "Knowledge and Information Systems,", "year": 2012 }, { "authors": [ "Sampath Kannan", "Aaron Roth", "Juba Ziani" ], "title": "Downstream effects of affirmative action", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Michael Kearns", "Seth Neel", "Aaron Roth", "Zhiwei Steven Wu" ], "title": "Preventing fairness gerrymandering: Auditing and learning for subgroup fairness", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Thomas Kehrenberg", "Myles Scott Bartlett", "Oliver Thomas", "Novi Quadrianto" ], "title": "Null-sampling for interpretable and fair representations", "venue": "In Computer Vision – ECCV 2020. Springer International Publishing,", "year": 2020 }, { "authors": [ "Fereshte Khani", "Aditi Raghunathan", "Percy Liang" ], "title": "Maximum weighted loss discrepancy", "venue": "CoRR, abs/1906.03518,", "year": 2019 }, { "authors": [ "Byungju Kim", "Hyunwoo Kim", "Kyungsu Kim", "Sungjin Kim", "Junmo Kim" ], "title": "Learning not to learn: Training deep neural networks with biased data", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jon Kleinberg", "Sendhil Mullainathan", "Manish Raghavan" ], "title": "Inherent trade-offs in the fair determination of risk scores", "venue": "arXiv preprint arXiv:1609.05807,", "year": 2016 }, { "authors": [ "Christoph H Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Yoshua Bengio" ], "title": "Zero-data learning of new tasks", "venue": "In Association for the Advancement of Artificial Intelligence (AAAI),", "year": 2008 }, { "authors": [ "Francesco Locatello", "Gabriele Abbati", "Thomas Rainforth", "Stefan Bauer", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "On the fairness of disentangled representations", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Learning adversarially fair and transferable representations", "venue": "arXiv preprint arXiv:1802.06309,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of machine Learning research,", "year": 2011 }, { "authors": [ "Caroline Criado Perez" ], "title": "Invisible women: Exposing data bias in a world designed for men", "venue": null, "year": 2019 }, { "authors": [ "Manish Raghavan", "Solon Barocas", "Jon M. Kleinberg", "Karen Levy" ], "title": "Mitigating bias in algorithmic hiring: evaluating claims and practices", "venue": null, "year": 2020 }, { "authors": [ "Alfréd Rényi" ], "title": "On measures of dependence", "venue": "Acta Mathematica Academiae Scientiarum Hungarica,", "year": 1959 }, { "authors": [ "R. Suter", "D. Miladinovic", "B. Schölkopf", "S. 
Bauer" ], "title": "Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Michael Wick", "Swetasudha Panda", "Jean-Baptiste Tristan" ], "title": "Unlocking fairness: a trade-off revisited", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Yongqin Xian", "Christoph H Lampert", "Bernt Schiele", "Zeynep Akata" ], "title": "Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez-Rodriguez", "Krishna P. Gummadi" ], "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment", "venue": "In Proceedings of the 26th International Conference on World Wide Web, WWW 2017,", "year": 2017 }, { "authors": [ "Richard Zemel", "Yu Wu", "Kevin Swersky", "Toniann Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research,", "year": 2013 }, { "authors": [ "GLU Dauphin" ], "title": "Leaky ReLU as the hidden activation; 2) max-pooling is used for spatial downsampling instead of strided convolutions. The final convolutional layer is followed by a global average pooling layer followed by a fully-connected classification layer. For the MIM and ZSF models, the architecture matches that of the discriminator, excluding the components from the aggregation operation onward in the latter case. All classifiers", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning is already involved in decision-making processes that affect peoples’ lives such as in screening job candidates (Raghavan et al., 2020) and in pricing credit (Hurley & Adebayo, 2017). Efficiency can be improved, costs can be reduced, and personalization of services and products can be greatly enhanced – these are some of the drivers for the widespread development and deployment of machine learning algorithms. Algorithms such as classifiers, however, are trained from large amount of labeled data, and can therefore encode and even reinforce past discriminatory practices that are present in the data. The classifier might treat some groups of individuals unfavorably, for example, denying credit on the grounds of language, gender, age and their combined effect. Algorithmic fairness aims at building machine learning algorithms that can take biased datasets and outputs fair/unbiased decisions for people with differing protected attributes, such as race, gender, and age.\nA typical setting of algorithmic fairness is as follows. We are given a training set of observations x ∈ X , their corresponding protected attributes s ∈ S, and the target label y ∈ Y for learning a classifier. In a statistical notion of algorithmic fairness e.g. (Kamiran & Calders, 2012a; Hardt et al., 2016; Zafar et al., 2017), we control the discrepancy of a classifier’s loss for a small number of demographic groups defined on protected attributes. Recently, several works have considered the setting where protected attributes are unknown (Kearns et al., 2018; Hashimoto et al., 2018; Khani et al., 2019). They aim to control the losses of groups whose size is greater than some predefined value. These works focus on an abstract worst-off group rather than demographic groups. It has been noted that the implied worst-off groups may differ from well-specified demographic groups who are known to suffer from past discriminatory practices (Hashimoto et al., 2018).\nWe are interested in the setting that is in between having complete annotations for demographic groups and having none. In this paper, we introduce algorithmic fairness with invisible demographics.\nWho are the invisible demographics? In the context of machine learning systems, those are individuals with thin or non-existent labeled training data. The invisible population is primarily composed of individuals with certain protected attributes (Hendricks, 2005; Abualghaib et al., 2019; Perez, 2019). We now elaborate on several algorithmic decision scenarios involving invisible demographics. One scenario is when we observe partial outcomes for some of the demographic groups, e.g. we have labeled training data for males (with positive and negative outcomes), but for the group of females, we only observe the one-sided labels (negative outcome). Another scenario is when we do not observe any outcome for some of the demographic (sub)groups, e.g. we have training samples for white-skinned and dark-skinned males, and white-skinned females, but we have zero labeled data for dark-skinned females. An extreme version of the last scenario is when we do not observe any outcome for females regardless of their skin colors, e.g. we only have training samples for males and no training examples for females. To summarize, in the invisible demographics problem, we define the demographics groups that are expected to be seen, so they are not abstract. 
However, not all of the demographics are observed (labeled) during training, forming missing or invisible demographics.\nThis paper presents learning disentangled representations in the presence of invisible demographics. Our source of supervision is motivated by the observation that we want to deploy our classifier to the eventual real-world population. This deployment dataset will contain individuals from all demographics. We thus consider the setting where unlabeled data is available for learning disentangled representation. We call this data a context set, and this context set is much like the deployment dataset: it is unlabeled but it contains all demographics, including the invisible ones.\nWe aim to convert our unlabeled context set into a perfect dataset (Kleinberg et al., 2016; Chouldechova, 2017), a dataset in which the target label and protected attribute are independent (i.e. y ⊥ s). We will then use this perfect dataset as the inductive bias for learning disentangled representations. How do we construct this perfect dataset without labels? We assume that the number of demographic groups (hence clusters) is known a priori, corresponding to the diverse demographic groups in the real-world population in which our machine learning system will be deployed. We use unsupervised k-means clustering, or supervised clustering based on rank statistics; the latter allows us to form clusters that are also consistent with the annotations in the training data. Once the clusters have been found, we can equalize the cluster size to form a perfect dataset and use it as an input for learning a disentangled fair representation. See fig. 1 for an overview of our learning with invisible demographics framework.\nSpecifically, our paper provides the following main contributions:\n1. A problem of algorithmic fairness with invisible demographics where we have zero data for some of the demographics and we still have to make predictions for those groups. 2. Applying clustering methods to the task of transforming an unlabeled context dataset into a perfect dataset. 3. Theoretical and experimental justification that the disentangled model with the perfect dataset as an inductive bias provides a well-disentangled fair representation: one component captures the demographic factors and another component is invariant to them.\nRelated work We describe related work in three areas: zero-shot learning, semi-supervised learning, and disentangled representation learning. On zero-shot learning. The setting with incomplete training data, where we aim to account for seen and unseen outcomes, is also known as generalized zero-shot learning. Traditionally, zero-shot learning transfers knowledge from classes for which we have training data to classes for which we do not have via auxiliary knowledge, e.g. via prototype examples (Larochelle et al., 2008), intermediate class descriptions such as semantic attributes (Lampert et al., 2009; Xian et al., 2018), or word2vec embeddings (Bucher et al., 2019). Our method similarly uses a context set as a source of auxiliary knowledge, but in contrast to generalized zero-shot learning, our context set is an unlabeled pool of data, where class descriptions are unknown. On semi-supervised learning. Wick et al. (2019) proposed a semi-supervised method that can successfully harness unlabeled data to correct for the selection bias and label bias in the training data. The unlabeled data, despite not containing the target label y, is labeled in terms of the protected variable s. 
Our setting is significantly harder because there is no label information about y and s in the context set. On disentangled representation learning. Locatello et al. (2019a) suggested that disentanglement in representation learning may be a useful property to encourage fairness when protected variables are not observed. In order for disentangled representations to improve fairness measures without knowledge of the protected attribute s, they have to assume that the target label y and the protected attribute s are independent, i.e. y ⊥ s. However, in fairness settings, the variable s is correlated with the variable y, and therefore unsupervised methods are not suitable for fairness (Jaiswal et al., 2018b; 2019). Indeed, experiments in (Locatello et al., 2019a) were wholly done with procedurally generated synthetic datasets involving 2D and 3D shapes. Without some supervision or inductive bias, disentangled representation methods would not solve the issue of algorithmic fairness with invisible demographics (Locatello et al., 2019b)." }, { "heading": "2 METHODOLOGY", "text": "" }, { "heading": "2.1 THEORETICAL BACKGROUND", "text": "In this section, we first formulate mathematically the problem of invisible demographics and its associated issue of algorithmic fairness. We then motivate theoretically the idea of a perfect dataset for achieving fairness, and its use as an inductive bias in learning disentangled representations.\nInvisible demographics and algorithmic fairness. Let S denote the discrete-valued protected attributes with an associated domain S. The domain S can consist of the values taken by a single protected attribute or, more generally, S = S_1 × S_2 × . . . × S_p with S_1, . . . , S_p being discrete-valued protected attributes. X, with the associated domain X, represents other attributes of the data. Let Y denote the space of class labels for a classification task (Y = {0, 1} for binary classification or Y = {1, 2, . . . , C_cls} for multi-class classification). For ease of exposition, we assume that we have multiple sources M of samples, one for each combination of class label y and protected attribute s. That is, we have:\nM_{ys}, ∀y ∈ Y, ∀s ∈ S, (1)\nwhere, for example, the source M_{y=0,s=0} contains all data points with class label y = 0 and protected attribute s = 0. As in a standard supervised learning task, we have access to a training set D_{tr} = {(x_i, s_i, y_i)} that is used to learn a model M : X → Y. D_{tr} is composed of several sources. This labeled training dataset, however, lacks samples from some of the sources:\n∃y ∈ Y, ∃s ∈ S : D_{tr} ∩ M_{ys} = ∅. (2)\nFor example, we might not have samples from two sources: M_{y=0,s=0} and M_{y=1,s=0}. In binary classification, this corresponds to zero labeled data for the invisible demographic group s = 0. Or we only observe a negative outcome for the invisible demographic s = 0, i.e. we have M_{y=1,s=0} = ∅. Once the model M is trained, we deploy it to the real-world population with diverse demographic groups. That is, we have a deployment set, D_t = {(x_i)}, which has overlap with all sources:\nD_t ∩ M_{ys} ≠ ∅ ∀y ∈ Y, ∀s ∈ S. (3)\nIf the model relies only on the incomplete training set, it is not unreasonable to expect the model to easily misunderstand the invisibles. We can all agree that this sounds unfair, and we would like to rectify this. We will be precise shortly about the adopted mathematical definitions of fairness.\nWe propose to alleviate the issue of unfairness to the invisibles by mixing labeled with unlabeled data, which is usually much cheaper to obtain. 
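The data setup above, together with the cluster-and-balance step sketched in fig. 1, can be illustrated in a few lines. The following Python sketch is ours (not the authors' code): it builds a toy training set that is missing the source M_{y=1,s=0}, an unlabeled context set overlapping all sources, and then forms an approximately "perfect" batch by k-means clustering the context set and equalizing the cluster sizes. The feature generator and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sample_source(y, s, n):
    # toy 2D features whose mean depends on both the label y and the attribute s
    return rng.normal(loc=[2.0 * y, 2.0 * s], scale=0.5, size=(n, 2))

# Labeled training set D_tr: every (y, s) source except the invisible (y=1, s=0).
train = [(sample_source(y, s, 200), y, s)
         for y in (0, 1) for s in (0, 1) if not (y == 1 and s == 0)]

# Unlabeled context set: overlaps all sources, with arbitrary (imbalanced) sizes.
context = np.concatenate([sample_source(y, s, n) for (y, s), n in
                          zip([(0, 0), (0, 1), (1, 0), (1, 1)], [300, 120, 80, 250])])

# Cluster into |Y| x |S| = 4 groups and equalize cluster sizes.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(context)
per_cluster = min(np.bincount(labels))          # size of the smallest cluster
perfect_batch = np.concatenate([
    context[rng.choice(np.flatnonzero(labels == c), per_cluster, replace=False)]
    for c in range(4)
])  # equal mass per cluster -> approximately y ⊥ s, if clusters track (y, s)
```

The balancing only yields a genuinely perfect batch to the extent that the discovered clusters align with the (y, s) sources, which is why the paper also considers supervised clustering based on rank statistics.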
In this paper, we call this unlabeled data a context set Dctx = {(xi)}. This context set has overlap with all sources:\nDctx ∩ Mys ≠ ∅ ∀y ∈ Y, ∀s ∈ S (4)\nThe context set is much like the deployment set: it has no information about the class labels y or the protected attributes s.\nWe adopt a statistical notion of algorithmic fairness which balances a certain condition between groups of individuals with different protected attributes. The term ȳ below is the prediction of a machine learning model M. Several statistical fairness criteria have been proposed (Kamiran & Calders, 2012a; Hardt et al., 2016; Zafar et al., 2017; Chouldechova, 2017; Raghavan et al., 2020) (shown below for the case where s and y are binary):\nPr(ȳ = 1|s = 0) = Pr(ȳ = 1|s = 1) (equality of acceptance rate) (5) Pr(ȳ = 1|s = 0, y) = Pr(ȳ = 1|s = 1, y) (equality of true positive/negative rate) (6) Pr(y = 1|s = 0, ȳ) = Pr(y = 1|s = 1, ȳ) (equality of positive/negative predicted value) (7)\nGenerally, these statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ȳ ⊥ s (equation 5), ȳ ⊥ s | y (equation 6), and y ⊥ s | ȳ (equation 7). If our training set has no positive outcome for the demographic s = 0, i.e. My=1,s=0 = ∅, the true positive rate for this group will suffer, and we will therefore likely be unable to satisfy, among others, equality of true positive rate.\nPerfect dataset. We call a dataset for which y ⊥ s holds a perfect dataset (Chouldechova, 2017; Kleinberg et al., 2016). If we have access to a perfect dataset, we could equalize true positive/negative rates (eq. 6) and also equalize positive/negative predicted values (eq. 7) for all demographic groups. This can be shown by using the sum and product rules of conditional probability, e.g. (Kannan et al., 2019). Consider a binary-valued protected attribute, s′ versus s′′. For s′, we can compute:\nPr(y = 1|ȳ = 1, s′) = Pr(ȳ = 1|y = 1, s′) Pr(y = 1|s′) / [Pr(ȳ = 1|y = 1, s′) Pr(y = 1|s′) + Pr(ȳ = 1|y = 0, s′)(1 − Pr(y = 1|s′))],\nand accordingly for s′′. The conditional probability on the left-hand side is a positive predicted value, and this quantity can be expressed in terms of the true positive/negative rates and the base (prior) rate, shown on the right-hand side. If we have a perfect dataset (y ⊥ s holds, which means equal base rates Pr(y = 1|s′) = Pr(y = 1|s′′)), an equality in the true positive/negative rates will give us an equality in the positive/negative predicted values. Similarly, with a perfect dataset, we can equalize true positive/negative rates (eq. 6) and also acceptance rates (eq. 5) for all demographic groups. From the sum rule, we have: Pr(ȳ = 1|s′) = Pr(ȳ = 1|y = 1, s′) Pr(y = 1|s′) + Pr(ȳ = 1|y = 0, s′)(1 − Pr(y = 1|s′)) for the value s′, and accordingly for s′′. Here, the acceptance rate on the left-hand side is related to the true positive/negative rates and the base (prior) rate as shown on the right-hand side. In general, however, our given dataset is likely to be imperfect. In this paper, we frame learning a fair classifier for all demographics as learning disentangled representations with an approximately perfect dataset.\nDisentangled representation. Disentanglement learning aims to find a split representation of a data point x and a mapping function f such that f(x) = (z1, z2, . . . , zp), where z1, z2, . . . , zp are p distinct (independent) factors of variation. 
We can mathematically formalize this intuitive definition using group and representation theories (Higgins et al., 2018), or using structural causal models (Suter et al., 2019). Specifically, in this paper, we would like to split the representation of the data into two factors as f(x) = (zy, zs), where zy contains factors that are relevant for y-prediction and zs contains factors related to the demographic group s. As noted by Jaiswal et al. (2018a; 2019) (see also sec. 1), since the protected variable s is correlated with the class label y, we need annotations of the undesired nuisance variable s to be successful in using disentanglement learning methods for fairness. We only have annotations of the variable s in the training set Dtr = {(xi, si, yi)}; however, crucially, this set contains missing demographic groups. We have all demographic groups in the context set Dctx = {(xi)} (and also in the deployment set Dt = {(xi)}), though the challenge is that we should not expect annotations of the protected variable s at deployment time. The next section will show that we can still leverage the context set for learning the disentangled representations.\nDisentanglement with a perfect dataset. Our framework for learning the disentangled representations comprises four core modules: 1) an encoder function f that embeds x into a bipartite space f(x) → (zy, zs); 2) a decoder function g that learns the inverse of f, mapping back from the embedded space into the input domain g(zy, zs) → x̃; 3) a predictor function l that predicts y from zy; and 4) a discriminator function h that classifies whether a given batch of samples embedded in zy derives from either the context set or the training set; this marks a significant departure from the typical GAN discriminator, which takes as input batches of data and yields a prediction for each sample independently of the other samples in the batch. Fig. 2a shows our framework, where the\ntraining signal comes from the perfect dataset. Formally, given the training set Dtr and samples from the balanced (i.e. perfect – see section 2.2 for details on how this can be practically achieved) context set Xperf, our learning objective can be written as:\nLmatch = Σx∈Xtr ∪ Xperf Lrecon(x, g(zs, zy)) + λ1 Σx∈Xtr Lsup(y, l(zy)) + λ2 [log h(f(zy ⊂ Xperf)) + log(1 − h(f(zy ⊂ Xtr)))], (8)\nwhere Lrecon and Lsup denote the reconstruction loss and supervised loss, respectively, and λ1 and λ2 are pre-factors. In practice, this objective is computed over mini-batches, B, and the discriminator h is trained via the standard JSD loss (Goodfellow et al., 2014) to map a batch of data points from the training set and the context set to a binary label: 1 if the batch is judged to have been sampled from the context set, 0 otherwise. Its goal is to effectively estimate the probability that a batch of samples, as a set, has been sampled from one distribution or the other. Since the task is a set-prediction one, we require that the function it defines respects the exchangeability of the batch dimension – that is, the discriminator’s predictions should take into account dependencies between samples in a batch but should be invariant to the order in which they appear, i.e. we have h({zy(b)}b=1,…,B) = h({zy(π(b))}b=1,…,B) for all permutations π ∈ Π. For the entire function h, composed of sub-functions h1(h2(h3(. . .))), to have this property, it suffices that only the innermost sub-function ρ in the chain has it. 
While there are a number of choices when it comes to defining ρ, we choose a weighted average, ρ = (1/B) Σb=1,…,B attention(zy)(b), with weights computed according to a learned attention mechanism. It takes the form of the scaled dot-product attention (Vaswani et al., 2017), attention(Q, K, V) := softmax(QKᵀ/√d)V, weighting the values (V) according to the similarity between the associated key (K) and query (Q) matrices, as measured by their dot-product. Q, K, and V are used after they have been embedded into linear subspaces by matrix-multiplication with learned weight matrices of dimension Rm×d. We found that defining K and V as zy, and Q as the mean of zy over B, yielded good results and leave it to future work to explore more sophisticated methods. The result of ρ is then processed by a series of fully-connected layers, following the DeepSets (Zaheer et al., 2017) paradigm, which ultimately computes a single prediction for the current batch.\nWe know that the independence condition y ⊥ s holds in the perfect set, but not in the training set due to sampling bias. To do well, the discriminator should rely on this knowledge. More concretely, since the context and training set have differing support over S × Y, namely (Str × Ytr) ⊊ (Sperf × Yperf), that support serves as an indicator of the distribution from which the data has been drawn. The scenarios we consider dictate Ytr = Yperf, making the disentangling well-posed. However, since we wish to use (Sctx × Yctx) \ (Str × Ytr) as the training signal for the encoder, and not the relative frequency of the target classes, it is important that, like the context set, we weight the samples of the training set such that p(str|ytr)p(ytr) is equal for all (str, ytr) ∈ Str × Ytr. To guide the network towards the desired solution, we supplement this implicit constraint with the explicit constraint that zy be predictive of y, which we achieve using a linear predictor l; whenever we have dim(S) > 1 (in\nour experiments this corresponds to the partial outcomes setting) we also impose the same constraint on zs, but with respect to s. With these conditions met, to fool the discriminator, the encoder must separate out information pertaining to S into the embedded space zs, which is not part of the discriminator’s input, leaving only unprotected information in zy." }, { "heading": "2.2 IMPLEMENTATION", "text": "Our framework overall entails two steps: 1) a method to construct a perfect dataset from an unlabeled context set, and 2) a method to produce disentangled representations using the perfect dataset.\nConstructing an approximately perfect dataset via clustering. We cluster the data points from the context set into K = dim(Y) · dim(S) clusters, i.e. the number of data sources My,s. We use the k-means clustering algorithm, and a recent method based on rank statistics (Han et al., 2020). The cluster assignments can then be used as the basis for constructing a perfect dataset for the subsequent disentangling phase. As a result of clustering, the data points in the context set Dctx are labeled with cluster assignments: Dctx = {(xi, ci)}, ci = C(zi). We balance Dctx so that all clusters have equal size to form a perfect dataset (see fig. 1), and use it as a supervision signal for the disentangling step.\nClustering requirements. We do not need to explicitly name the clusters (i.e. finding the demographic groups and labels that the clusters correspond to is unnecessary). 
The clustering is needed only for drawing an equal number of samples from each cluster for each batch of data (a minimal code sketch of this balanced sampling is given at the end of Section 3). Thus, constructing the perfect dataset in this way does not require solving the linear assignment problem of cluster-source association. In our experiments, we provide an analysis with unsupervised k-means clustering, where we do not use annotations from the training set even for the known groups. When clustering with the training labels (such as with the rank statistics approach), we use the information that they provide to ensure samples from the known subgroups are clustered together with others with the same label." }, { "heading": "3 EXPERIMENTS", "text": "We conduct experiments using the publicly available Colored MNIST (Kim et al., 2019; Arjovsky et al., 2019; Kehrenberg et al., 2020) and Adult Income (Dheeru & Karra Taniskidou, 2017) datasets. To validate the first step of creating the perfect dataset, we compare three approaches, each followed by the disentangling step as described: the proposed model with clustering via rank statistics (ZSF+bal. (ranking)), with clustering via k-means (ZSF+bal. (k-means)), and without balancing, where the context set Dctx is used directly (ZSF). Additionally, we evaluate a variant where the batches are balanced with ground-truth labels (ZSF+bal. (ground truth)).\nTo validate the second step of learning fair representations via disentangling, we compare with two other baselines. The first is a binary classifier trained on the labeled training data with balanced batches (a CNN for Colored MNIST and an MLP for Adult Income). This is essentially what Kamiran & Calders (2012b) proposed, so we refer to it as Kamiran & Calders. The second is the fairness without demographics (FWD) method (Hashimoto et al., 2018), which learns a fair classifier with abstract groups. This is the only fairness-based method that is intended for the setting with invisible demographics.\nOn Adult Income, in the setting of learning with partial outcomes, i.e. when we observe a one-sided outcome for one of the demographics, we compare with one additional fairness-aware baseline. It is an adaptation of our model based on a common fair representation learning paradigm (Edwards & Storkey, 2015; Madras et al., 2018; Creager et al., 2019). Using the same AutoEncoder model as for ZSF to generate a bipartite space, we train an adversarial network to minimize the approximate mutual information between str and the representation zy, with the reconstruction likewise a function of zs and zy. We refer to this model as MIM (Mutual Information Minimization). MIM is similar to the FFVAE model proposed by Creager et al. (2019), in the sense of learning a bipartite latent space. However, since our experiments consider only a single protected attribute, the disentangling term is irrelevant. Furthermore, rather than encouraging zs to be predictive of s, we encourage zy to be unpredictive of s for MIM, preventing the possibility of s-related information occupying both subspaces. Overall, this adaptation amounts to just modifying the adversarial loss, but it is only applicable in the setting where both protected groups are present in the training data (the partial outcome setting) and cannot handle cases in which demographics are missing entirely.\nWe report the following performance metrics: clustering accuracy on the context set, and classification accuracy and fairness metrics of the prediction task on the test set.\nColored MNIST dataset with 2 digits. 
The Colored MNIST dataset is a variant of the MNIST dataset in which the digits are colored, and the color simulates the protected attribute of the digit. We study binary classification, digit two versus digit four, and explore two settings: with one digit-color source invisible (learning with partial outcomes) and with two digit-color sources invisible (learning with missing demographics). Specifically, in the first setting we have training data for digit two in green and purple colors, but digit four only comes in green color, so the source My=’four’,s=purple is invisible. The second setting is learning with missing demographics, where we have training data for both digits in green, but we do not have training data for the demographic of purple color, i.e. two sources (My=’two’,s=purple and My=’four’,s=purple) are invisible. At test time and in the context set we observe all possible colored-digit combinations. We follow the colorization procedure outlined by Kim et al. (2019), with the mean color values selected so as to be maximally dispersed. The images are symmetrically zero-padded to a size of 32x32. In the 2-digit case, we use 5,339 images in the unlabeled context set, 2,328 training samples, and 2,014 samples for testing the classifier.\nWe report the results in Table 1. In both settings, we clearly outperform the baselines. In the partial outcome setting, we can see that balancing the batches (bal.), to emulate a perfect dataset, significantly improves performance, not only in terms of accuracy but in all fairness metrics. The results in Table 1 show relatively high variance, but this likely stems from the smallness of the training set and potentially the imbalance in the classes. As Agrawal et al. (2020) observed, high variance is expected with fairness-enforcing methods (see especially Section 3 there).\nFor the setting with missing demographics, the results show that balancing is still useful, but only marginally so. This makes sense, because in this setting the network does not have to ride the fine\nline of identifying the intended difference between training and context set. Instead, here it is very clear what the difference is: the training set only contains green digits, whereas the context set also has purple digits. k-means performs relatively poorly here. It might be that it produces batches more biased than random batches, which prevents the network from learning the right disentanglement.\nColored MNIST dataset with 3 digits. We conduct experiments on a 3-digits-3-colors variant of the Colored MNIST dataset using the setting of learning with partial outcomes, to investigate how an increase in the number of classes affects the disentangling of classes and groups. We report the results with four sources missing in Table 2 and Table 7 in the Appendix. Our method (ZSF) outperforms the baselines by a significant margin with respect to both accuracy and all fairness metrics. Since, in this case, S and Y are both no longer binary, we generalize the fairness metrics applied to the binary datasets in two ways. We compute the mean of the pairwise AR/TPR/TNR ratios across all pairwise combinations. Additionally, we compute the minimum (i.e. farthest away from 1) of the pairwise ratios (AR ratio min) as well as the largest difference between the raw AR values (AR diff max), reported in the Appendix. 
We also compute the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation (Rényi, 1959) between S and Y, serving as a measure of dependence defined between two variables with arbitrary support. See Appendix D for visualizations of the learned fair representation.\nAdult Income dataset. The Adult Income dataset is a common dataset for evaluating fair machine learning models. In this dataset, each instance is described by 14 characteristics, including gender, education, marital status, and number of work hours per week, among others, along with a label denoting income level (≥50K or not). We transform the representation into 62 real and binary features, along with the protected attribute s. In the whole dataset, 30% of the male individuals earn more than $50K per year (high income), whereas only 11% of the female individuals have a high income. Following standard practice in algorithmic fairness, e.g. Zemel et al. (2013), we consider gender to be the protected attribute.\nFor evaluation, we balanced the test set such that all elements of S × Y are equally sized, observing that a random subset of the data could lead to a majority classifier achieving accuracy comparable to the fairness-unaware baselines while achieving perfect fairness in terms of TPR ratios. We repeat the procedure with 30 train/context/test splits and report the average performance across repeats. We study the following two settings. 1) Invisible demographics: we have training data for males with positive and negative outcomes, but do not have labeled data for females, i.e. My=1,s=0 and My=0,s=0 are missing; 2) Partial outcomes: we have labeled training data for males (s = 1) with both positive and negative outcomes, but for the group of females (s = 0), we only observe the one-sided negative outcome, so the source My=1,s=0 is invisible. The results are reported in Table 3.\nPartial outcomes: We see that the proposed approach consistently outperforms the baselines in terms of fairness and performs on par with or better than them in terms of accuracy. The importance of a properly balanced context set for our method is reflected in the superiority of ZSF+bal. (ranking) over other variants, while the k-means variant fails to recognize the correct clustering according to S × Y, and the downstream accuracy suffers. The MIM baseline with ground-truth balancing performs better than Kamiran & Calders and FWD in terms of fairness, at the expense of an accuracy drop. Our ZSF+bal. (ground truth) variant dominates MIM+bal. (ground truth) in terms of both performance metrics.\nInvisible demographics: In the absence of str = Female altogether, FWD outperforms the baseline Kamiran & Calders (2012) MLP in all respects but TNR ratio, being the top performer in accuracy among the tabulated methods. However, the ZSF+bal. variants prove to be considerably fairer according to all fairness metrics, at the cost of accuracy, and demonstrate the importance of a balanced context set (when compared to ZSF alone). In both settings, even though the clustering accuracy is far from perfect, it is good enough to benefit the distribution matching phase, indicating that while Lmatch is sensitive to the quality of Xperf, it is not overly so." 
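To make the balanced "perfect batch" construction used throughout these experiments concrete, below is a minimal, hypothetical sketch assuming plain k-means on (possibly encoded) context features. All function and variable names are illustrative and this is not the authors' released implementation; it only shows the two ingredients described in Section 2.2: clustering into dim(Y)·dim(S) clusters and equal per-cluster sampling, which requires no cluster-to-source assignment.

```python
# Sketch: cluster the unlabeled context set and draw balanced batches from it.
import numpy as np
from sklearn.cluster import KMeans

def cluster_context_set(x_ctx, n_classes, n_groups, seed=0):
    """Cluster the context set into dim(Y) * dim(S) clusters (one per source M_{y,s})."""
    k = n_classes * n_groups
    return KMeans(n_clusters=k, random_state=seed).fit(x_ctx).labels_

def sample_perfect_batch(x_ctx, cluster_ids, batch_size, rng):
    """Draw an equal number of samples from every cluster.

    If each cluster approximates one source M_{y,s}, equal per-cluster
    sampling approximates y ⊥ s within the batch.
    """
    clusters = np.unique(cluster_ids)
    per_cluster = batch_size // len(clusters)
    idx = np.concatenate([
        rng.choice(np.where(cluster_ids == c)[0], size=per_cluster, replace=True)
        for c in clusters
    ])
    return x_ctx[idx]

# Toy usage with random features standing in for context data.
rng = np.random.default_rng(0)
x_ctx = rng.normal(size=(1000, 16))
labels = cluster_context_set(x_ctx, n_classes=2, n_groups=2)
batch = sample_perfect_batch(x_ctx, labels, batch_size=64, rng=rng)
print(batch.shape)  # (64, 16)
```

In practice one such balanced batch of context samples would play the role of Xperf in each step of optimizing Lmatch, alongside a reweighted training-set batch.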
}, { "heading": "4 DISCUSSION AND CONCLUSION", "text": "We have introduced a problem of algorithmic fairness in the presence of invisible demographics, which is at the intersection of demographic group fairness with each training data point annotated with protected attributes, and abstract group fairness with unknown protected attributes. We want to protect well-specified demographic groups but some demographics have non-existent labeled training data - those individuals are the invisible demographics. Our proposed model consists of discovering the missing demographic clusters in the unlabeled context set and subsequently learning a disentangled fair representation that can be used at deployment. We consider the train and context sets as coming from the same data domain, such that the knowledge about invisible demographics can be directly transferred from the latter. Extending the model to allow for a domain adaptation step between those sets will be explored in the future. This work is the first attempt in addressing learning with invisible demographics, and we hope it will spark broad interest in the community." }, { "heading": "5 CURRENT LIMITATIONS", "text": "First, dataset consumers should take extra care about the cost-benefit analysis of selecting particular datasets for their machine learning tasks. Although having zero labeled examples for some demographic groups is not uncommon, especially at the intersection of protected attributes, we should do go/no-go decisions w.r.t. this dataset. Corrective action such as fairness interventions or inaction should be recorded.\nSecond, the problem of fairness has no one size fits all solution as fairness definitions are context specific, i.e. different fairness definitions have different meanings in different contexts and not all fairness criteria can be simultaneously fulfilled in one decision process. Every decision process involves different stakeholders, decision makers, individuals affected by the decision, and sometimes the general public, as some decisions (e.g. criminal justice) has impact on the society as a whole. It is also the case that bias by models and perceived bias by a human observer might not be the same and has to be studied in a broad interdisciplinary context." }, { "heading": "A WHY NOT USE A FAIR CLUSTERING METHOD?", "text": "Current fair clustering methods (Chierichetti et al., 2017; Backurs et al., 2019; Huang et al., 2019) cluster based on the protected attribute and thus are not applicable to our setting in which the context set is unlabeled and the training set is incomplete with respect to s." }, { "heading": "B DATASET CONSTRUCTION", "text": "B.1 COLORED MNIST BIASING PARAMETERS\nTo simulate a real-word setting where the data, labeled or otherwise, is usually not naturally balanced, we bias the Colored MNIST training and context sets by downsampling certain color/digit combinations. The proportions of each such combination retained in the partial outcomes (in which we have one source missing from the training set) and invisible demographics (in which we have two sources missing from the training set) are enumerated in Table 4 and 5, respectively. 
For the 3-digit-3-color variant of the problem, no biasing is applied to either the context set or the training set (the missing combinations are specified in the caption accompanying Table 2); this variant was experimented with only under the partial-outcomes setting.\nB.2 ADULT INCOME\nFor the Adult Income dataset, we do not need to apply any synthetic biasing, as the dataset naturally contains some bias w.r.t. s. Thus, we instantiate the context set as just a random subset of the original dataset. Explicit balancing of the test set is, however, needed to yield an informative evaluation - namely, one that penalizes biased classifiers - but care must be taken in doing so. Balancing the test set such that\n|{x ∈ X|s = 0, y = 0}| = |{x ∈ X|s = 1, y = 0}| and |{x ∈ X|s = 0, y = 1}| = |{x ∈ X|s = 1, y = 1}|, (9)\nwhere for both target classes, y = 0 and y = 1, the proportions of the groups s = 0 and s = 1 are made to be the same, is intuitive, yet at the same time precludes a sensible comparison of the accuracy/fairness trade-off of the different classifiers. Indeed, under the above conditions, a majority classifier (predicting all 1s or 0s) achieves accuracy comparable to the fairness-unaware baselines, while also yielding perfect fairness, by definition. This observation motivated us to devise an alternative scheme, where we balance the test set according to the following constraints:\n|{x ∈ X|s = 0, y = 0}| = |{x ∈ X|s = 0, y = 1}| = |{x ∈ X|s = 1, y = 1}| = |{x ∈ X|s = 1, y = 0}|. (10)\nThat is, all subsets of S × Y are made to be equally sized. Under this new scheme, the accuracy of the majority classifier is 50% for the binary-classification task." }, { "heading": "C OPTIMIZATION", "text": "The hyperparameters and architectures for the AutoEncoder (AE), Predictor, and Discriminator subnetworks used for the Adult Income and Colored MNIST experiments are detailed in Table 6. For fair comparison, identical hyperparameters are used for the MIM baseline. All networks are trained using the Adam optimizer (Kingma & Ba, 2015).\nFor the Colored MNIST dataset, the baseline CNN and the FWD model use an architecture similar to the encoder, with two substitutions: 1) GLU (Dauphin et al., 2017) is replaced with Leaky ReLU as the hidden activation; 2) max-pooling is used for spatial downsampling instead of strided convolutions. The final convolutional layer is followed by a global average pooling layer followed by a fully-connected classification layer. For the MIM and ZSF models, the architecture matches that of the discriminator, excluding the components from the aggregation operation onward in the latter case. All classifiers were trained for 60 epochs with a learning rate of 1 × 10−3 and a batch size of 256. For evaluating on the Adult Income dataset we use scikit-learn’s (Pedregosa et al., 2011) logistic regression (LR) model, optimized with LBFGS, as the base classifier for K&C, MIM, and our method (ZSF). SVM and LR with cross-validation (LRCV) from the same library are also included as baselines. Due to the discrepancy between the training and test sets leading to biased CV estimates, we found that using LR with a fixed regularization constant (C = 1.0) consistently yielded better performance. 
For the FWD baseline, logistic regression was again used, but trained via gradient descent (Adam, learning rate = 1 × 10−3) instead of via convex optimization, due to the non-standard loss function.\nSince, by design, we do not have labels for all subgroups the model will be tested on, and bias against these invisible subgroups is what we aim to avoid, proper validation, and thus hyperparameter selection for models generally, is not straightforward. We can use estimates of the mutual information between the learned representation and s and y (which we wish to minimize w.r.t. the former and maximize w.r.t. the latter) to guide the process, though as we see from the MIM baseline, optimizing the model w.r.t. these metrics obtained from only the training set does not guarantee generalization to the missing subgroups. We can, however, additionally measure the entropy of the predictions on the encoded test set and seek to maximize it across all samples, or alternatively train a discriminator of the same kind used for training ZSF as a measure of the shift in the latent space between datasets. We use the latter approach (considering the learned distance between subspace distributions, accuracy, and reconstruction loss) to inform an extensive grid-search over the hyperparameter space of ZSF, and by extension MIM, for which we use the same encoder architecture as for ZSF, and the same discriminator architecture up until the aggregation step.\nFor the FWD model, we allowed access to the labels of the test set for the purpose of hyperparameter selection, performing a grid-search over multiple splits to avoid overfitting to any particular instantiation. Specifically, the threshold (η) parameter for FWD was determined by a grid-search over the space {0.01, 0.1, 0.3, 1.0}. In addition to the losses stated in the distribution matching objective, Lmatch, in the main text, we also regularize the encoder by the ℓ2 norm of its embedding, finding that it worked better than more complex regularization methods such as spectral normalization (Miyato et al., 2018) and that it helped stabilize training. The weight associated with this term is denoted as ‘ℓ2-norm weight’ in Table 6." }, { "heading": "D QUALITATIVE RESULTS", "text": "Given a learned invariant representation, we can generate a reconstruction to visualize the information contained in it. An example of this can be seen in figure 3. This is from our experiment with 3 digits\nin Colored MNIST. We can clearly see that the reconstructed invariant representation has lost all color information; instead, all digits are magenta-colored, which was the majority color in the training set." }, { "heading": "E ADDITIONAL METRICS", "text": "E.1 COLORED MNIST DATASET WITH 3-DIGITS-3-COLORS TASK" } ]
2020
null
SP:4a6172aeb95ae800b1a1e86f15a61c6b82cca9d9
[ "This paper studied the uncertainty estimation in GBDT method. The authors described 3 methods to estimate the uncertainty. With SGB, the estimation is achieved by training multiple models using data sub-samples. With SGLB, the authors derived that we can estimate the posterior distribution of the model parameters. These two methods both have the disadvantage that the training time is multiplicative of the number of trained models. To address this issue, the authors proposed an improvement to SGLB which they call virtual SGLB. The main idea is to use a subset of trees in a GBDT as a model sample, so that we can train a single model but still able to estimate the uncertainty." ]
For many practical, high-risk applications, it is essential to quantify uncertainty in a model’s predictions to avoid costly mistakes. While predictive uncertainty is widely studied for neural networks, the topic seems to be under-explored for models based on gradient boosting. However, gradient boosting often achieves state-of-the-art results on tabular data. This work examines a probabilistic ensemble-based framework for deriving uncertainty estimates in the predictions of gradient boosting classification and regression models. We conducted experiments on a range of synthetic and real datasets and investigated the applicability of ensemble approaches to gradient boosting models that are themselves ensembles of decision trees. Our analysis shows that ensembles of gradient boosting models successfully detect anomalous inputs while having limited ability to improve the predicted total uncertainty. Importantly, we also propose a concept of a virtual ensemble to get the benefits of an ensemble via only one gradient boosting model, which significantly reduces complexity.
[ { "affiliations": [], "name": "GRADIENT BOOSTING" }, { "affiliations": [], "name": "VIA ENSEMBLES" }, { "affiliations": [], "name": "Andrey Malinin" }, { "affiliations": [], "name": "Liudmila Prokhorenkova" }, { "affiliations": [], "name": "Aleksei Ustimenko" } ]
[ { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thierry Bertin-Mahieux", "Daniel PW Ellis", "Brian Whitman", "Paul Lamere" ], "title": "The million song dataset", "venue": "In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR", "year": 2011 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Christopher JC Burges" ], "title": "From RankNet to LambdaRank to LambdaMART: An overview", "venue": null, "year": 2010 }, { "authors": [ "Rich Caruana", "Alexandru Niculescu-Mizil" ], "title": "An empirical comparison of supervised learning algorithms", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Hugh A Chipman", "Edward I George", "Robert E McCulloch" ], "title": "Bart: Bayesian additive regression trees", "venue": "The Annals of Applied Statistics,", "year": 2010 }, { "authors": [ "John W Coulston", "Christine E Blinn", "Valerie A Thomas", "Randolph H Wynne" ], "title": "Approximating prediction uncertainty for random forest regression models", "venue": "Photogrammetric Engineering & Remote Sensing,", "year": 2016 }, { "authors": [ "Stefan Depeweg", "José Miguel Hernández-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Decomposition of uncertainty for active learning and reliable reinforcement learning in stochastic systems", "venue": null, "year": 2017 }, { "authors": [ "Tony Duan", "Anand Avati", "Daisy Yi Ding", "Sanjay Basu", "Andrew Y Ng", "Alejandro Schuler" ], "title": "Ngboost: Natural gradient boosting for probabilistic prediction", "venue": "Proc. 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Jerome H Friedman" ], "title": "Greedy function approximation: a gradient boosting machine", "venue": "Annals of statistics,", "year": 2001 }, { "authors": [ "Jerome H Friedman" ], "title": "Stochastic gradient boosting", "venue": "Computational Statistics & Data Analysis,", "year": 2002 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in Deep Learning", "venue": "PhD thesis, University of Cambridge,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "In Proc. 
33rd International Conference on Machine Learning", "year": 2016 }, { "authors": [ "Franz Graf", "Hans-Peter Kriegel", "Matthias Schubert", "Sebastian Pölsterl", "Alexander Cavallaro" ], "title": "2d image registration in ct images using radial image descriptors", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2011 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A Baseline for Detecting Misclassified and Out-ofDistribution", "venue": "Examples in Neural Networks", "year": 2016 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "BatchBALD: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ron Kohavi" ], "title": "Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid", "venue": "In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining,", "year": 1996 }, { "authors": [ "B. Lakshminarayanan", "A. Pritzel", "C. Blundell" ], "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "venue": "In Proc. Conference on Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Antonio R Linero" ], "title": "A review of tree-based bayesian methods", "venue": "Communications for Statistical Applications and Methods,", "year": 2017 }, { "authors": [ "Wesley Maddox", "Timur Garipov", "Pavel Izmailov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": null, "year": 1902 }, { "authors": [ "Andrey Malinin" ], "title": "Uncertainty Estimation in Deep Learning with application to Spoken Language Assessment", "venue": "PhD thesis, University of Cambridge,", "year": 2019 }, { "authors": [ "Andrey Malinin", "Mark JF Gales" ], "title": "Reverse kl-divergence training of prior networks: Improved uncertainty and adversarial robustness", "venue": null, "year": 2019 }, { "authors": [ "Andrey Malinin", "Bruno Mlodozeniec", "Mark JF Gales" ], "title": "Ensemble distribution distillation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. 
Duchesnay" ], "title": "Scikit-learn: Machine Learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Liudmila Prokhorenkova", "Gleb Gusev", "Aleksandr Vorobev", "Anna Veronika Dorogush", "Andrey Gulin" ], "title": "Catboost: unbiased boosting with categorical features", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Maxim Raginsky", "Alexander Rakhlin", "Matus Telgarsky" ], "title": "Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis", "venue": null, "year": 2017 }, { "authors": [ "Matthew Richardson", "Ewa Dominowska", "Robert Ragno" ], "title": "Predicting clicks: estimating the clickthrough rate for new ads", "venue": "In Proceedings of the 16th international conference on World Wide Web,", "year": 2007 }, { "authors": [ "Byron P Roe", "Hai-Jun Yang", "Ji Zhu", "Yong Liu", "Ion Stancu", "Gordon McGregor" ], "title": "Boosted decision trees as an alternative to artificial neural networks for particle identification", "venue": "Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment,", "year": 2005 }, { "authors": [ "Mohammad Hossein Shaker", "Eyke Hüllermeier" ], "title": "Aleatoric and epistemic uncertainty with random forests", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2020 }, { "authors": [ "Lewis Smith", "Yarin Gal" ], "title": "Understanding Measures of Uncertainty for Adversarial Example Detection", "venue": "In UAI,", "year": 2018 }, { "authors": [ "Aleksei Ustimenko", "Liudmila Prokhorenkova" ], "title": "SGLB: Stochastic Gradient Langevin Boosting", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "Qiang Wu", "Christopher JC Burges", "Krysta M Svore", "Jianfeng Gao" ], "title": "Adapting boosting for information retrieval measures", "venue": "Information Retrieval,", "year": 2010 }, { "authors": [ "Yanru Zhang", "Ali Haghani" ], "title": "A gradient boosting method to improve travel time prediction", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Gradient boosting (Friedman, 2001) is a widely used machine learning algorithm that achieves stateof-the-art results on tasks containing heterogeneous features, complex dependencies, and noisy data: web search, recommendation systems, weather forecasting, and many others (Burges, 2010; Caruana & Niculescu-Mizil, 2006; Richardson et al., 2007; Roe et al., 2005; Wu et al., 2010; Zhang & Haghani, 2015). Gradient boosting based on decision trees (GBDT) underlies such well-known libraries like XGBoost, LightGBM, and CatBoost. In this paper, we investigate the estimation of predictive uncertainty in GBDT models. Uncertainty estimation is crucial for avoiding costly mistakes in high-risk applications, such as autonomous driving, medical diagnostics, and financial forecasting. For example, in self-driving cars, it is necessary to know when the AI-pilot is confident in its ability to drive and when it is not to avoid a fatal collision. In financial forecasting and medical diagnostics, mistakes on the part of an AI forecasting or diagnostic system could either lead to large financial or reputational loss or to the loss of life. Crucially, both financial and medical data are often represented in heterogeneous tabular form — data on which GBDTs are typically applied, highlighting the relevance of our work on obtaining uncertainty estimates for GBDT models.\nApproximate Bayesian approaches for uncertainty estimation have been extensively studied for neural network models (Gal, 2016; Malinin, 2019). Bayesian methods for tree-based models (Chipman et al., 2010; Linero, 2017) have also been widely studied in the literature. However, this research did not explicitly focus on studying uncertainty estimation and its applications. Some related work was\n∗All authors contributed equally and are listed in alphabetical order.\ndone by Coulston et al. (2016); Shaker & Hüllermeier (2020), who examined quantifying predictive uncertainty for random forests. However, the area has been otherwise relatively under-explored, especially for GBDT models that are widely used in practice and known to outperform other approaches based on tree ensembles.\nWhile for classification problems GDBT models already return a distribution over class labels, for regression tasks they typically yield only point predictions. Recently, this problem was addressed in the NGBoost algorithm (Duan et al., 2020), where a GBDT model is trained to return the mean and variance of a normal distribution over the target variable y for a given feature vector. However, such models only capture data uncertainty (Gal, 2016; Malinin, 2019), also known as aleatoric uncertainty, which arises due to inherent class overlap or noise in the data. However, this does not quantify uncertainty due to the model’s inherent lack of knowledge about inputs from regions either far from the training data or sparsely covered by it, known as knowledge uncertainty, or epistemic uncertainty (Gal, 2016; Malinin, 2019). One class of approaches for capturing knowledge uncertainty are Bayesian ensemble methods, which have recently become popular for estimating predictive uncertainty in neural networks (Depeweg et al., 2017; Gal & Ghahramani, 2016; Kendall et al., 2018; Lakshminarayanan et al., 2017; Maddox et al., 2019; Smith & Gal, 2018). 
A key feature of ensemble approaches is that they allow overall uncertainty to be decomposed into data uncertainty and knowledge uncertainty within an interpretable probabilistic framework (Depeweg et al., 2017; Gal, 2016; Malinin, 2019). Ensembles are also known to yield improvements in predictive performance.\nThis work examines ensemble-based uncertainty estimation for GBDT models. The contributions are as follows. First, we consider generating ensembles using both classical Stochastic Gradient Boosting (SGB) as well as the recently proposed Stochastic Gradient Langevin Boosting (SGLB) (Ustimenko & Prokhorenkova, 2020). Importantly, SGLB allows us to guarantee that the models are asymptotically sampled from a true Bayesian posterior. Second, we show that using SGLB we can construct a virtual ensemble using only one gradient boosting model, significantly reducing the computational complexity. Third, to understand the attributes of using ensemble-based uncertainty estimation in GBDT models, we conduct extensive analysis on several synthetic datasets. Finally, we evaluate the proposed approach on a range of real regression and classification datasets. Our results show that this approach successfully enables the detection of anomalous out-of-domain inputs. Importantly, our solution is easy to combine with any implementation of GBDT. Our methods have been implemented within the open-source CatBoost library. The code of our experiments is publicly available at https://github.com/yandex-research/GBDT-uncertainty." }, { "heading": "2 PRELIMINARIES", "text": "Uncertainty Estimation via Bayesian Ensembles In this work we consider uncertainty estimation within the standard Bayesian ensemble-based framework (Gal, 2016; Malinin, 2019). Here, model parameters θ are considered random variables and a prior p(θ) is placed over them to compute a posterior p(θ|D) via Bayes’ rule:\np(θ|D) = p(D|θ)p(θ) / p(D), (1)\nwhere D = {x(i), y(i)}Ni=1 is the training dataset. Each set of parameters can be considered a hypothesis or explanation about how the world works. Samples from the posterior should yield explanations consistent with the observations of the world contained within the training data D. However, on data far from D, each set of parameters can yield different predictions. Therefore, estimates of knowledge uncertainty can be obtained by examining the diversity of predictions.\nConsider an ensemble of probabilistic models {P(y|x;θ(m))}Mm=1 sampled from the posterior p(θ|D). Each model P(y|x,θ(m)) yields a different estimate of data uncertainty, represented by the entropy of its predictive distribution (Malinin, 2019). Uncertainty in predictions due to knowledge uncertainty is expressed as the level of spread, or “disagreement”, of models in the ensemble (Malinin, 2019). Note that exact Bayesian inference is often intractable, and it is common to consider either an explicit or implicit approximation q(θ) to the true posterior p(θ|D). While a range of approximations has been explored for neural network models (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017; Maddox et al., 2019),1 to the best of our knowledge, limited work has explored Bayesian inference for gradient-boosted trees. (1A full overview is available in (Ashukha et al., 2020; Ovadia et al., 2019).) Given p(θ|D), the predictive posterior of the ensemble is obtained by taking the expectation with respect to the models in the ensemble:\nP(y|x,D) = Ep(θ|D)[P(y|x;θ)] ≈ (1/M) ΣMm=1 P(y|x;θ(m)), θ(m) ∼ p(θ|D). (2)\nThe entropy of the predictive posterior estimates the total uncertainty in predictions:\nH[P(y|x,D)] = EP(y|x,D)[− ln P(y|x,D)]. (3)\nTotal uncertainty is due to both data uncertainty and knowledge uncertainty. However, in applications like active learning (Kirsch et al., 2019) and out-of-domain detection, it is desirable to estimate knowledge uncertainty separately. The sources of uncertainty can be decomposed by considering the mutual information between the parameters θ and the prediction y (Depeweg et al., 2017):\nI[y,θ|x,D] (knowledge uncertainty) = H[P(y|x,D)] (total uncertainty) − Ep(θ|D)[H[P(y|x;θ)]] (expected data uncertainty) ≈ H[(1/M) ΣMm=1 P(y|x;θ(m))] − (1/M) ΣMm=1 H[P(y|x;θ(m))]. (4)\nThis is expressed as the difference between the entropy of the predictive posterior, a measure of total uncertainty, and the expected entropy of each model in the ensemble, a measure of expected data uncertainty. Their difference is a measure of ensemble diversity and estimates knowledge uncertainty.\nUnfortunately, when considering ensembles of probabilistic regression models {p(y|x;θ(m))}Mm=1 over a continuous-valued target y ∈ R, it is no longer possible to obtain tractable estimates of the (differential) entropy of the predictive posterior, and, by extension, the mutual information. In such cases, uncertainty estimates can instead be derived via the law of total variance:\nVp(y|x,D)[y] (total uncertainty) = Vp(θ|D)[Ep(y|x,θ)[y]] (knowledge uncertainty) + Ep(θ|D)[Vp(y|x,θ)[y]] (expected data uncertainty). (5)\nThis is conceptually similar to the decomposition (4) obtained via mutual information. For an ensemble of probabilistic regression models which parameterize the normal distribution, where each model yields a mean and a standard deviation, {µm, σm} = f(x;θ(m)), the total variance can be computed as follows:\nVp(y|x,D)[y] (total uncertainty) ≈ (1/M) ΣMm=1 (µ̄ − µm)2 (knowledge uncertainty) + (1/M) ΣMm=1 σ2m (expected data uncertainty), where µ̄ = (1/M) ΣMm=1 µm. (6)\nHowever, while these measures are tractable, they are based on only first and second moments, and may therefore miss higher-order details in the uncertainty. They are also not scale-invariant, which can cause issues if the scale of predictions on in-domain and out-of-domain data is very different.\nGradient boosting is a powerful machine learning technique especially useful on tasks containing heterogeneous features. It iteratively combines weak models, such as decision trees, to obtain more accurate predictions. Formally, given a dataset D and a loss function L : R2 → R, the gradient boosting algorithm (Friedman, 2001) iteratively constructs a model F : X → R to minimize the empirical risk L(F|D) = ED[L(F(x), y)]. At each iteration t the model is updated as:\nF(t)(x) = F(t−1)(x) + ϵh(t)(x), (7)\nwhere F(t−1) is the model constructed at the previous iteration, h(t)(x) ∈ H is a weak learner chosen from some family of functions H, and ϵ is the learning rate. The weak learner h(t) is usually chosen to approximate the negative gradient −g(t)(x, y) := −∂L(y, s)/∂s |s=F(t−1)(x):\nh(t) = argminh∈H ED[(−g(t)(x, y) − h(x))2]. (8)\nA weak learner h(t) is associated with parameters ϕ(t) ∈ Rd. We write h(t)(x,ϕ(t)) to reflect this dependence. The set of weak learners H often consists of shallow decision trees, which are models that recursively partition the feature space into disjoint regions called leaves. 
Each leaf Rj of the tree is assigned a value, which is the estimated response y in the corresponding region. We can write h(x,ϕ(t)) = Σdj=1 ϕ(t)j 1{x∈Rj}, so the decision tree is a linear function of ϕ(t). The final GBDT model F is a sum of decision trees (7), and the parameters of the full model are denoted by θ.\nFor classification tasks, a model yields estimates of data uncertainty if it is trained via negative log-likelihood and provides a distribution over class labels. However, classic GBDT regression models yield point predictions, and there has been little research devoted to estimating predictive uncertainty. Recently, this issue was addressed by Duan et al. (2020) via an algorithm called NGBoost (Natural Gradient Boosting), which allows estimating data uncertainty. NGBoost simultaneously estimates the parameters of a conditional distribution p(y|x,θ) over the target y given the features x, by optimizing a proper scoring rule. Typically, a normal distribution over y is assumed and the negative log-likelihood is taken as the scoring rule. Formally, given an input x, the model F predicts two parameters of a normal distribution: the mean µ and the logarithm of the standard deviation log σ. The loss function is the expected negative log-likelihood:2\np(y|x,θ(t)) = N(y|µ(t), σ(t)), {µ(t), log σ(t)} = F(t)(x), (9)\nL(θ|D) = ED[− log p(y|x,θ)] = −(1/N) ΣNi=1 log p(y(i)|x(i),θ). (10)\nNote that θ denotes the concatenation of the two parameter vectors used to predict µ and log σ." }, { "heading": "3 GENERATING ENSEMBLES OF GBDT MODELS", "text": "As discussed in Section 2, knowledge uncertainty can be estimated by considering an ensemble of models {p(y|x;θ(m))}Mm=1 sampled from the posterior p(θ|D). The level of diversity or “disagreement” between the models is an estimate of knowledge uncertainty. In this section, we consider three approaches to generating an ensemble of GBDT models. We emphasize that this section discusses ensembles of GBDT models, where each GBDT model is itself an ensemble of trees.\nSGB ensembles One way to generate an ensemble is to consider several independent models generated via Stochastic Gradient Boosting (SGB). Stochasticity is added to GBDT models via random subsampling of the data at every iteration (Friedman, 2002). Specifically, at each iteration of (8) we select a subset of training objects D′ (via bootstrap or uniformly without replacement), which is smaller than the original training dataset D, and use D′ to fit the next tree instead of D. The fraction of chosen objects is called the sample rate. This implicitly injects noise into the learning process, effectively inducing a distribution q(θ) over such models. Thus, an SGB ensemble is an ensemble of independent models {θ(m)}Mm=1 built according to SGB with different random seeds for subsampling the data. Unfortunately, there are no guarantees on how well the distribution q(θ) estimates the true posterior p(θ|D).\nSGLB ensembles Remarkably, there is a way to sample GBDT models from the true posterior p(θ|D) via the recently proposed Stochastic Gradient Langevin Boosting (SGLB) algorithm (Ustimenko & Prokhorenkova, 2020). SGLB combines gradient boosting with stochastic gradient Langevin dynamics (Raginsky et al., 2017) in order to achieve convergence to the global optimum even for non-convex loss functions. The algorithm has two differences compared with SGB. 
First, Gaussian noise is explicitly injected into the gradients, so (8) is replaced by:\nh(t) = argminh∈H ED[(−g(t)(x, y) − h(x,ϕ) + ν)2], ν ∼ N(0, (2/(βϵ)) I|D|), (11)\nwhere β is the inverse diffusion temperature and I|D| is an identity matrix. This random noise ν helps to explore the solution space in order to find the global optimum, and the diffusion temperature controls the level of exploration. Second, the update (7) is modified as:\nF(t)(x) = (1 − γϵ)F(t−1)(x) + ϵh(t)(x,ϕ(t)), (12)\n(2Since a GBDT model is determined by θ, we use the notations L(F|D) and L(θ|D) interchangeably.)" }, { "heading": "Last model", "text": "where γ is a regularization parameter. If the number of all possible trees is finite (a natural assumption given that the training dataset is finite), then the SGLB parameters θ(t) at each iteration form a Markov chain that weakly converges to the stationary distribution, also called the invariant measure:\np∗β(θ) ∝ exp(−βL(θ|D) − βγ∥Γθ∥22), (13)\nwhere Γ = Γᵀ > 0 is an implicitly defined regularization matrix which depends on the particular tree construction algorithm (Ustimenko & Prokhorenkova, 2020).\nWhile Ustimenko & Prokhorenkova (2020) used the weak convergence to (13) to prove global convergence, we apply it to enable sampling from the true posterior. For this purpose, we set β = |D| and γ = 1/(2|D|). For the negative log-likelihood loss function (10), the invariant measure (13) can be expressed as:\np∗β(θ) ∝ exp(log p(D|θ) − (1/2)∥Γθ∥22) ∝ p(D|θ)p(θ), (14)\nwhich is proportional to the true posterior distribution p(θ|D) under the Gaussian prior p(θ) = N(0,Γ). Thus, an SGLB ensemble is an ensemble of independent models {θ(m)}Mm=1 generated according to the SGLB algorithm using different random seeds. In this case, asymptotically, the models are sampled from the true posterior p(θ|D).\nVirtual SGLB ensembles While SGB and SGLB yield ensembles of independent models, their time and space complexity is M times larger than that of a single model, which is a significant overhead. Consequently, generating an ensemble requires either significantly increasing the complexity or sacrificing quality by reducing the number of training iterations. To address this, we introduce the concept of a virtual ensemble that enables generating an ensemble using only one model. This is possible since a GBDT model is itself an ensemble of trees. However, in contrast to random forests formed by independent trees (Shaker & Hüllermeier, 2020), the sequential nature of GBDT models implies that all trees are dependent, and individual trees cannot be considered as separate models. Hence, we use “truncated” sub-models of a single GBDT model as elements of an ensemble, as illustrated in Figure 1. Notably, a virtual ensemble can be obtained from any already constructed GBDT model. Below we formally describe this procedure applied to SGLB models, since in this case we can guarantee asymptotic sampling from the true posterior p(θ|D).\nEach “truncated” model is described by the vector of parameters θ(t). As the parameters θ(t) at each iteration of the SGLB algorithm form a Markov chain that weakly converges to the stationary distribution (14), we can consider using them as an ensemble of models. However, unlike parameters taken from different SGLB trajectories, these will have a high degree of correlation, which adversely affects the ensemble’s quality. This problem can be overcome by retaining only every K-th set of parameters. 
Virtual SGLB ensembles While SGB and SGLB yield ensembles of independent models, their time and space complexity is M times larger than that of a single model, which is a significant overhead. Consequently, generating an ensemble requires either significantly increasing complexity or sacrificing quality by reducing the number of training iterations. To address this, we introduce the concept of a virtual ensemble that enables generating an ensemble using only one model. This is possible since a GBDT model is itself an ensemble of trees. However, in contrast to random forests formed by independent trees (Shaker & Hüllermeier, 2020), the sequential nature of GBDT models implies that all trees are dependent and individual trees cannot be considered as separate models. Hence, we use “truncated” sub-models of a single GBDT model as elements of an ensemble, as illustrated in Figure 1. Notably, a virtual ensemble can be obtained from any already constructed GBDT model. Below we formally describe this procedure applied to SGLB models, since in this case we can guarantee asymptotic sampling from the true posterior p(θ|D).\nEach “truncated” model is described by the vector of parameters θ(t). As the parameters θ(t) at each iteration of the SGLB algorithm form a Markov chain that weakly converges to the stationary distribution (14), we can consider using them as an ensemble of models. However, unlike parameters taken from different SGLB trajectories, these will have a high degree of correlation, which adversely affects the ensemble’s quality. This problem can be overcome by retaining only every K-th set of parameters. Formally, fix K ≥ 1 and consider the set of models $\Theta_{T,K} = \{\theta^{(Kt)}, \ [\tfrac{T}{2K}] \le t \le [\tfrac{T}{K}]\}$, i.e., we add to $\Theta_{T,K}$ every K-th model obtained while constructing one SGLB model using T iterations of gradient boosting. Choosing larger values of K allows us to reduce the correlation between samples from the SGLB Markov chain. Furthermore, we do not include in the ensemble the models θ(t) with t < T/2, since (14) holds only asymptotically. The set of $M = [\tfrac{T}{2K}]$ models $\Theta_{T,K}$ is called a virtual ensemble. Note that virtual ensembles behave similarly to true ensembles in the limit (for large K and T).\nImportantly, we can compute the prediction of $\Theta_{T,K}$ in the same computation time as one $\theta^{(T)}$. Indeed, when computing the prediction of one model, we have to sum up the predictions made by the individual trees. To get the virtual ensemble, we only have to store the partial sums. For SGLB, we also have to account for the regularization in (12). Formally, according to (12), for SGLB we have $\theta^{(T)} = \sum_{i=1}^{T} \epsilon (1 - \gamma\epsilon)^{T-i} \phi^{(i)}$, where $(1 - \gamma\epsilon)^{T-i}$ appears due to shrinkage. While computing $\theta^{(T)}$, we store the partial sums $\theta^{(T)}_{\le t} = \sum_{i=1}^{t} \epsilon (1 - \gamma\epsilon)^{T-i} \phi^{(i)}$. Then, any model θ(t) from $\Theta_{T,K}$ can easily be obtained from the stored values:\n$\theta^{(t)} = \sum_{i=1}^{t} \epsilon (1 - \gamma\epsilon)^{t-i} \phi^{(i)} = (1 - \gamma\epsilon)^{t-T} \theta^{(T)}_{\le t}.$ (15)
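A minimal sketch of recovering the virtual-ensemble members via Eq. (15), assuming the partial sums θ(T)≤t were stored while training; variable names (including gamma_eps for the product γϵ) are illustrative.

```python
def virtual_ensemble(partial_sums, T, K, gamma_eps):
    # partial_sums[t] stores theta^{(T)}_{<= t}, accumulated during training.
    members = []
    for t in range(K * (T // (2 * K)), T + 1, K):
        # Eq. (15): undo the extra shrinkage that was applied after iteration t.
        members.append((1.0 - gamma_eps) ** (t - T) * partial_sums[t])
    return members
```

Since the exponent t - T is non-positive, the correction divides out the shrinkage factors accumulated between iterations t and T, so each member costs no extra training time.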
" }, { "heading": "4 ANALYSIS ON SYNTHETIC DATA", "text": "In this section, we analyze how the ensemble algorithms discussed in Section 3 perform on synthetic data. The aim is to understand the attributes of ensembles of GBDT models for estimating data and knowledge uncertainty in a controllable setting.\nGBDT models are usually applied to tabular data, where features are often categorical. Hence, we first generate a dataset with each example described by two categorical features x1, x2 with 9 values each, resulting in 81 possible combinations. The target depends on the features as $y = a(x_1, x_2) + \varepsilon(x_1, x_2)$, where $\varepsilon(x_1, x_2) \sim \mathcal{N}(0, b(x_1, x_2))$ and $a(x_1, x_2), b(x_1, x_2)$ are some deterministic functions. The values of $a(x_1, x_2)$ are randomly generated according to the uniform distribution over [0, 1]. The values of $b(x_1, x_2)$ are shown in Figure 2(a). We generate a heart-shaped dataset with this distribution: inside the heart (the white region in Figure 2(a)) there are no training points; for the other cells we have 1000 examples per cell.\nWe train an ensemble of 10 SGLB models (each model consists of 1000 trees) and observe the following effects. First, Figure 2(b) shows the total uncertainty estimated with the SGLB ensemble, and we see that the models correctly capture this uncertainty in all cells containing training examples. At the same time, arbitrary values can be predicted inside the heart, as no training data constrain the models’ behavior there. Second, Figure 2(c) shows that estimates of knowledge uncertainty allow us to detect regions that are out-of-domain and are not covered by the training data. Notably, the separation is perfect, as there is no trace of the original heart border.\nTo further analyze ensembles of GBDT models, we apply them to a two-dimensional classification task with continuous-valued features. We consider the 3-class spiral dataset shown in Figure 3(a); this setting is much harder for gradient boosted trees.3 Figure 3(b) shows the total uncertainty estimated with the SGLB ensemble, while Figure 3(c) demonstrates knowledge uncertainty. We observe several effects. First, total uncertainty correctly detects class boundaries and ‘sectors’ of input space outside the training dataset. Second, looking at these ‘sectors’ of high uncertainty, we can better understand how GBDT ensembles work: since decision trees are discriminative functions (Bishop, 2006), if features have values outside the training domain, then the prediction is the same as for the “closest” elements in the dataset. In other words, the models’ behavior on the boundary of the dataset is extended to the outer regions. Third, estimates of knowledge uncertainty allow discrimination between out-of-domain regions and class boundaries. However, we can still see traces of the class boundaries in Figure 3(c). A possible reason is that, for real-valued features near the class borders, the splitting values may vary across the models in the ensemble, resulting in nonzero estimates of knowledge uncertainty due to decision-boundary ‘jitter’.\n3To partially mitigate the difficulties, we use coordinates in rotated axes and the radius as additional features.\nOn both “heart” and “spiral” datasets, we observed that the absolute values of knowledge uncertainty are much smaller than those of data uncertainty and therefore contribute very little to total uncertainty. Thus, we expect that while knowledge uncertainty is especially useful for detecting anomalous inputs, the proposed approaches will contribute little to error detection on top of the estimates of data uncertainty provided by single models.\nFinally, in Figure 4, we compare the performance of ‘true’ SGLB ensembles with virtual SGLB ensembles (vSGLB) on both the “heart” and “spiral” datasets. The virtual ensemble is ten times cheaper to train and infer, but the ensemble members are strongly correlated. We observe that on the “heart” dataset, vSGLB perfectly detects regions not covered by training data. However, the absolute values of knowledge uncertainty are much smaller than for SGLB, which can be explained by the correlations. The “spiral” dataset is more challenging for both SGLB and vSGLB. While having qualitatively similar behavior, virtual ensembles struggle to detect out-of-domain regions and to separate them from class boundaries. In all cases, the absolute values of knowledge uncertainty are far lower than for ‘true’ SGLB ensembles. This shows that while vSGLB yields very cheap estimates of knowledge uncertainty by exploiting the ‘ensemble of trees’ structure of GBDT models, the quality of these estimates is inferior to ensembles of independent models." }, { "heading": "5 EXPERIMENTS ON CLASSIFICATION AND REGRESSION DATASETS", "text": "In this section, we evaluate the performance of ensembles of GBDT models on a range of classification and regression tasks, focusing on their ability to detect errors and out-of-domain inputs.\nExperimental setup Our implementation of all GBDT models is based on the CatBoost library, which is known to achieve state-of-the-art results in a variety of tasks (Prokhorenkova et al., 2018). Classification models yield a probability distribution over binary class labels, while regression models yield the mean and variance of a normal distribution, as discussed in Section 2. All models are trained by optimizing the negative log-likelihood.4 We consider single SGB and SGLB models as the baselines and examine all ensemble methods defined in Section 3. Ensembles of SGB and SGLB models consist of 10 independent models (with different seeds) with 1000 trees each. The virtual ensemble vSGLB is obtained from one model with 1000 trees, where every 50th model from the interval [501, 1000] is added to the ensemble.
Thus, vSGLB has the same computational and space complexity as just one SGB or SGLB model.\n4In Appendix A.1, we compare our implementation with the original NGBoost and Deep Ensembles in terms of NLL (negative log-likelihood) and RMSE. Our implementation has performance comparable to the existing methods.\nHyper-parameters are tuned by grid search; for details, see Appendix A.2.\nWe compare the algorithms on several classification and regression tasks (Gal & Ghahramani, 2016; Prokhorenkova et al., 2018); their description is available in Appendix A.3.\nWhile not being the focus of the current research, Random Forest (RF) models are naturally suitable for ensemble approaches. Hence, we conduct additional experiments and analyze the performance of ensemble approaches applied to RF models in Appendix C.\nDetection of errors and anomalous inputs We analyze whether measures of total and knowledge uncertainty can be used to detect errors and out-of-domain inputs. Error detection can be evaluated via the Prediction-Rejection Ratio (PRR) (Malinin, 2019; Malinin et al., 2020), which measures how well uncertainty estimates correlate with errors and rank-order them. The best value is 100, random is 0. Out-of-domain (OOD) detection is assessed via the area under the ROC curve (AUC-ROC) (Hendrycks & Gimpel, 2016). For OOD detection, we need an OOD test set. However, obtaining ‘real’ OOD examples for the datasets considered in this work is challenging, so we instead create synthetic OOD data as follows. For each dataset, we take its test set as the in-domain examples and sample an out-of-domain (OOD) dataset of the same size from the Year MSD dataset. The only exceptions are the KDD datasets (Appetency, Churn, Upselling) and Year MSD itself, for which we sample OOD data from the Relative location of CT slices on axial axis dataset (Graf et al., 2011). All numerical features in the OOD data are normalized by the per-column mean and variance obtained on the in-domain training data. For categorical features, we sample a category uniformly at random from the set of all the feature’s categories. Total and knowledge uncertainty are estimated via the entropy of the predictive posterior (3) and mutual information (4) for classification models, and via the total variance and the variance of the mean (5) for regression models.
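For reference, a minimal sketch of these uncertainty measures given stacked per-model predictions; array shapes and names are assumptions, not part of any library.

```python
import numpy as np

def regression_uncertainties(mus, sigmas):
    # mus, sigmas: shape (n_models, n_samples) with each model's predicted
    # mean and standard deviation.
    data_unc = np.mean(sigmas ** 2, axis=0)   # expected per-model variance
    knowledge_unc = np.var(mus, axis=0)       # variance of the predicted mean
    total_unc = data_unc + knowledge_unc      # law of total variance
    return total_unc, knowledge_unc

def classification_uncertainties(probs, eps=1e-12):
    # probs: shape (n_models, n_samples, n_classes).
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)         # entropy of posterior
    expected = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    return total, total - expected   # mutual information = total - expected entropy
```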
Test errors can occur due to both noise and lack of knowledge, so we expect that ranking elements by total uncertainty would give better values of PRR. Table 1 shows that measures of total uncertainty consistently yield better PRR results across all datasets. This is consistent with results obtained for ensembles of neural network models (Lakshminarayanan et al., 2017; Malinin, 2019; Malinin & Gales, 2019; Malinin et al., 2020). However, ensembles do not outperform single models. We believe this occurs for two reasons. First, due to the additive nature of boosting, GBDT models are already ensembles. Second, as we discussed in Section 4, for GBDT models, estimates of knowledge uncertainty obtained via the approaches considered here contribute little to estimates of total uncertainty.\nIn contrast, Table 1 shows that measures of knowledge uncertainty yield superior OOD detection performance compared to total uncertainty in terms of AUC-ROC, which is consistent with results for non-GBDT models (Malinin, 2019; Malinin & Gales, 2019; Malinin et al., 2020).5 The results also show that SGB and SGLB ensembles performed almost equally well.\n5Note that single models do not allow distinguishing between the types of uncertainty.\nAt the same time, virtual ensembling (vSGLB) performed consistently worse (with one exception) than SGB/SGLB ensembles, which is explained by the presence of strong correlations between the models in a virtual ensemble. However, in classification tasks, estimates of knowledge uncertainty provided by vSGLB nevertheless outperform uncertainty estimates derived from single SGB and SGLB models. This shows that useful measures of knowledge uncertainty can be derived from a single SGLB model by interpreting it as a virtual ensemble, at no additional computational or memory cost. For vSGLB, the difference between classification and regression tasks can be explained by the presence or absence of categorical features. In our preliminary experiments on synthetic data, we noticed that categorical features may have a noticeable effect on the diversity of vSGLB models, and our classification datasets contain categorical features." }, { "heading": "6 CONCLUSION", "text": "This work examined principled, ensemble-based uncertainty estimation for GBDT models. Two main approaches to generating ensembles of GBDT models, where each model is itself an ensemble of trees, were considered: Stochastic Gradient Boosting (SGB) and Stochastic Gradient Langevin Boosting (SGLB). Based on SGLB, we propose constructing a virtual ensemble (vSGLB) by exploiting the ‘ensemble-of-trees’ nature of GBDT models. Properties of the estimates of total, data, and knowledge uncertainty derived from these ensembles were first analyzed on synthetic data. It was shown that the proposed approach can successfully detect anomalous inputs and is especially successful on tabular data. On continuous data, detecting knowledge uncertainty is still possible, but it becomes harder to differentiate it from data uncertainty due to decision-boundary ‘jitter’. Further experiments on a wide range of classification and regression datasets showed that while ensembles of GBDT models do not offer much advantage in terms of error detection, as each model is already an ensemble of trees, they do yield useful measures of knowledge uncertainty, which enables out-of-domain detection in both regression and classification tasks. Notably, measures of knowledge uncertainty, which can only be obtained via ensembles, achieve far better OOD detection performance than measures of total uncertainty. It is also shown that while there is little practical difference between SGB and SGLB ensembles, vSGLB performs noticeably worse. However, for classification tasks containing categorical features, vSGLB still yields useful measures of knowledge uncertainty at the computational time and space complexity of a single SGLB model. Thus, vSGLB allows us to derive the benefits of an ensemble at no additional computational and memory cost." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Ekaterina Ermishkina and Stanislav Kirillov for implementing the proposed methods within the CatBoost library." }, { "heading": "A EXPERIMENTAL SETUP", "text": "" }, { "heading": "A.1 OUR IMPLEMENTATION OF DATA UNCERTAINTY", "text": "As discussed in Section 2.2 of the main text, for regression we simultaneously predict the parameters µ and log σ of the normal distribution. Similarly to NGBoost, we use natural gradients. For our loss and parameterization, the natural gradient is:\n$g^{(t)}(x, y) = \Big( \mu^{(t-1)} - y, \ \frac{1}{2} - \frac{1}{2} \Big( \frac{y - \mu^{(t-1)}}{\sigma^{(t-1)}} \Big)^{2} \Big).$ (16)
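A direct transcription of Eq. (16) as a small helper; names are illustrative.

```python
import numpy as np

def natural_gradient(y, mu, log_sigma):
    # Eq. (16): natural gradient of the Normal NLL w.r.t. (mu, log sigma).
    sigma = np.exp(log_sigma)
    g_mu = mu - y
    g_log_sigma = 0.5 - 0.5 * ((y - mu) / sigma) ** 2
    return g_mu, g_log_sigma
```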
At each step of the gradient boosting procedure, we construct one tree predicting both components of $g^{(t)}$, similarly to the MultiRMSE regime of CatBoost.6\n6https://catboost.ai/docs/concepts/loss-functions-multiregression.html\nRecall that for classification we optimize the logistic loss.\nIn Table 2, we compare our implementation with NGBoost (Duan et al., 2020) and Deep Ensembles (Lakshminarayanan et al., 2017) on regression datasets. For our implementation, we consider SGB with a fixed sample rate (0.5) and perform parameter tuning as described below. The best results are highlighted." }, { "heading": "A.2 PARAMETER TUNING", "text": "For all approaches, we use grid search to tune the learning rate in {0.001, 0.01, 0.1} and the tree depth in {3, 4, 5, 6}. We fix subsample to 0.5 for SGB and to 1 for SGLB. This is done to avoid joint randomization effects of SGB sampling and SGLB noise in the gradients. We also set diffusion-temperature = N and model-shrink-rate = 1/(2N) for SGLB." }, { "heading": "A.3 DATASETS", "text": "The datasets are described in Table 3. For regression, we use standard train/validation/test splits (UCI). For classification, we split the datasets in the proportion 65/15/20 into train, validation, and test sets. For more details, see our GitHub repository.7\n7https://github.com/yandex-research/GBDT-uncertainty" }, { "heading": "A.4 STATISTICAL SIGNIFICANCE", "text": "For regression, we perform cross-validation to estimate statistical significance with a paired t-test. In the corresponding tables, we highlight the approaches that are insignificantly different from the best one (p-value > 0.05).\nFor classification (and Year MSD), we measure statistical significance for NLL and error/RMSE on the test set. In the corresponding tables, the approaches that are insignificantly different from the best one are highlighted. For PRR and AUC-ROC (for classification and Year MSD), we highlight the best value." }, { "heading": "B ADDITIONAL EXPERIMENTAL RESULTS", "text": "In Table 4, we compare ensemble approaches with single models in terms of NLL and error rate for classification, and in terms of NLL and RMSE for regression tasks. The results for NLL demonstrate an advantage of ensembling approaches compared to single models. However, in some cases the difference is not significant, which can be explained by the additive nature of boosting: averaging several tree ensembles gives another (larger) tree ensemble. Thus, the improved NLL can result from the increased complexity of ensemble models. We can draw a similar conclusion from the results for RMSE and error rate." }, { "heading": "C COMPARISON WITH RANDOM FOREST", "text": "Our paper specifically focuses on uncertainty estimation in Gradient Boosted Decision Tree (GBDT) models. However, some related work was done on quantifying uncertainty in random forests (Coulston et al., 2016; Shaker & Hüllermeier, 2020), which are also ensembles of decision trees. Thus, for completeness, we also analyze how ensemble approaches perform in combination with random forests.\nIn these experiments, we use the scikit-learn implementation of random forests (Pedregosa et al., 2011). We limit the maximum depth to 10 and keep all other parameters at their defaults. For categorical features, we use leave-one-out encoding.\nUnlike GBDT, where trees are added to correct the previous model’s mistakes, random forests (RF) consist of decision trees that are independently trained on bootstrapped sub-samples of the dataset. Hence, to estimate knowledge uncertainty, we can divide an RF into several independent parts, each consisting of several trees. Drawing a parallel to virtual SGLB, we call this approach vRF (virtual RF), since it allows estimating knowledge uncertainty using only one trained random forest model. In our experiments with vRF, we divide one RF model into 10 independent parts, each consisting of 100 trees. Similarly, one can also construct an ensemble of several independently trained random forest models, which is expected to be a stronger baseline. However, we expect a small difference between vRF and an ensemble of random forests, as there are, a priori, no correlations between trees, either within a single model or across multiple RF models.
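A minimal sketch of the vRF construction using scikit-learn, splitting the trees of one fitted forest into disjoint chunks that act as ensemble members; the parameter values follow the setup described above, everything else is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def virtual_rf_predictions(X_train, y_train, X, n_parts=10, n_trees=1000):
    rf = RandomForestRegressor(n_estimators=n_trees, max_depth=10)
    rf.fit(X_train, y_train)
    # Per-tree predictions: shape (n_trees, n_samples).
    per_tree = np.stack([t.predict(X) for t in rf.estimators_])
    # Average trees within each disjoint chunk to get n_parts "virtual" members.
    chunks = np.array_split(per_tree, n_parts)
    return np.stack([c.mean(axis=0) for c in chunks])  # (n_parts, n_samples)
```

The variance across the n_parts rows then serves as the knowledge-uncertainty estimate, as in the ensemble case.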
In Tables 5 and 6, we compare the predictive performance of random forests (both individual models and explicit ensembles of multiple models) to SGLB individual and ensemble models on classification and regression tasks. The results show that GBDT models generally outperform random forest models in terms of classification error rate and NLL. Note that we cannot calculate NLL for RF regression models, as they are not naturally probabilistic (they do not yield a predicted variance). As a result, they are unable to estimate data uncertainty, and therefore we can only obtain estimates of knowledge uncertainty.\nTable 7 compares SGLB and RF ensembles in terms of error detection (PRR) and out-of-domain input detection (AUC-ROC). One can see that SGLB usually outperforms RF, especially for OOD detection. Notably, as we expected, for OOD detection vRF and RF give similar results. Thus, we conclude that for random forests, a virtual ensemble is a good and cheap alternative to the true one." } ]
2021
null
SP:1fd72534803649141dce71dd19d3998faf96f625
[ "In this paper, the task is to train an implicit and an explicit model simultaneously via GAN setting and a new regularizer called \"stein bridge\", which is constructed from the kernel Stein discrepancy between the implicit and explicit models. The idea of adding such regularization, with the notion of mutual regularization of two models, is interesting. The proposed regularization term is clearly presented, the illustration of stablizing the training procedure, and the empirical results are clearly shown and discussed. The sample quality from the generative models are compared." ]
Deep generative models are generally categorized into explicit models and implicit models. The former defines an explicit density form that allows likelihood inference, while the latter targets a flexible transformation from random noise to generated samples. To take full advantage of both models, we propose Stein Bridging, a novel joint training framework that connects an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. We show that the Stein bridge 1) induces novel mutual regularization via kernel Sobolev norm penalization and Moreau-Yosida regularization, and 2) stabilizes the training dynamics. Empirically, we demonstrate that Stein Bridging can facilitate the density estimator in accurately identifying data modes and guide the sample generator to output more high-quality samples, especially when the training samples are contaminated or limited.
[]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "What regularized auto-encoders learn from the data-generating distribution", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "Martı́n Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Kacper Chwialkowski", "Heiko Strathmann", "Arthur Gretton" ], "title": "A kernel test of goodness of fit", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Zihang Dai", "Amjad Almahairi", "Philip Bachman", "Eduard H. Hovy", "Aaron C. Courville" ], "title": "Calibrating energy-based generative adversarial networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Chao Du", "Kun Xu", "Chongxuan Li", "Jun Zhu", "Bo Zhang" ], "title": "Learning implicit generative models by teaching explicit", "venue": "ones. CoRR,", "year": 2018 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "CoRR, abs/1903.08689,", "year": 2019 }, { "authors": [ "L.C. Evans" ], "title": "Partial Differential Equations. Graduate studies in mathematics", "venue": "American Mathematical Society,", "year": 2010 }, { "authors": [ "Stuart Geman", "Donald Geman" ], "title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 1984 }, { "authors": [ "Ian Gemp", "Sridhar Mahadevan" ], "title": "Global convergence to the equilibrium of gans using variational inequalities", "venue": null, "year": 2018 }, { "authors": [ "Gauthier Gidel", "Hugo Berard", "Gaëtan Vignoud", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow" ], "title": "NIPS 2016 tutorial", "venue": "Generative adversarial networks. CoRR,", "year": 2017 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Jackson Gorham", "Lester Mackey" ], "title": "Measuring sample quality with stein’s method", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2015 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Richard S. Zemel" ], "title": "Cutting out the middle-man: Training and evaluating energy-based models without sampling", "venue": null, "year": 2002 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martı́n Arjovsky", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Haoran Tang", "Pieter Abbeel", "Sergey Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "Product of experts", "venue": "In ICANN’99 Artificial Neural Networks,", "year": 1999 }, { "authors": [ "Geoffrey E. 
Hinton", "Simon Osindero", "Yee Whye Teh" ], "title": "A fast learning algorithm for deep belief nets", "venue": "Neural Computation,", "year": 2006 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "J. Mach. Learn. Res.,", "year": 2005 }, { "authors": [ "Taesup Kim", "Yoshua Bengio" ], "title": "Deep directed generative models with energy-based probability estimation", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "Marc’Aurelio Ranzato", "Fu Jie Huang" ], "title": "A tutorial on energy-based learning", "venue": null, "year": 2006 }, { "authors": [ "Yingzhen Li", "Richard E. Turner" ], "title": "Gradient estimators for implicit models", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tengyuan Liang", "James Stokes" ], "title": "Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Stein variational gradient descent: A general purpose bayesian inference algorithm", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Learning deep energy models: Contrastive divergence vs. amortized MLE", "venue": null, "year": 2017 }, { "authors": [ "Qiang Liu", "Jason D. Lee", "Michael I. Jordan" ], "title": "A kernelized stein discrepancy for goodness-of-fit tests", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Gradient descent GAN optimization is locally stable", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Radford M. Neal" ], "title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images", "venue": "Handbook of Markov Chain Monte Carlo,", "year": 2011 }, { "authors": [ "Jiquan Ngiam", "Zhenghao Chen", "Pang Wei Koh", "Andrew Y. 
Ng" ], "title": "Learning deep energy models", "venue": "In ICML, pp", "year": 2011 }, { "authors": [ "Anh Nguyen", "Jeff Clune", "Yoshua Bengio", "Alexey Dosovitskiy", "Jason Yosinski" ], "title": "Plug & play generative networks: Conditional iterative generation of images in latent space", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Song-Chun Zhu", "Yingnian Wu" ], "title": "On learning non-convergent nonpersistent short-run mcmc toward energy-based model", "venue": null, "year": 1904 }, { "authors": [ "Chris J Oates", "Mark Girolami", "Nicolas Chopin" ], "title": "Control functionals for monte carlo integration", "venue": "Journal of the Royal Statistical Society,", "year": 2017 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Kevin Roth", "Aurelien Lucchi", "Sebastian Nowozin", "Thomas Hofmann" ], "title": "Stabilizing training of generative adversarial networks through regularization", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey E. Hinton" ], "title": "Deep boltzmann machines", "venue": "In AISTATS,", "year": 2009 }, { "authors": [ "Alexander Shapiro", "Darinka Dentcheva", "Andrzej Ruszczyński" ], "title": "Lectures on stochastic programming: modeling and theory", "venue": null, "year": 2009 }, { "authors": [ "Chenyang Tao", "Shuyang Dai", "Liqun Chen", "Ke Bai", "Junya Chen", "Chang Liu", "Ruiyi Zhang", "Georgiy V. 
Bobashev", "Lawrence Carin" ], "title": "Variational annealing of gans: A langevin perspective", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "David Warde-Farley", "Yoshua Bengio" ], "title": "Improving generative adversarial networks with denoising feature matching", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ying Nian Wu", "Song Chun Zhu", "Xiuwen Liu" ], "title": "Equivalence of julesz ensembles and FRAME models", "venue": "International Journal of Computer Vision,", "year": 2000 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Cooperative training of descriptor and generator networks", "venue": "CoRR, abs/1609.09408,", "year": 2016 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "A theory of generative convnet", "venue": "In ICML, pp", "year": 2016 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Ruiqi Gao", "Ying Nian Wu" ], "title": "Cooperative learning of energy-based model and latent variable model via MCMC teaching", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Shuangfei Zhai", "Yu Cheng", "Weining Lu", "Zhongfei Zhang" ], "title": "Deep structured energy based models for anomaly detection", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Guojun Zhang", "Yaoliang Yu" ], "title": "Convergence of gradient methods on bilinear zero-sum games", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Junbo Jake Zhao", "Michaël Mathieu", "Yann LeCun" ], "title": "Energy-based generative adversarial networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Song Chun Zhu", "Ying Nian Wu", "David Mumford" ], "title": "Minimax entropy principle and its application to texture modeling", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Hu" ], "title": "2018) leverages Stein discrepancy to design a neural sampler from unnormalized densities. The fundamental disadvantage of explicit model is that the energy-based learning is difficult to accurately capture the distribution of true samples due to the low manifold of real-world instances", "venue": "IMPLICIT GENERATIVE MODELS", "year": 2017 }, { "authors": [ "Du" ], "title": "2018) proposes to regard the explicit model as a teacher net who guides the training of implicit generator as a student net to produce samples that could overcome the mode-collapse issue. The main drawback of cooperative training is that they indirectly optimize the discrepancy between the generator and data distribution via the energy model as a ‘mediator’, which leads to a fact that once the energy model gets stuck in a local optimum (e.g., mode-collapse or mode", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep generative model, as a powerful unsupervised framework for learning the distribution of highdimensional multi-modal data, has been extensively studied in recent literature. Typically, there are two types of generative models: explicit and implicit (Goodfellow et al., 2014). Explicit models define a density function of the distribution, while implicit models learn a mapping that generates samples by transforming an easy-to-sample random variable.\nBoth models have their own power and limitations. The density form in explicit models endows them with convenience to characterize data distribution and infer the sample likelihood. However, the unknown normalizing constant often causes computational intractability. On the other hand, implicit models including generative adversarial networks (GANs) can directly generate vivid samples in various application domains including images, natural languages, graphs, etc. (Goodfellow et al., 2014; Radford et al., 2016; Arjovsky et al., 2017; Brock et al., 2019). Nevertheless, one important challenge is to design a training algorithm that do not suffer from instability and mode collapse. In view of this, it is natural to build a unified framework that takes full advantages of the two models and encourages them to compensate for each other.\nIntuitively, an explicit density estimator and a flexible implicit sampler could help each other’s training given effective information sharing. On the one hand, the density estimation given by explicit models can be a good metric that measures quality of samples (Dai et al., 2017), and thus can be used for scoring generated samples given by implicit model or detecting outliers as well as noises in input true samples (Zhai et al., 2016). On the other hand, the generated samples from implicit models could augment the dataset and help to alleviate mode collapse especially when true samples are insufficient that would possibly make explicit model fail to capture an accurate distribution. We refer to Appendix A for a more comprehensive literature review.\nMotivated by the discussions above, in this paper, we propose a joint learning framework that enables mutual calibration between explicit and implicit generative models. In our framework, an explicit model is used to estimate the unnormalized density; in the meantime, an implicit generator model is exploited to minimize certain statistical distance (such as the Wasserstein metric or Jensen-Shannon divergence) between the distributions of the true and the generated samples. On top of these two models, a Stein discrepancy, acting as a bridge between generated samples and estimated densities, is introduced to push the two models to achieve a consensus. Unlike flow-based models (Nguyen et al., 2017; Kingma & Dhariwal, 2018; Papamakarios et al., 2017), our formulation does not impose\ninvertibility constraints on the generative models and thus is flexible in utilizing general neural network architectures. Our main contribution are as follows.\n• Theoretically, we prove that our method allows the two generative models to impose novel mutual regularization on each other. Specifically, our formulation penalizes large kernel Sobolev norm of the critic in the implicit (WGAN) model, which ensures the critic not to change suddenly on the high-density regions and thus preventing the critic of the implicit model being to strong during training. 
In the meantime, our formulation also smooths the function given by the Stein discrepancy through Moreau-Yosida regularization, which encourages the explicit model to seek more modes in the data distribution and thus alleviates mode collapse.\n• In addition, we show that the joint training helps to stabilize the training dynamics. Compared with other common regularization approaches for GAN models that may shift the original optimum, our method can facilitate convergence to an unbiased model distribution.\n• Extensive experiments on synthetic and image datasets justify our theoretical findings and demonstrate that joint training can help the two models achieve better performance. On the one hand, the energy model can detect complicated modes in the data more accurately and distinguish out-of-distribution samples. On the other hand, the implicit model can generate higher-quality samples, especially when the training samples are contaminated or limited." }, { "heading": "2 BACKGROUND", "text": "We briefly provide some technical background related to our model.\nEnergy Model. The energy model assigns each data point x ∈ Rd a scalar energy value Eφ(x), where Eφ(·) is called the energy function and is parameterized by φ. The model is expected to assign low energy to true samples according to the Gibbs distribution $p_{\varphi}(x) = \exp\{-E_{\varphi}(x)\}/Z_{\varphi}$, where Zφ is a normalizing constant that depends on φ. The normalizing term Zφ is often hard to compute, which makes training intractable, and various methods have been proposed to bypass this term (see Appendix A).\nStein Discrepancy. Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2017; Grathwohl et al., 2020) is a measure of closeness between two probability distributions that does not require knowledge of the normalizing constant of one of the compared distributions. Let P and Q be two probability distributions on X ⊂ Rd, and assume Q has an (unnormalized) density q. The Stein discrepancy S(P,Q) is defined as\n$S(\mathbb{P}, \mathbb{Q}) := \sup_{f \in \mathcal{F}} \mathbb{E}_{x \sim \mathbb{P}}[\mathcal{A}_{\mathbb{Q}} f(x)] := \sup_{f \in \mathcal{F}} \{ \Gamma(\mathbb{E}_{x \sim \mathbb{P}}[\nabla_x \log q(x) f(x)^{\top} + \nabla_x f(x)]) \},$ (1)\nwhere F is often chosen to be a Stein class (see, e.g., Definition 2.1 in (Liu et al., 2016)), $f: \mathbb{R}^d \to \mathbb{R}^{d'}$ is a vector-valued function called the Stein critic, and Γ is an operator that transforms a $d \times d'$ matrix into a scalar value. One common choice of Γ is the trace operator when d′ = d. One can also use other forms for Γ, such as a matrix norm when d′ ≠ d (Liu et al., 2016). If F is the unit ball in some reproducing kernel Hilbert space (RKHS) with a positive definite kernel k, it induces the Kernel Stein Discrepancy (KSD). More details are provided in Appendix B.
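As a concrete reference, here is a minimal sketch of a V-statistic estimate of the kernel Stein discrepancy with an RBF kernel. It assumes access to the score function ∇x log q(x) of the unnormalized density (which does not involve the normalizing constant), and all names and the bandwidth are illustrative.

```python
import numpy as np

def ksd_rbf(x, score_q, h=1.0):
    # x: (n, d) samples from P; score_q(x): (n, d) grad_x log q(x).
    n, d = x.shape
    s = score_q(x)                                    # (n, d)
    diff = x[:, None, :] - x[None, :, :]              # (n, n, d), x_i - x_j
    sq = (diff ** 2).sum(-1)                          # (n, n)
    k = np.exp(-sq / (2 * h ** 2))                    # RBF kernel matrix
    term1 = (s @ s.T) * k                             # s(x)^T k(x,x') s(x')
    term2 = (s[:, None, :] * diff).sum(-1) * k / h**2   # s(x)^T grad_{x'} k
    term3 = -(diff * s[None, :, :]).sum(-1) * k / h**2  # grad_x k^T s(x')
    term4 = k * (d / h**2 - sq / h**4)                # trace of grad_x grad_{x'} k
    # Mean over all pairs (including the diagonal): a V-statistic estimate.
    return (term1 + term2 + term3 + term4).mean()
```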
For v ∈ L2, its Sobolev dual norm\n‖v‖H−1 is defined by (Evans, 2010)\n‖v‖H−1 := sup u∈H1\n{ 〈v, u〉L2 : ∫ Rd ‖∇u‖22 dP(x) ≤ 1, ∫ Rd u(x)dP(x) = 0 } .\nThe constraint ∫ Rd u(x)dx = 0 is necessary to guarantee the finiteness of the supremum, and the supermum can be equivalently taken over C∞0 ." }, { "heading": "3 PROPOSED MODEL: STEIN BRIDGING", "text": "In this section, we formulate our model Stein Bridging. A scheme of our framework is illustrated in Figure 1. Denote by Preal the underlying real distribution from which the data {x} are sampled. The formulation simultaneously learns two generative models – one explicit and one implicit – that represent estimates of Preal. The explicit generative model has a distribution PE on X with explicit probability density proportional to exp(−E(x)), x ∈ X , where E is referred to as an energy function. We focus on energy-based explicit model in model formulation as it\ndoes not enforce any constraints or assume specific density forms. For specifications, one can also consider other explicit models, like autoregressive models or directly using some density forms such as Gaussian distribution with given domain knowledge. The implicit model transforms an easy-tosample random noise z with distribution P0 via a generatorG to a sample x̃ = G(z) with distribution PG. Note that for distribution PE , we have its explicit density without normalizing term, while for PG and Preal, we have samples from two distributions. Hence, we can use the Stein discrepancy (that does not require the normalizing constant) as a measure of closeness between the explicit distribution PE and the real distribution Preal, and use the Wasserstein metric (that only requires only samples from two distributions) as a measure of closeness between the implicit distribution PG and the real data distribution Preal. To jointly learn the two generative models PG and PE , arguably the most straightforward way is to minimize the sum of the Stein discrepancy and the Wasserstein metric:\nmin E,G W(Preal,PG) + λS(Preal,PE),\nwhere λ ≥ 0. However, this approach appears no different than learning the two generative models separately. To achieve information sharing between two models, we incorporate another term S(PG,PE) – called the Stein bridge – that measures the closeness between the explicit distribution PE and the implicit distribution PG:\nmin E,G W(Preal,PG) + λ1S(Preal,PE) + λ2S(PG,PE), (3)\nwhere λ1, λ2 ≥ 0. The Stein bridge term in (3) pushes the two models to achieve a consensus. Remark 1. Our formulation is flexible in choosing both the implicit and explicit models. In (3), we can choose statistical distances other than the Wasserstein metricW(Preal,PG) to measure closeness between Preal and PG, such as Jensen-Shannon divergence, as long as its computation requires only samples from the involved two distributions. Hence, one can use GAN architectures other than WGAN to parametrize the implicit model. In addition, one can replace the first Stein discrepancy term S(Preal,PE) in (3) by other statistical distances as long as its computation is efficient and hence other explicit models can be used. For instance, if the normalizing constant of PE is known or easy to calculate, one can use Kullback-Leibler (KL) divergence.\nRemark 2. 
In our implementation, we parametrize the generator in the implicit model and the density estimator in the explicit model as Gθ(z) and pφ(x), respectively. The Wasserstein term in (3) is implemented using its equivalent dual representation in (2) with a parametrized critic Dψ(x). The two Stein terms in (3) can be implemented using (1) with either a Stein critic (parametrized as a neural network, i.e., fw(x)) or the non-parametric Kernel Stein Discrepancy. Our implementation iteratively updates the explicit and implicit models. Details on model specifications and optimization are in Appendix E.2. We also compare with some related works that attempt to combine both worlds (such as energy-based GAN, contrastive learning and cooperative learning) in Appendix A.3." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we theoretically show that the Stein bridge allows the two models to facilitate each other's training by imposing certain regularizations on both the implicit and the explicit models, as well as by stabilizing the training dynamics." }, { "heading": "4.1 REGULARIZATION VIA STEIN BRIDGE", "text": "We first show the regularization effect of the Stein bridge on the Wasserstein critic. Define the kernel Sobolev dual norm as\n$\|D\|_{H^{-1}(\mathbb{P}; k)} := \sup_{u \in C_0^{\infty}} \{ \langle D, u \rangle_{L^2(\mathbb{P})} : \mathbb{E}_{x, x' \sim \mathbb{P}}[\nabla u(x)^{\top} k(x, x') \nabla u(x')] \le 1, \ \mathbb{E}_{\mathbb{P}}[u] = 0 \},$\nwhich can be viewed as a kernel generalization of the Sobolev dual norm defined in Section 2; it reduces to the Sobolev dual norm when $k(x, x') = \mathbb{I}(x = x')$ and P is the Lebesgue measure.\nTheorem 1. Assume that {PG}G exhausts all continuous probability distributions and S is chosen as the kernel Stein discrepancy. Then problem (3) is equivalent to\n$\min_{E} \max_{D} \Big\{ \mathbb{E}_{y \sim \mathbb{P}_E}[D(y)] - \mathbb{E}_{x \sim \mathbb{P}_{real}}[D(x)] - \frac{1}{4\lambda_2} \|D\|^2_{H^{-1}(\mathbb{P}_E; k)} + \lambda_1 \mathcal{S}(\mathbb{P}_{real}, \mathbb{P}_E) \Big\}.$\nThe kernel Sobolev norm regularization penalizes large variation of the Wasserstein critic D. In particular, observe that (Villani, 2008) if $k(x, x') = \mathbb{I}(x = x')$ and $\mathbb{E}_{\mathbb{P}_E}[D] = 0$, then\n$\|D\|_{H^{-1}(\mathbb{P}_E; k)} = \lim_{\epsilon \to 0} \frac{\mathcal{W}_2((1 + \epsilon D)\mathbb{P}_E, \mathbb{P}_E)}{\epsilon},$\nwhere W2 denotes the 2-Wasserstein metric. Hence, the Sobolev dual norm regularization ensures that D does not change suddenly on high-density regions of PE, and thus reinforces the learning of the Wasserstein critic. The Stein bridge penalizes large variation of the Wasserstein critic, in the same spirit as, but of a different form than, gradient-based penalties (e.g., (Gulrajani et al., 2017; Roth et al., 2017)). It prevents the Wasserstein critic from becoming too strong during training and thus encourages mode exploration by the sample generator. To illustrate this, we conduct a case study where we train a generator on data sampled from a mixture of Gaussians (µ1 = [−1,−1], µ2 = [1, 1] and Σ = 0.2I). In Fig. 2(a) we compare the gradient norms of the Wasserstein critic when training the generator with and without the Stein bridge.
As we can see, the Stein bridge helps to reduce gradient norms throughout training, with an effect similar to WGAN-GP.\nMoreover, the Stein bridge also plays a part in smoothing the output of the Stein discrepancy, and we show this result in the following theorem.\nTheorem 2. Assume {PG}G exhausts all continuous probability distributions, and the Stein class defining the Stein discrepancy is compact (in some linear topological space). Then problem (3) is equivalent to\n$\min_{E} \Big\{ \lambda_1 \mathcal{S}(\mathbb{P}_{real}, \mathbb{P}_E) + \lambda_2 \max_{f} \mathbb{E}_{x \sim \mathbb{P}_{real}}[(\mathcal{A}_{\mathbb{P}_E} f)_{\lambda_2}(x)] \Big\},$\nwhere $(\mathcal{A}_{\mathbb{P}_E} f)_{\lambda_2}(\cdot)$ denotes the (generalized) Moreau-Yosida regularization of the function $\mathcal{A}_{\mathbb{P}_E} f$ with parameter λ2, i.e., $(\mathcal{A}_{\mathbb{P}_E} f)_{\lambda_2}(x) = \min_{y \in \mathcal{X}} \{\mathcal{A}_{\mathbb{P}_E} f(y) + \frac{1}{\lambda_2} \|x - y\|\}$.\nNote that $(\mathcal{A}_{\mathbb{P}_E} f)_{\lambda_2}$ is Lipschitz continuous with constant 1/λ2. Hence, the Stein bridge, together with the Wasserstein metric W(Preal,PG), acts as a Lipschitz regularization on the output of the Stein operator APE f via Moreau-Yosida regularization. This suggests a novel regularization scheme for Stein-based GANs. By smoothing the Stein critic, the Stein bridge encourages the energy model to seek more modes in the data instead of focusing on some dominant modes, thus alleviating the mode-collapse issue. To illustrate this, we consider a case where we have an energy model initialized at one mode center and data sampled from the distribution of another mode, as depicted in Fig. 2(b). Fig. 2(c) and 2(d) compare the Stein critics when using the Stein bridge and when not, respectively. The Stein bridge helps to smooth the Stein critic, as indicated by the less rapidly changing contour in Fig. 2(c) compared to Fig. 2(d), learned from the data and model distributions plotted in Fig. 2(b).\n4.2 TRAINING STABILITY\nIn this subsection, we further show that Stein Bridging can help stabilize the adversarial training between the generator and the Wasserstein critic, with a local convergence guarantee. As is known, training the minimax game in GANs is difficult. When using traditional gradient methods, training can suffer from oscillatory behaviors (Goodfellow, 2017; Liang & Stokes, 2019; Zhang & Yu, 2020). In order to better understand the optimization behaviors, we first compare the behaviors of WGAN, likelihood- and entropy-regularized WGAN, and our Stein Bridging under SGD via an easy-to-comprehend toy example in the one-dimensional case. Fig. 3 shows numerical results that compare the optimization behaviors of the above methods. As we can see, Stein Bridging achieves good convergence to the optimal point, while WGAN suffers from oscillation instead of converging. Entropy regularization (ER) can encourage the generator to seek more modes but makes the model diverge in this case. By contrast, likelihood regularization (LR) can help training stability, but it changes the convergence point to a biased distribution. A recently proposed variational annealing strategy (VA) (Tao et al., 2019) for regularized GANs introduces a trade-off between convergence and unbiased results. The detailed discussions and proofs are presented in Appendix D.1. We also generalize the convergence results to the multi-dimensional bilinear system $F(\psi, \theta) = \theta^{\top} A \psi - b^{\top}\theta - c^{\top}\psi$ in Appendix D.2. Our theoretical results indicate that Stein Bridging can stabilize the minimax training of GANs without changing their optimum. In the experiments, we will empirically validate our analysis.
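A tiny numerical illustration of the oscillation discussed above: simultaneous gradient descent-ascent on the bilinear game f(θ, ψ) = θψ spirals away from the equilibrium (0, 0) instead of converging, which is the failure mode the Stein bridge is shown to mitigate. The step size is an arbitrary illustrative choice.

```python
import numpy as np

theta, psi, lr = 1.0, 1.0, 0.1
for step in range(201):
    g_theta, g_psi = psi, theta            # grad wrt theta; grad wrt psi
    # theta minimizes, psi maximizes: simultaneous descent-ascent.
    theta, psi = theta - lr * g_theta, psi + lr * g_psi
    if step % 50 == 0:
        # The radius grows by sqrt(1 + lr^2) per step: outward spiral.
        print(step, round(np.hypot(theta, psi), 3))
```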
" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we conduct experiments1 to verify the effectiveness of the proposed method from multifaceted views.\n1The experiment code will be released.\nFigure 5: Comparison of generated sample quality. (a) True samples from the real distribution and (b)–(f) generated samples produced by the generators of different methods on the Two-Circle (upper row) and Two-Spiral (bottom row) datasets.\nWe consider two synthetic datasets with mixtures of Gaussian distributions: Two-Circle and Two-Spiral. The first is composed of 24 Gaussian mixtures that lie on two circles. This dataset extends the 8-Gaussian-mixture scenario widely used in previous papers, so that we can use it to test the quality of generated samples and the mode coverage of the learned energy. The second dataset consists of 100 Gaussian mixtures whose centers are densely arranged on two centrally symmetric spiral-shaped curves. This dataset can be used to examine the power of generative models on complicated data distributions. The ground-truth distributions and samples are shown in Fig. 4(a) and Fig. 5(a). Furthermore, we also apply the method to the MNIST and CIFAR datasets, which require the model to deal with high-dimensional image data. In each dataset, we use the observed samples as input to the model and leverage them to train our model. The details for each dataset are reported in Appendix E.1.\nWe term the model Joint-W if using the Wasserstein metric in (3) and Joint-JS if using the JS divergence in this section. We consider several competitors. For implicit generative models, we primarily consider the counterparts without joint training with the energy model, which are equivalent to vanilla GAN and WGAN with gradient penalty (Gulrajani et al., 2017), for an ablation study. Also, as a comparison to the new regularization effects of Stein Bridging, we consider a recently proposed variational annealing regularization (Tao et al., 2019) for GANs (abbreviated GAN+VA/WGAN+VA). We employ a denoising auto-encoder to estimate the gradient for the regularization penalty, as proposed by (Alain & Bengio, 2014). For explicit models, we also consider the counterparts without joint training with the generator model, i.e., directly training a Deep Energy Model (DEM) using Stein discrepancy (Grathwohl et al., 2020). Besides, we compare with energy-calibrated GAN (EGAN) (Dai et al., 2017) and the Deep Directed Generative (DGM) Model (Kim & Bengio, 2017), which adopt contrastive divergence to train a sample generator with an energy estimator. See Appendix A for a brief introduction to these methods and Appendix E.3 for implementation details." }, { "heading": "5.1 DENSITY ESTIMATION OF EXPLICIT MODEL", "text": "Mode Coverage for Complicated Distributions. One advantage of joint learning is that the generator can help the density estimator capture a more accurate distribution. As shown in the Two-Circle case in Fig. 5, both Joint-JS and Joint-W manage to capture all Gaussian components, while other methods miss some of the modes. In the Two-Spiral case in Fig. 4, Joint-JS and Joint-W exactly fit the ground-truth distribution. Nevertheless, DEM misses one spiral, while EGAN degrades to a uniform-like distribution. DGM manages to fit the two spirals but allocates high densities to regions that have low densities in the ground-truth distribution. As a quantitative comparison, we study three evaluation metrics: KL & JS divergence and the Area Under the Curve (AUC). The detailed information and results are given in Appendix E.4 and Table 5, respectively. The values show that Joint-W and Joint-JS provide better density estimation than all competitors by a large margin.
Density Rankings for High-Dimensional Digits. We also rank generated digits (and true digits) on MNIST w.r.t. the densities given by the energy model in Fig. 11, Fig. 12 and Fig. 13. As depicted in the figures, the digits with high densities (or low densities) given by Joint-JS possess sufficient diversity (the thickness, the inclination angles, and the shapes of the digits vary). By contrast, all the digits with high densities given by DGM tend to be thin, and the digits with low densities are very thick.\nAlso, for EGAN, digits with high (or low) densities appear to have the same inclination angle (for high densities, ‘1’ stays straight and ‘9’ ‘leans’ to the left, while for low densities, just the opposite), which indicates that DGM and EGAN tend to allocate high (or low) densities to data with certain modes and miss some modes that possess high densities in the ground-truth distribution. By contrast, our method manages to capture these complicated features of the data distribution.\nDetection of Out-of-distribution Samples. We further study model performance on the detection of out-of-distribution samples. We consider CIFAR-10 images as positive samples and construct negative samples by (I) flipping images, (II) adding random noise, (III) overlaying two images and (IV) using images from the LSUN dataset, respectively. A good density model trained on CIFAR-10 is expected to give high densities to positive samples and low densities to negative samples, with an exception for case (I) (flipped images are not exactly negative samples, and the model should give them high densities). We use the density values to rank samples and calculate the AUC of false positive rate vs. true positive rate, reported in Table 2. Our model Joint-W manages to distinguish samples for (II), (III), (IV) and is not fooled by flipped images, while DEM and EGAN fail to detect out-of-distribution samples and DGM recognizes flipped images as negative samples." }, { "heading": "5.2 SAMPLE QUALITY OF IMPLICIT MODEL", "text": "" }, { "heading": "Generated Samples over Synthetic Datasets.", "text": "Calibrating the explicit (unnormalized) density model with the implicit generator is expected to improve the quality of generated samples. In Fig. 5 we show the results of different generators on the Two-Circle and Two-Spiral datasets. In Two-Circle, a large number of generated samples given by GAN, WGAN-GP and DGM lie between two Gaussian components, and the boundary of each component is not distinguishable. Since the ground-truth densities of the regions between two components are very low, such generated samples are of low quality, which shows that these models capture combinations of two dominant features (i.e., modes) in the data, but such combinations make no sense in practice. By contrast, Joint-JS and Joint-W alleviate this issue, reduce the number of low-quality samples, and produce more distinguishable boundaries. In Two-Spiral, similarly, the generated samples given by GAN and WGAN-GP form a circle instead of two spirals, while the samples of DGM ‘link’ the two spirals. Joint-JS manages to focus more on the true high densities compared to GAN, and Joint-W provides the best results. To quantitatively measure sample quality, we adopt the Maximum Mean Discrepancy (MMD) and the High-quality Sample Rate (HSR). The details are in Appendix E.4, and we report results in Table 5, where our models significantly outperform the competitors by a large margin.
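For reference, a minimal sketch of a (biased, V-statistic) MMD estimate between generated and true samples with an RBF kernel; the bandwidth and names are illustrative assumptions.

```python
import numpy as np

def mmd_rbf(x, y, h=1.0):
    # x: (n, d) generated samples, y: (m, d) true samples.
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * h ** 2))
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```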
Sample Quality for Generated Images. We calculate the Inception Score (IS) and Fréchet Inception Distance (FID) to measure sample quality on CIFAR-10. As shown in Table 1, Joint-W outperforms the other competitors by 0.2 and achieves a 5.6% improvement over WGAN-GP w.r.t. IS. As for FID, Joint-W slightly outperforms WGAN-GP and beats energy-based GAN and variational-annealing-regularized WGAN by a large margin. One possible reason is that these methods both use entropy regularization, which encourages diversity of generated samples but has a negative effect on sample quality. Stein Bridging can overcome this issue via joint training with the explicit model. The performance of DGM tends to be much worse than the others. In practice, DGM is hard to converge and suffers from severe instability in training.\nModel Performance with Contaminated or Limited Data. As a further discussion, we highlight that Stein Bridging has promising power in some extreme cases where the training samples are contaminated or limited. We consider a noised-data scenario and randomly add n noise points sampled from a Gaussian distribution N(0, σ0 I), where σ0 = 2, to the original true samples in the Two-Circle dataset. The results on the noised dataset are presented in Fig. 8(a), where we set the noise ratio n = [40, 100, 160, 300, 400, 600, 800, 1000] and report the HSRs of Joint-W and WGAN-GP. The noise ratio in the data impacts the performance of both WGAN-GP and Joint-W, but comparatively, the performance decline of Joint-W is less significant than that of WGAN-GP, which indicates better robustness of joint training w.r.t. noised data.\nTable 2: AUCs for out-of-distribution sample detection on CIFAR-10. We use negative samples from (I) flipped images, (II) added random noise, (III) overlaid images and (IV) images from the LSUN dataset.\nAUC | I | II | III | IV\nJoint-W | 0.50 | 0.92 | 0.95 | 0.85\nDEM | 0.50 | 0.52 | 0.51 | 0.56\nDGM | 1.00 | 1.00 | 1.00 | 0.82\nEGAN | 0.50 | 0.42 | 0.30 | 0.52\nTo study the impact of insufficient data, in Fig. 8(b), we consider sample sizes N2 of [100, 200, 300, 500, 700, 1000, 2000] in the Two-Spiral dataset and report the AUC of Joint-W and DEM. When the sample size decreases from 2000 to 100, the AUC value of DEM declines dramatically, showing its dependence on sufficient training samples. By contrast, the AUC of Joint-W exhibits a small decline when the sample size is more than 500 and an obvious decline only when it is less than 300. This phenomenon demonstrates its lower sensitivity to data size." }, { "heading": "5.3 ENHANCING THE STABILITY OF GAN", "text": "Joint training also helps to stabilize the training dynamics. In Fig. 6 we present the learning curves of Joint-W (resp. Joint-JS) compared with WGAN (resp. GAN) and likelihood- and entropy-regularized WGAN (resp. GAN). The curves show that joint training can reduce the variance of metric values, especially during the second half of training. Furthermore, we visualize generated digits given by the same noise z in adjacent epochs in Fig. 7. The results show that Joint-W gives more stable generation across adjacent epochs, while generated samples given by WGAN-GP and WGAN+VA exhibit an obvious variation. In particular, some digits generated by WGAN-GP and WGAN+VA change from one class to another, which is quite similar to the oscillation without convergence discussed in Section 4.2. To quantify the bias in model distributions, we calculate the distances between the means of 50000 generated digits (resp. images) and 50000 true digits (resp. images) on MNIST (resp. CIFAR-10). The results are reported in Table 4. We can see that the model distributions of the other competitors are more biased from the true data distribution, compared with Joint-W.
}, { "heading": "6 CONCLUSIONS", "text": "In this paper, we aim at uniting the training for implicit generative model (represented by GAN or WGAN) and explicit generative model (represented by a deep energy-based model) via an bridging term of Stein discrepancy between the generator and the energy-based density estimator. Theoretically, we show that joint training could i) enforce dual regularization effects on both models and thus encourage mode exploration, and ii) help to facilitate the convergence of minimax training dynamics. We also conduct extensive experiments on different tasks and applications to verify our theoretical findings as well as demonstrate the superiority of our method compared with training generator models or energy-based models alone. Our formulation is flexible in handling various implicit or explicit models. As such, for future works, one can try other generative models such as VAE or flowed-based model as replacement for our GAN and energy-based models. It would also be interesting to exploit our formulation in the context of few-shot learning in generative models." }, { "heading": "A LITERATURE REVIEWS", "text": "We discuss some of related literature and shed lights on the relationship between our work with others." }, { "heading": "A.1 EXPLICIT GENERATIVE MODELS", "text": "Explicit generative models are interested in fitting each instance with a scalar (unnormalized) density expected to explicitly capture the distribution behind data. Such densities are often up to a constant and called as energy functions which are common in undirected graphical models (LeCun et al., 2006). Hence, explicit generative models are also termed as energy-based models. An early version of energy-based models is the FRAME (Filters, Random field, And Maximum Entropy) model (Zhu et al., 1997; Wu et al., 2000). Later on, some works leverage deep neural networks to model the energy function (Ngiam et al., 2011; Xie et al., 2016b) and pave the way for researches on deep energy model (DEM) (e.g., (Liu & Wang, 2017; Kim & Bengio, 2017; Zhai et al., 2016; Haarnoja et al., 2017; Du & Mordatch, 2019; Nijkamp et al., 2019)). Apart from DEM, there are also some other forms of deep explicit models based on restricted Boltzmann machines like deep belief networks (Hinton et al., 2006) and deep Boltzmann machines (Salakhutdinov & Hinton, 2009).\nThe normalized constant under the energy function requires an intractable integral over all possible instances, which makes the model hard to learn via Maximum Likelihood Estimation (MLE). To solve this issue, some works propose to approximate the constant by MCMC methods (Geman & Geman, 1984; Neal, 2011). However, MCMC requires an inner-loop samples in each training, which induces high computational costs. Another solution is to optimize an alternate surrogate loss function. For example, contrastive divergence (CD) (Liu & Wang, 2017) is proposed to measure how much KL divergence can be improved by running a small numbers of Markov chain steps towards the intractable likelihood, while score matching (SM) (Hyvärinen, 2005) detours the constant by minimizing the distance for gradients of log-likelihoods. A recent study (Grathwohl et al., 2020) uses Stein discrepancy to train unnormalized model. The Stein discrepancy does not require the normalizing constant and makes the training tractable. Moreover, the intractable normalized constant makes it hard to sample from. 
To obtain accurate samples from unnormalized densities, many studies propose to approximate the generation by diffusion-based processes, like generative flow (Nguyen et al., 2017) and variational gradient descent (Liu & Wang, 2016). Also, a recent work (Hu et al., 2018) leverages Stein discrepancy to design a neural sampler for unnormalized densities. The fundamental disadvantage of explicit models is that energy-based learning struggles to accurately capture the distribution of true samples due to the low-dimensional manifold structure of real-world instances (Liu & Wang, 2017).\nA.2 IMPLICIT GENERATIVE MODELS\nImplicit generative models focus on a generation mapping from random noise to generated samples. Such a mapping function is often called the generator and possesses better flexibility compared with explicit models. Two typical implicit models are the Variational Auto-Encoder (VAE) (Kingma & Welling, 2014) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014). VAE introduces a latent variable and attempts to maximize a variational lower bound on the likelihood of the joint distribution of the latent and observable variables, while GAN targets an adversarial game between the generator and a discriminator (or critic in WGAN) that aims at discriminating the generated and true samples. In this paper, we focus on GAN and its variants (e.g., WGAN (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), DCGAN (Radford et al., 2016), etc.) as the implicit generative model and we leave the discussion of VAE as future work.\nTwo important issues concerning GAN and its variants are instability of training and local optima. The typical local optima for GAN can be divided into two categories: mode-collapse (the model fails to capture all the modes in the data) and mode-redundance (the model generates modes that do not exist in the data). Recently there have been many attempts to solve these issues from various perspectives. One perspective is regularization. Two typical regularization methods are likelihood-based and entropy-based regularization, with the prominent examples (Warde-Farley & Bengio, 2017) and (Li & Turner, 2018) respectively leveraging denoising feature matching and implicit gradient approximation to enforce the regularization constraints. The likelihood and entropy regularizations could respectively help the generator focus on the data distribution and encourage more diverse samples, and a recent work (Tao et al., 2019) uses Langevin dynamics to show that i) the entropy and likelihood regularizations are equivalent and share an opposite relationship in mathematics, and ii) both regularizations make the model converge to a surrogate point with a bias from the original data distribution. (Tao et al., 2019) then proposes a variational annealing strategy to empirically unite the two regularizations and tackle the biased distributions.\nTo deal with the instability issue, there are also some recent works from the optimization perspective that propose different algorithms to address the non-convergence of minimax game optimization (for instance, (Gemp & Mahadevan, 2018; Liang & Stokes, 2019; Gidel et al., 2019)). Moreover, the disadvantage of implicit models is the lack of explicit densities over instances, which prevents the black-box generator from characterizing the distributions behind the data." }, { "heading": "A.3 ATTEMPTS TO COMBINE BOTH OF THE WORLDS", "text": "Recently, several studies have attempted to combine explicit and implicit generative models in different ways.
For instance, (Zhao et al., 2017) proposes energy-based GAN, which leverages an energy model as the discriminator to distinguish generated from true samples. A similar idea is also used by (Kim & Bengio, 2017) and (Dai et al., 2017), which let the discriminator estimate a scalar energy value for each sample. Such a discriminator is optimized to give high energy to generated samples and low energy to true samples, while the generator aims at generating samples with low energy. The fundamental difference is that (Zhao et al., 2017) and (Dai et al., 2017) both aim at minimizing the discrepancy between the distributions of generated and true samples, while the motivation of (Kim & Bengio, 2017) is to minimize the KL divergence between the estimated densities and true samples. (Kim & Bengio, 2017) adopts contrastive divergence (CD) to link MLE for the energy model over true data with the adversarial training of energy-based GAN. However, both the CD-based method and energy-based GAN have limited power for both the generator and the discriminator. First, if the generated samples resemble true samples, then the gradients for the discriminator given by true and generated samples are exactly opposite and counteract each other, and training stops before the discriminator captures the accurate data distribution. Second, since the objective boils down to minimizing the KL divergence (for (Kim & Bengio, 2017)) or Wasserstein distance (for (Dai et al., 2017)) between the model and true distributions, the issues concerning GAN (or WGAN), like training instability and mode-collapse, also afflict these methods.\nAnother way of combining the two is cooperative training. (Xie et al., 2016a) (and its improved version (Xie et al., 2018)) leverages the samples of the generator as the MCMC initialization for the energy-based model. The synthesized samples produced by finite-step MCMC are closer to the energy model, and the generator is optimized to make the finite-step MCMC revise its initial samples. Also, a recent work (Du et al., 2018) proposes to regard the explicit model as a teacher net that guides the training of the implicit generator as a student net, helping it produce samples that overcome the mode-collapse issue. The main drawback of cooperative training is that it indirectly optimizes the discrepancy between the generator and the data distribution via the energy model as a ‘mediator’, so that once the energy model gets stuck in a local optimum (e.g., mode-collapse or mode-redundance), the training of the generator is affected as well. In other words, the training of the two models would constrain rather than exactly compensate each other. Different from existing methods, our model considers three discrepancies simultaneously as a triangle to jointly train the generator and the estimator, enabling them to compensate and reinforce each other." }, { "heading": "B BACKGROUND FOR STEIN DISCREPANCY", "text": "Assume q(x) is a continuously differentiable density supported on $X \subset \mathbb{R}^d$ and $f : \mathbb{R}^d \to \mathbb{R}^{d'}$ a smooth vector-valued function. Define $\mathcal{A}_q[f(x)] = \nabla_x \log q(x) f(x)^\top + \nabla_x f(x)$ as the Stein operator. If f belongs to a Stein class (satisfying some mild boundary conditions), then we have the following Stein identity:\n$\mathbb{E}_{x\sim q}[\mathcal{A}_q[f(x)]] = \mathbb{E}_{x\sim q}[\nabla_x \log q(x) f(x)^\top + \nabla_x f(x)] = 0.$
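As an illustrative aside (our own check, not part of the cited background), the Stein identity is easy to verify numerically in one dimension. Here we take q = N(0, 1), so that ∇x log q(x) = −x, and f = tanh as a bounded smooth test function:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)   # samples from q = N(0, 1); grad_x log q(x) = -x

f = np.tanh(x)                       # a smooth, bounded test function in the Stein class
df = 1.0 - np.tanh(x) ** 2           # its derivative

# Monte Carlo estimate of E_q[ grad_x log q(x) * f(x) + f'(x) ]; should be ~ 0.
print(np.mean(-x * f + df))
```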
Such a property induces the Stein discrepancy between distributions P : p(x) and Q : q(x), x ∈ X:\n$S(\mathbb{Q},\mathbb{P}) = \sup_{f\in\mathcal{F}} \Gamma\big(\mathbb{E}_{x\sim q}[\mathcal{A}_p[f(x)]]\big) = \sup_{f\in\mathcal{F}} \Gamma\big(\mathbb{E}_{x\sim q}[\nabla_x \log p(x) f(x)^\top + \nabla_x f(x)]\big),$ (4)\nwhere f is what we call the Stein critic, ranging over a function space F; if F is large enough, then S(Q,P) = 0 if and only if Q = P. Note that in (1) we do not need the normalizing constant of p(x), which enables the Stein discrepancy to handle unnormalized densities.\nIf F is the unit ball of a Reproducing Kernel Hilbert Space (RKHS) with a positive definite kernel function k(·,·), then the supremum in (1) has a closed form (see (Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2017) for more details):\n$S_K(\mathbb{Q},\mathbb{P}) = \mathbb{E}_{x,x'\sim q}[u_p(x,x')],$ (5)\nwhere $u_p(x,x') = \nabla_x \log p(x)^\top k(x,x') \nabla_{x'} \log p(x') + \nabla_x \log p(x)^\top \nabla_{x'} k(x,x') + \nabla_x k(x,x')^\top \nabla_{x'} \log p(x') + \mathrm{tr}(\nabla_{x,x'} k(x,x'))$. Equation (5) gives the Kernel Stein Discrepancy (KSD)." }, { "heading": "C PROOFS OF RESULTS IN SECTION 4.1", "text": "" }, { "heading": "C.1 PROOF OF THEOREM 1", "text": "Proof. Applying Kantorovich's duality to $W(\mathbb{P}_G,\mathbb{P}_{real})$ and using the exhaustiveness assumption on the generator, we rewrite the problem as\n$\min_{E,\mathbb{P}} \max_D \{\mathbb{E}_{\mathbb{P}}[D] - \mathbb{E}_{\mathbb{P}_{real}}[D] + \lambda_1 S(\mathbb{P}_{real},\mathbb{P}_E) + \lambda_2 S(\mathbb{P},\mathbb{P}_E)\},$ (6)\nwhere the minimization with respect to E is over all energy functions, the minimization with respect to P is over all probability distributions with continuous density, and the maximization with respect to D is over all 1-Lipschitz continuous functions. Recall the definition of kernel Stein discrepancy\n$S(\mathbb{P},\mathbb{P}_E) = \mathbb{E}_{x,x'\sim\mathbb{P}}\big[\big(\nabla_x \log \tfrac{d\mathbb{P}}{d\mathbb{P}_E}(x)\big)^\top k(x,x')\, \nabla_{x'} \log \tfrac{d\mathbb{P}}{d\mathbb{P}_E}(x')\big],$\nwhere $d\mathbb{P}/d\mathbb{P}_E$ is the Radon-Nikodym derivative. Observe that $S(\mathbb{P},\mathbb{P}_E)$ is infinite if P is not absolutely continuous with respect to $\mathbb{P}_E$. Hence, to minimize the objective of (6), it suffices to consider those P's that are absolutely continuous with respect to $\mathbb{P}_E$. Introducing the variable replacement $h(x) := d\mathbb{P}/d\mathbb{P}_E(x) - 1$, problem (6) becomes\n$\min_{E,h} \max_D \big\{ \mathbb{E}_{\mathbb{P}_E}[(1+h)D] - \mathbb{E}_{\mathbb{P}_{real}}[D] + \lambda_1 S(\mathbb{P}_{real},\mathbb{P}_E) + \lambda_2\, \mathbb{E}_{x,x'\sim\mathbb{P}}[\nabla_x \log(1+h(x))^\top k(x,x') \nabla_{x'} \log(1+h(x'))] \big\},$ (7)\nwhere the minimization with respect to h is over all $L^1(\mathbb{P}_E)$ functions with $\mathbb{P}_E$-expectation zero.\nFixing E, we claim that we can swap $\min_h$ and $\max_D$. Indeed, without loss of generality, we can restrict D to be such that $D(x_0) = 0$ for some element $x_0$, as a constant shift does not change the value of $\mathbb{E}_{\mathbb{P}_E}[(1+h)D] - \mathbb{E}_{\mathbb{P}_{real}}[D]$. The set of Lipschitz functions that vanish at $x_0$ is a Banach space, and the set of 1-Lipschitz functions is compact (Weaver, 1999). Moreover, $L^1(\mathbb{P}_E)$ is also a Banach space and the objective function is linear in both h and D. The above verifies the conditions of Sion's minimax theorem, and thus the claim is proved.\nSwapping $\min_h$ and $\max_D$ in (7) and fixing E and D, we consider\n$\min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \{\mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2\, \mathbb{E}_{x,x'\sim\mathbb{P}}[\nabla_x \log(1+h(x))^\top k(x,x') \nabla_{x'} \log(1+h(x'))]\}$\n$= \min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \big\{ \mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2\, \mathbb{E}_{x,x'\sim\mathbb{P}}\big[ \tfrac{\nabla_x h(x)^\top}{1+h(x)}\, k(x,x')\, \tfrac{\nabla_{x'} h(x')}{1+h(x')} \big] \big\}$\n$= \min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \big\{ \mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2\, \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')] \big\},$\nwhere the first equality follows from the chain rule of the derivative, and the second equality follows from the change of measure $d\mathbb{P} = (1+h)\,d\mathbb{P}_E$.
Introducing an auxiliary variable r such that $r^2$ is an upper bound of $\mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')]$, we have\n$\min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \{\mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2\, \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')]\}$\n$= \min_{r\ge 0} \min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \{\mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2 r^2 \;:\; \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')] \le r^2\}$\n$= \min_{r\ge 0} \min_{h:\,\mathbb{E}_{\mathbb{P}_E}[h]=0} \{r\,\mathbb{E}_{\mathbb{P}_E}[hD] + \lambda_2 r^2 \;:\; \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')] \le 1\}$\n$= \min_{r\ge 0} \{\lambda_2 r^2 - r\,\|D\|_{H^{-1}(\mathbb{P}_E;k)}\}$\n$= -\tfrac{1}{4\lambda_2}\, \|D\|^2_{H^{-1}(\mathbb{P}_E;k)},$\nwhere the first equality holds because $\mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x (rh)(x)^\top k(x,x') \nabla_{x'} (rh)(x')] = r^2\, \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')]$ for all $r \ge 0$, so the constraint can be introduced with the auxiliary variable $r^2 = \mathbb{E}_{x,x'\sim\mathbb{P}_E}[\nabla_x h(x)^\top k(x,x') \nabla_{x'} h(x')]$; the second equality follows from the change of variable from h to rh; and the third equality follows from the definition of the kernel Sobolev dual norm. Plugging back into (7) yields the desired result." }, { "heading": "C.2 PROOF FOR THEOREM 2", "text": "Proof. Applying the definition of Stein discrepancy to $S(\mathbb{P}_E,\mathbb{P}_G)$ and under the exhaustiveness assumption on G, we rewrite the problem as\n$\min_{E,\mathbb{P}} \max_f \{\lambda_1 S(\mathbb{P}_{real},\mathbb{P}_E) + \lambda_2\,\mathbb{E}_{y\sim\mathbb{P}}[\mathcal{A}_{\mathbb{P}_E} f(y)] + W(\mathbb{P}_{real},\mathbb{P})\},$\nwhere the minimization with respect to E is over the set of all energy functions, the minimization with respect to P is over all continuous distributions, and the maximization with respect to f is over the Stein class for $\mathbb{P}_E$. Let us fix E. Using a similar argument as in the proof of Theorem 1, it suffices to restrict P to the set of distributions that are absolutely continuous with respect to $\mathbb{P}_E$, which can be identified with the set of $L^1(\mathbb{P}_E)$ functions with $\mathbb{P}_E$-mean zero and is thus Banach. Together with the compactness assumption on the Stein class, using Sion's minimax theorem we can swap the minimization over P and the maximization over f. Now, fixing further f, consider\n$\min_{\mathbb{P}} \{\lambda_2\,\mathbb{E}_{y\sim\mathbb{P}}[\mathcal{A}_{\mathbb{P}_E} f(y)] + W(\mathbb{P}_{real},\mathbb{P})\}.$ (8)\nRecall the definition of the Wasserstein metric\n$W(\mathbb{P}_{real},\mathbb{P}) = \min_\gamma \mathbb{E}_{(x,y)\sim\gamma}[\|x-y\|],$\nwhere the minimization is over all joint distributions of (x, y) with x-marginal $\mathbb{P}_{real}$ and y-marginal P. We rewrite problem (8) as\n$\min_{\mathbb{P},\gamma}\, \mathbb{E}_{(x,y)\sim\gamma}[\lambda_2\, \mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\|],$\nwhere γ has marginals $\mathbb{P}_{real}$ and P. Since P is unconstrained, the above problem is further equivalent to\n$\min_{\gamma}\, \mathbb{E}_{(x,y)\sim\gamma}[\lambda_2\, \mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\|],$\nwhere the minimization is over all joint distributions of (x, y) with x-marginal $\mathbb{P}_{real}$. Using the law of total expectation, the problem above is equivalent to\n$\min_{\{\gamma_x\}_{x\in\mathrm{supp}\,\mathbb{P}_{real}}} \mathbb{E}_{x\sim\mathbb{P}_{real}}\big[\mathbb{E}_{y\sim\gamma_x}[\lambda_2\,\mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\| \mid x]\big]$\n$= \mathbb{E}_{x\sim\mathbb{P}_{real}}\big[\min_{\gamma_x} \mathbb{E}_{y\sim\gamma_x}[\lambda_2\,\mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\| \mid x]\big]$\n$= \mathbb{E}_{x\sim\mathbb{P}_{real}}\big[\min_{y\in X} \{\lambda_2\,\mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\|\}\big],$\nwhere the minimization in the first line is over $\gamma_x$, the set of all conditional distributions of y given x, with x ranging over the support $\mathrm{supp}\,\mathbb{P}_{real}$; the exchange of min and E in the first equality follows from the interchangeability principle (Shapiro et al., 2009); and the second equality holds because the infimum can be restricted to the set of point masses. Finally, the original problem is equivalent to\n$\min_E \max_f \big\{\lambda_1 S(\mathbb{P}_{real},\mathbb{P}_E) + \mathbb{E}_{x\sim\mathbb{P}_{real}}\big[\min_{y\in X}\{\lambda_2\,\mathcal{A}_{\mathbb{P}_E} f(y) + \|x-y\|\}\big]\big\}.$\nTherefore, the proof is completed using the definition of Moreau-Yosida regularization." }, { "heading": "D DETAILS AND PROOFS IN SECTION 4.2", "text": "" }, { "heading": "D.1 DISCUSSIONS ON ONE-DIMENSIONAL CASE", "text": "Training the minimax game in GAN is difficult. When using traditional gradient methods, training suffers from oscillatory behaviors (Goodfellow, 2017; Liang & Stokes, 2019).
In order to better understand the optimization behavior, we first study a one-dimensional linear system that provides some insight into this problem. Such a toy example (or a similar one) is also utilized by (Gidel et al., 2019; Nagarajan & Kolter, 2017) to shed light on the instability of WGAN training.^2 Consider a linear critic $D_\psi(x) = \psi x$ and generator $G_\theta(z) = \theta z$. Then the Wasserstein GAN objective can be written as a constrained bilinear problem, $\min_\theta \max_{|\psi|\le 1} \psi\,\mathbb{E}[x] - \psi\theta\,\mathbb{E}[z]$, which can be further simplified to an unconstrained version (the behaviors generalize to multi-dimensional cases (Gidel et al., 2019)):\n$\min_\theta \max_\psi\; \psi - \psi\theta.$ (9)\nUnfortunately, even this simple objective cannot be solved reliably by traditional gradient methods like SGD with alternate updating:^3 $\theta_{k+1} = \theta_k + \eta\psi_k$, $\psi_{k+1} = \psi_k + \eta(1-\theta_{k+1})$. Such optimization suffers from an oscillatory behavior, i.e., the updated parameters circle around the optimum point ($[\psi^*, \theta^*] = [0, 1]$) without converging to the center, as shown in Fig. 3(a). A recent study (Liang & Stokes, 2019) theoretically shows that such oscillation is due to the interaction term in (9).\nOne solution to the instability of GAN training is to add a (likelihood) regularization, which has been widely studied in the recent literature (Warde-Farley & Bengio, 2017; Li & Turner, 2018). With a regularization term, the objective becomes $\min_\theta \max_{|\psi|\le 1} \psi\,\mathbb{E}[x] - \psi\theta\,\mathbb{E}[z] - \lambda\,\mathbb{E}[\log\mu(\theta z)]$, where $\mu(\cdot)$ denotes the likelihood function and λ is a hyperparameter. A recent study (Tao et al., 2019) proves that when λ < 0 (likelihood regularization), the extra term is equivalent to maximizing sample evidence, helping to stabilize GAN training; when λ > 0 (entropy regularization), the extra term maximizes sample entropy, which encourages diversity of the generator. Here we consider a Gaussian likelihood function for a generated sample x', $\mu(x') = \exp(-\tfrac{1}{2}(x'-b)^2)$, defined up to a constant. Its parameter can be estimated as $b = \mathbb{E}[x]$. Then for a generated sample $x' = \theta z$, we have $\mathbb{E}[\log\mu(\theta z)] = -\tfrac{1}{2}\mathbb{E}[z^2]\theta^2 + \mathbb{E}[z]\mathbb{E}[x]\theta - \tfrac{1}{2}\mathbb{E}[x]^2$. As in the WGAN case, we consider $\mathbb{E}[x] = \mathbb{E}[z] = 1$. Assume $\mathrm{Var}[z] = 1$, so that $\mathbb{E}[z^2] = \mathrm{Var}[z] + \mathbb{E}[z]^2 = 2$. Hence, for the analysis of likelihood- (and entropy-) regularized WGAN, we can study the following system:\n$\min_\theta \max_\psi\; \psi - \psi\theta - \lambda(\theta^2 - \theta).$ (10)\nWhen λ = 0, the above objective degrades to (9); when λ < 0 (likelihood regularization), the gradient of the regularization term pushes θ to shrink, which helps convergence; when λ > 0 (entropy regularization), the added term forms an amplifying force on θ and leads to divergence. Another issue with likelihood regularization is that the extra term changes the optimum point and makes the model converge to a biased distribution, as proved by (Tao et al., 2019). In this case, one can verify that the optimum point becomes $[\psi^*, \theta^*] = [-\lambda, 1]$, resulting in a bias. To avoid this issue, (Tao et al., 2019) proposes to gradually decrease |λ| through training. However, such a method also gets stuck in oscillation when |λ| gets close to zero, as shown in Fig. 3(a).\nFinally, consider our proposed model. We also simplify the density estimator to a basic energy model $p_\phi(x) = \exp(-\tfrac{1}{2}x^2 - \phi x)$, whose score function is $\nabla_x \log p_\phi(x) = -x - \phi$.\n^2 Our theoretical discussions focus on WGAN, and we also compare with the original GAN in the experiments. ^3 Here, we adopt the most widely used alternate updating strategy.
The simultaneous updating, i.e., $\theta_{k+1} = \theta_k + \eta\psi_k$ and $\psi_{k+1} = \psi_k + \eta(1-\theta_k)$, would diverge in this case.\nThen, if we specify the two Stein discrepancies in (3) as KSD with the kernel $k(x_1,x_2) = \mathbb{I}(x_1 = x_2)$, we obtain $S(\mathbb{P}_{real},\mathbb{P}_E) = \mathbb{E}_{x_1,x_2}[(\nabla_{x_1}\log p_\phi(x_1) - \nabla_{x_1}\log\mu(x_1))\, k(x_1,x_2)\, (\nabla_{x_2}\log p_\phi(x_2) - \nabla_{x_2}\log\mu(x_2))] = \mathbb{E}_x[(\nabla_x\log p_\phi(x) - \nabla_x\log\mu(x))^2] = (\phi + \mathbb{E}[x])^2$. Similarly, one can obtain $S(\mathbb{P}_G,\mathbb{P}_E) = (\phi + \theta\,\mathbb{E}[z])^2$. Therefore we arrive at the objective in (11):\n$\min_\theta \max_\psi \min_\phi\; \psi - \psi\theta + \tfrac{\lambda_1}{2}(1+\phi)^2 + \tfrac{\lambda_2}{2}(\theta+\phi)^2.$ (11)\nInterestingly, for all $\lambda_1, \lambda_2$, the optimum remains the same: $[\psi^*, \theta^*, \phi^*] = [0, 1, -1]$. We now show that the optimization is guaranteed to converge to $[\psi^*, \theta^*, \phi^*]$.\nProposition 1. Alternate SGD on (11) geometrically decreases the squared norm $N_t = |\psi_t|^2 + |\theta_t - 1|^2 + |\phi_t + 1|^2$ for any $0 < \eta < 1$ with $\lambda_1 = \lambda_2 = 1$:\n$N_{t+1} = (1 - \eta^2(1-\eta)^2)\,N_t.$ (12)\nProof. Instead of directly studying the optimization of (11), we first prove that the following problem converges to its unique optimum:\n$\min_\theta \max_\psi \min_\phi\; \theta\psi + \theta\phi + \tfrac{1}{2}\theta^2 + \phi^2.$ (13)\nApplying alternate SGD we have the iterations\n$\psi_{t+1} = \psi_t + \eta\theta_t,$\n$\phi_{t+1} = \phi_t - \eta(\theta_t + 2\phi_t) = (1-2\eta)\phi_t - \eta\theta_t,$\n$\theta_{t+1} = \theta_t - \eta(\psi_{t+1} + \phi_{t+1} + \theta_t) = -\eta(1-2\eta)\phi_t + (1-\eta)\theta_t - \eta\psi_t.$\nWe thus obtain the relationship between adjacent iterates:\n$(\psi_{t+1}, \phi_{t+1}, \theta_{t+1})^\top = M\,(\psi_t, \phi_t, \theta_t)^\top, \quad M = \begin{pmatrix} 1 & 0 & \eta \\ 0 & 1-2\eta & -\eta \\ -\eta & -\eta(1-2\eta) & 1-\eta \end{pmatrix}.$\nWe further compute the eigenvalues of the matrix M, which satisfy (writing the eigenvalue as λ)\n$(\lambda-1)^3 + 3\eta(\lambda-1)^2 + 2\eta^2(1+\eta)(\lambda-1) + 2\eta^3 = 0.$\nOne can verify that the solutions of this equation satisfy $|\lambda| < \sqrt{(1-\eta+\eta^2)(1+\eta-\eta^2)}$. We then have\n$\|(\psi_{t+1},\phi_{t+1},\theta_{t+1})\|_2^2 = (\psi_t,\phi_t,\theta_t)\, M^\top M\, (\psi_t,\phi_t,\theta_t)^\top \le \lambda_m^2\, \|(\psi_t,\phi_t,\theta_t)\|_2^2,$\nwhere $\lambda_m$ denotes the eigenvalue of M with the maximum absolute value. Hence,\n$\psi_{t+1}^2 + \phi_{t+1}^2 + \theta_{t+1}^2 \le (1-\eta+\eta^2)(1+\eta-\eta^2)\,[\psi_t^2 + \phi_t^2 + \theta_t^2].$\nWe then map (11) onto (13) by a change of variables: replacing ψ, φ and θ in (13) by ψ', φ' and θ' respectively, let $\theta' = 1 - \theta$ and $\phi' = -1 - \phi$. This yields the conclusion of the proposition.\nAs shown in Fig. 3(a), Stein Bridging converges to the correct optimum; a numerical sketch of both toy systems is given below. Compared with (9), the objective (11) adds a new bilinear term $\phi\cdot\theta$, which acts as a connection between the generator and the estimator, and two quadratic terms, which penalize growth of the parameter values during training. The added and original terms in (11) cooperate to guarantee convergence to a unique optimum. In fact, the added terms $\tfrac{\lambda_1}{2}(1+\phi)^2 + \tfrac{\lambda_2}{2}(\theta+\phi)^2$ in (11) and the original terms $\psi - \psi\theta$ from WGAN both play necessary roles in guaranteeing convergence to the unique optimum $[\psi^*, \theta^*, \phi^*] = [0, 1, -1]$. If we remove the critic and optimize θ and φ with the remaining loss terms, the training converges, but not necessarily to $[\psi^*, \theta^*] = [0, 1]$ (since the optimum points are not unique in that case). On the other hand, if we remove the estimator, the system degrades to (9) and does not converge to the unique optimum point $[\psi^*, \theta^*] = [0, 1]$. If we keep both worlds and optimize the three terms together, the training converges to the unique global optimum $[\psi^*, \theta^*, \phi^*] = [0, 1, -1]$." }, { "heading": "D.2 GENERALIZATION TO BILINEAR SYSTEMS", "text": "Our analysis of the one-dimensional case suggests that we can add affiliated variables to modify the objective and stabilize the training of general bilinear systems.
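Before generalizing, the qualitative contrast between (9) and (11) is easy to reproduce numerically (our own sketch, using alternate SGD exactly as in the updates written above, with λ1 = λ2 = 1):

```python
import numpy as np

eta, T = 0.1, 2000

psi, theta = 1.0, 0.0                 # system (9): min_theta max_psi  psi - psi * theta
psi2, theta2, phi2 = 1.0, 0.0, 0.0    # system (11) with lambda1 = lambda2 = 1

for _ in range(T):
    # alternate SGD on (9): theta takes a descent step, then psi an ascent step
    theta = theta + eta * psi
    psi = psi + eta * (1.0 - theta)
    # alternate SGD on (11): psi ascends, then phi and theta descend
    psi2 = psi2 + eta * (1.0 - theta2)
    phi2 = phi2 - eta * ((1.0 + phi2) + (theta2 + phi2))
    theta2 = theta2 - eta * (-psi2 + (theta2 + phi2))

print(psi ** 2 + (theta - 1.0) ** 2)                          # stays O(1): pure cycling
print(psi2 ** 2 + (theta2 - 1.0) ** 2 + (phi2 + 1.0) ** 2)    # ~ 0: converged
```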
The bilinear system is of wide interest among researchers studying the stability of GAN training (Goodfellow, 2017; Liang & Stokes, 2019; Gidel et al., 2019; Gemp & Mahadevan, 2018; Zhang & Yu, 2020). The general bilinear function can be written as\n$F(\psi,\theta) = \theta^\top A\psi - b^\top\theta - c^\top\psi,$ (14)\nwhere ψ, θ are both r-dimensional vectors, and the objective is $\min_\theta \max_\psi F(\psi,\theta)$, which can be seen as a basic form of various GAN objectives. Unfortunately, if we directly use simultaneous (resp. alternate) SGD to optimize such objectives, we obtain divergence (resp. fluctuation). To solve this issue, some recent papers propose several optimization algorithms, like extrapolation from the past (Gidel et al., 2019), crossing the curl (Gemp & Mahadevan, 2018) and consensus optimization (Liang & Stokes, 2019). Also, (Liang & Stokes, 2019) shows that it is the interaction term, which generates non-zero values for $\nabla_{\theta\psi}F$ and $\nabla_{\psi\theta}F$, that leads to this instability of training. Different from previous works that focus on the algorithmic perspective, we propose to add new affiliated variables which modify the objective function and allow the SGD algorithm to achieve convergence without changing the optimum points.\nBased on the minimax objective (14), we add an affiliated r-dimensional variable φ (corresponding to the estimator in our model) to the original system and tackle the following problem:\n$\min_\theta \max_\psi \min_\phi\; F(\psi,\theta) + \alpha H(\phi,\theta),$ (15)\nwhere $H(\phi,\theta) = \tfrac{1}{2}(\theta+\phi)^\top B(\theta+\phi)$, $B = (AA^\top)^{1/2}$ and α is a non-negative constant. Theoretically, the new problem keeps the optimum points of (14) unchanged. Let $L(\psi,\phi,\theta) = F(\psi,\theta) + \alpha H(\phi,\theta)$.\nProposition 2. Assume the optimum point of $\min_\theta \max_\psi F(\psi,\theta)$ is $[\psi^*,\theta^*]$; then the optimum points of (15) are $[\psi^*,\theta^*,\phi^*]$ where $\phi^* = -\theta^*$.\nProof. The assumption tells us that $\nabla_\theta F(\psi^*,\theta) = 0$ and $\nabla_\psi F(\psi,\theta^*) = 0$. We then derive the gradients of $L(\psi,\phi,\theta)$:\n$\nabla_\psi L(\psi,\phi,\theta^*) = \nabla_\psi F(\psi,\theta^*) = 0,$ (16)\n$\nabla_\theta L(\psi^*,\phi,\theta) = \nabla_\theta F(\psi^*,\theta) + \alpha\nabla_\theta H(\phi,\theta) = \tfrac{\alpha}{2}(B+B^\top)(\theta+\phi),$ (17)\n$\nabla_\phi L(\psi,\phi,\theta) = \alpha\nabla_\phi H(\phi,\theta) = \tfrac{\alpha}{2}(B+B^\top)(\phi+\theta).$ (18)\nSetting (17) and (18) to zero, we get $\phi^* = -\theta^*$. Hence, the optimum point of (15) is $[\psi^*,\theta^*,\phi^*]$ where $\phi^* = -\theta^*$.\nThe advantage of the new problem is that it can be solved by the SGD algorithm with a theoretical convergence guarantee. We formalize this in the following theorem.\nTheorem 3. For the problem $\min_\theta \max_\psi \min_\phi L(\psi,\phi,\theta)$ with the alternate SGD algorithm, i.e.,\n$\psi_{t+1} = \psi_t + \eta\nabla_\psi L(\psi_t,\phi_t,\theta_t),$\n$\phi_{t+1} = \phi_t - \eta\nabla_\phi L(\psi_{t+1},\phi_t,\theta_t),$\n$\theta_{t+1} = \theta_t - \eta\nabla_\theta L(\psi_{t+1},\phi_{t+1},\theta_t),$ (19)\nwe achieve convergence to $[\psi^*,\theta^*,\phi^*]$, where $\phi^* = -\theta^*$, with at least a linear rate of $(1-\eta_1+\eta_2^2)(1+\eta_2-\eta_1^2)$, where $\eta_1 = \eta\sigma_{max}$, $\eta_2 = \eta\sigma_{min}$, and $\sigma_{max}$ (resp. $\sigma_{min}$) denotes the maximum (resp. minimum) singular value of the matrix A.\nTo prove Theorem 3, we prove a more general statement.\nLemma 1. Consider any first-order optimization method on (15), i.e.,\n$\psi_{t+1} \in \psi_0 + \mathrm{span}(\nabla_\psi L(\psi_0,\phi_0,\theta_0), \cdots, \nabla_\psi L(\psi_t,\phi_t,\theta_t)),\ \forall t \in \mathbb{N},$\n$\phi_{t+1} \in \phi_0 + \mathrm{span}(\nabla_\phi L(\psi_0,\phi_0,\theta_0), \cdots, \nabla_\phi L(\psi_t,\phi_t,\theta_t)),\ \forall t \in \mathbb{N},$\n$\theta_{t+1} \in \theta_0 + \mathrm{span}(\nabla_\theta L(\psi_0,\phi_0,\theta_0), \cdots, \nabla_\theta L(\psi_t,\phi_t,\theta_t)),\ \forall t \in \mathbb{N}.$\nThen, letting\n$\tilde\psi_t = V^\top(\psi_t - \psi^*),\quad \tilde\phi_t = U^\top(\phi_t - \phi^*),\quad \tilde\theta_t = U^\top(\theta_t - \theta^*),$\nwhere U and V are the singular-vector matrices of A given by the SVD $A = UDV^\top$, each triple $([\tilde\psi_t]_i, [\tilde\phi_t]_i, [\tilde\theta_t]_i)_{1\le i\le r}$ follows the update rule of the same optimization method applied, with step size $\sigma_i\eta$ instead of η, to the unidimensional problem\n$\min_\theta \max_\psi \min_\phi\; \theta\psi + \theta\phi + \tfrac{1}{2}\theta^2 + \tfrac{1}{2}\phi^2,$ (20)\nwhere $\sigma_i$ denotes the i-th singular value on the diagonal of D.\nProof.
The proof is extended from the proof of Lemma 3 in (Gidel et al., 2019). The general class of first-order optimization methods yields the following updates:\n$\psi_{t+1} = \psi_0 + \sum_{s=0}^{t} \rho_{st}(A^\top\theta_s - c) = \psi_0 + \sum_{s=0}^{t} \rho_{st}A^\top(\theta_s - \theta^*),$\n$\phi_{t+1} = \phi_0 + \tfrac{1}{2}\sum_{s=0}^{t} \delta_{st}(B+B^\top)(\theta_s + \phi_s),$\n$\theta_{t+1} = \theta_0 + \sum_{s=0}^{t} \mu_{st}\big[A(\psi_s - \psi^*) + \tfrac{1}{2}(B+B^\top)(\theta_s + \phi_s)\big],$\nwhere $\rho_{st}, \delta_{st}, \mu_{st} \in \mathbb{R}$ depend on the specific optimization method (for example, in SGD, $\rho_{tt} = \delta_{tt} = \mu_{tt}$ remain a non-zero constant for all t and the other coefficients are zero). Using the SVD $A = UDV^\top$ and the facts $\theta^* = -\phi^*$ and $B = (AA^\top)^{1/2} = U(DD^\top)^{1/2}U^\top$, we have\n$V^\top(\psi_{t+1} - \psi^*) = V^\top(\psi_0 - \psi^*) + \sum_{s=0}^{t} \rho_{st}\,D^\top U^\top(\theta_s - \theta^*),$\n$U^\top(\phi_{t+1} - \phi^*) = U^\top(\phi_0 - \phi^*) + \sum_{s=0}^{t} \delta_{st}\big[D\,U^\top(\theta_s - \theta^*) + D\,U^\top(\phi_s - \phi^*)\big],$\n$U^\top(\theta_{t+1} - \theta^*) = U^\top(\theta_0 - \theta^*) + \sum_{s=0}^{t} \mu_{st}\big[D\,V^\top(\psi_s - \psi^*) + D\,U^\top(\theta_s - \theta^*) + D\,U^\top(\phi_s - \phi^*)\big],$\nand equivalently,\n$\tilde\psi_{t+1} = \tilde\psi_0 + \sum_{s=0}^{t} \rho_{st}\,D^\top\tilde\theta_s,\qquad \tilde\phi_{t+1} = \tilde\phi_0 + \sum_{s=0}^{t} \delta_{st}\,D(\tilde\theta_s + \tilde\phi_s),$\n$\tilde\theta_{t+1} = \tilde\theta_0 + \sum_{s=0}^{t} \mu_{st}\,D(\tilde\psi_s + \tilde\theta_s + \tilde\phi_s).$\nNote that D is a rectangular matrix whose non-zero elements lie on a diagonal block of size r. Hence, the above r-dimensional problem reduces to r unidimensional problems:\n$[\tilde\psi_{t+1}]_i = [\tilde\psi_0]_i + \sum_{s=0}^{t} \rho_{st}\sigma_i[\tilde\theta_s]_i,\qquad [\tilde\phi_{t+1}]_i = [\tilde\phi_0]_i + \sum_{s=0}^{t} \delta_{st}\sigma_i([\tilde\theta_s]_i + [\tilde\phi_s]_i),$\n$[\tilde\theta_{t+1}]_i = [\tilde\theta_0]_i + \sum_{s=0}^{t} \mu_{st}\sigma_i([\tilde\psi_s]_i + [\tilde\theta_s]_i + [\tilde\phi_s]_i).$\nThese iterations can be carried out independently in each dimension, where the optimization in the i-th dimension follows the same update rule, with step size $\sigma_i\eta$, as the problem in (20).\nFurthermore, since problem (20) achieves convergence with a linear rate of $(1-\eta+\eta^2)(1+\eta-\eta^2)$ under alternate SGD (the proof is similar to that for (13)), the multi-dimensional problem (15) achieves convergence by SGD with at least a rate of $(1-\eta_1+\eta_2^2)(1+\eta_2-\eta_1^2)$, where $\eta_1 = \eta\sigma_{max}$, $\eta_2 = \eta\sigma_{min}$, and $\sigma_{max}$ (resp. $\sigma_{min}$) denotes the maximum (resp. minimum) singular value of the matrix A. This concludes the proof of Theorem 3.\nTheorem 3 suggests that the added term $H(\phi,\theta)$ with the affiliated variable φ helps the SGD algorithm achieve convergence to the same optimum points as directly optimizing $F(\psi,\theta)$. Our method is related to the consensus optimization algorithm (Liang & Stokes, 2019), which adds a regularization term $\|\nabla_\theta F(\psi,\theta)\| + \|\nabla_\psi F(\psi,\theta)\|$ to (14), resulting in extra quadratic terms for θ and ψ. The disadvantage of that method is the requirement of the Hessian matrix of $F(\psi,\theta)$, which is computationally expensive for high-dimensional data. By contrast, our solution only requires first-order derivatives." }, { "heading": "E DETAILS FOR IMPLEMENTATIONS", "text": "" }, { "heading": "E.1 SYNTHETIC DATASETS", "text": "We provide the details of the two synthetic datasets. The Two-Circle dataset consists of 24 Gaussian mixture components, where 8 of them are located on an inner circle with radius $r_1 = 4$ and 16 of them lie on an outer circle with radius $r_2 = 8$. For each Gaussian component, the covariance matrix is $\sigma_1 I$ with $\sigma_1 = 0.2$, and the mean is $[r_1\cos t, r_1\sin t]$, where $t = 2\pi k/8$, $k = 1,\cdots,8$, for the inner circle, and $[r_2\cos t, r_2\sin t]$, where $t = 2\pi k/16$, $k = 1,\cdots,16$, for the outer circle. We sample $N_1 = 2000$ points as true observed samples for model training.\nThe Two-Spiral dataset contains 100 Gaussian mixture components whose centers lie on two spiral-shaped curves. For each Gaussian component, the covariance matrix is $\sigma_2 I$ with $\sigma_2 = 0.5$, and the mean is $[-c_1\cos c_1, c_1\sin c_1]$, where $c_1 = 2\pi/3 + \mathrm{linspace}(0, 0.5, 50)\cdot 2\pi$, for one spiral, and $[c_2\cos c_2, -c_2\sin c_2]$, where $c_2 = 2\pi/3 + \mathrm{linspace}(0, 0.5, 50)\cdot 2\pi$, for the other spiral; a sampling sketch for both datasets is given below.
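For concreteness, a minimal NumPy sketch of sampling from both mixtures (our own illustration; the per-component mixture weights are assumed uniform, which the text does not state explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_circle(n=2000, r1=4.0, r2=8.0, sigma=0.2):
    inner = [[r1 * np.cos(2 * np.pi * k / 8), r1 * np.sin(2 * np.pi * k / 8)] for k in range(1, 9)]
    outer = [[r2 * np.cos(2 * np.pi * k / 16), r2 * np.sin(2 * np.pi * k / 16)] for k in range(1, 17)]
    means = np.array(inner + outer)            # 24 component centers
    idx = rng.integers(0, len(means), size=n)  # uniform over components (assumed)
    return means[idx] + np.sqrt(sigma) * rng.standard_normal((n, 2))

def two_spiral(n=5000, sigma=0.5):
    c = 2 * np.pi / 3 + np.linspace(0.0, 0.5, 50) * 2 * np.pi
    means = np.concatenate([np.stack([-c * np.cos(c), c * np.sin(c)], axis=1),
                            np.stack([c * np.cos(c), -c * np.sin(c)], axis=1)])  # 100 centers
    idx = rng.integers(0, len(means), size=n)
    return means[idx] + np.sqrt(sigma) * rng.standard_normal((n, 2))
```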
We sample $N_2 = 5000$ points as true observed samples." }, { "heading": "E.2 MODEL SPECIFICATIONS AND TRAINING ALGORITHM", "text": "For different tasks we consider different model specifications, in order to meet the demands of capacity as well as to test effectiveness under various settings. Our proposed framework (3) adopts the Wasserstein distance for the first term and two Stein discrepancies for the second and third terms. We can write (3) in the more general form\n$\min_{\theta,\phi}\; D_1(\mathbb{P}_{real},\mathbb{P}_G) + \lambda_1 D_2(\mathbb{P}_{real},\mathbb{P}_E) + \lambda_2 D_3(\mathbb{P}_G,\mathbb{P}_E),$ (21)\nwhere $D_1, D_2, D_3$ denote three general discrepancy measures between distributions. As stated in our remark, $D_1$ can be specified as an arbitrary discrepancy measure for implicit generative models. Here we also use the JS divergence, the objective of the vanilla GAN. To distinguish them, we call the model using the Wasserstein distance (resp. JS divergence) Joint-W (resp. Joint-JS) in our experiments. On the other hand, the two Stein discrepancies in (3) can be specified by KSD (as defined by $S_K$ in (5)) or by the general Stein discrepancy with an extra critic (as defined by S in (1)). Hence, the two specifications for $D_1$ and the two for $D_2$ ($D_3$) yield four different combinations in total, and we organize the objectives for each case in Table 3.\nIn our experiments, we use KSD with RBF kernels for $D_2$ and $D_3$ in Joint-W and Joint-JS on the two synthetic datasets. For MNIST with conditional training (where the digit class is given as model input), we also use KSD with RBF kernels. For MNIST and CIFAR with unconditional training (where the class is not given), we find that KSD cannot provide desirable results, so we adopt the general Stein discrepancy for higher model capacity.\nThe objectives in Table 3 appear to be computationally expensive. In the worst case (using the general Stein discrepancy), there are two minimax operations: one from GAN or WGAN and one from Stein discrepancy estimation. To guarantee training efficiency, we alternately update the generator, estimator, Wasserstein critic and Stein critic over the parameters θ, φ, ψ and π, respectively. Specifically, in one iteration we optimize the generator over θ and the estimator over φ with one step each, and then optimize the Wasserstein critic over ψ with $n_d$ steps and the Stein critic over π with $n_c$ steps. This training approach keeps the time-complexity order of the proposed method the same as that of GAN or WGAN, and the training time of our model is bounded by a constant multiple of the time for training a GAN. In our experiments, we set $n_d = n_c = 5$ and empirically find that Stein Bridging is about two times slower than WGAN on average. We present the training algorithm for Stein Bridging in Algorithm 1.\nE.3 IMPLEMENTATION DETAILS\nWe give the network architectures and hyper-parameter settings for our model as well as for each competitor in our experiments.\nThe energy function is often parametrized as a sum of multiple experts (Hinton, 1999), and each expert can take various functional forms depending on the distribution. Using sigmoid experts, the energy function becomes (see Section 2.1 of (Kim & Bengio, 2017) for details)\n$E_\phi(x) = \sum_i \log(1 + e^{-(W_i n(x) + b_i)}),$ (22)\nwhere $n(x)$ maps the input x to a feature vector and can be specified as a deep neural network, which corresponds to the deep energy model (Ngiam et al., 2011).\nWhen not using KSD, the implementation of the Stein critic f and the operation Γ in (1) has remained an open problem.
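Before discussing the critic further, a minimal sketch making the sum-of-experts parameterization (22) concrete (our own illustration in PyTorch; the paper's experiments used TensorFlow, but the structure is framework-agnostic):

```python
import torch.nn as nn
import torch.nn.functional as F

class DeepEnergyModel(nn.Module):
    """Sum-of-experts energy as in (22): E(x) = sum_i log(1 + exp(-(W_i n(x) + b_i)))."""
    def __init__(self, in_dim=2, hidden=128, n_experts=4):
        super().__init__()
        self.features = nn.Sequential(                 # the feature network n(x)
            nn.Linear(in_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU())
        self.experts = nn.Linear(hidden, n_experts)    # produces W_i n(x) + b_i

    def forward(self, x):
        logits = self.experts(self.features(x))
        return F.softplus(-logits).sum(dim=-1)         # log(1 + e^{-t}) = softplus(-t)
```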
Some existing studies like (Hu et al., 2018) set $d' = 1$, in which case f reduces to a scalar function mapping the d-dimensional input to a one-dimensional value. This setting reduces the computational cost, since a large $d'$ leads to heavy computation during training. Empirically, in our experiments on image datasets, we find that setting $d' = 1$ provides performance similar to $d' = 10$ or $d' = 100$. Hence, we set $d' = 1$ in our experiments for efficiency. Besides, to further reduce the computational cost, we let the two Stein critics share parameters, which empirically provides better performance than two separate Stein critics.\nAnother tricky point is how to design a proper Γ when $d' \neq d$, where the trace operation is not applicable. One simple way is to set Γ as some matrix norm. However, the issue is that using a matrix norm makes SGD learning hard: Γ and the expectation in (1) cannot exchange order, in which case there is no unbiased mini-batch estimate of the gradient. Here, we specify Γ as max-pooling over the dimensions of $\mathcal{A}_{p_\phi}[f_\pi(x)]$, i.e., the gradient back-propagates through the dimension with the largest absolute value at each step. Theoretically, this setting guarantees that the value in each dimension is driven to zero through training, and we find it works well in practice.\nAlgorithm 1: Training Algorithm for Stein Bridging\n1: REQUIRE: observed training samples $\{x\} \sim \mathbb{P}_{real}$.\n2: REQUIRE: $\theta_0, \phi_0, \psi_0, \pi_0$, initial parameters for the generator, estimator, Wasserstein critic and Stein critic models, respectively; $\alpha_E = 0.0002, \beta^E_1 = 0.9, \beta^E_2 = 0.999$, Adam hyper-parameters for the explicit models; $\alpha_I = 0.0002, \beta^I_1 = 0.5, \beta^I_2 = 0.999$, Adam hyper-parameters for the implicit models; $\lambda_1 = 1$ and $\lambda_2$, weights for $D_2$ and $D_3$ (we suggest increasing $\lambda_2$ from 0 to 1 through training); $n_d = 5, n_c = 5$, numbers of iterations for the Wasserstein critic and Stein critic, respectively, before one iteration for the generator and estimator; $B = 100$, batch size.\n3: while not converged do\n4:   for $n = 1, \cdots, n_d$ do\n5:     Sample B true samples $\{x_i\}_{i=1}^B$ from $\{x\}$\n6:     Sample B random noises $\{z_i\}_{i=1}^B \sim \mathbb{P}_0$ and obtain generated samples $\tilde{x}_i = G_\theta(z_i)$\n7:     $L_{dis} = \frac{1}{B}\sum_{i=1}^B d_\psi(x_i) - d_\psi(\tilde{x}_i) - \lambda(\|\nabla_{\hat{x}_i} d_\psi(\hat{x}_i)\| - 1)^2$ // the last term is the WGAN-GP gradient penalty, where $\hat{x}_i = \epsilon_i x_i + (1-\epsilon_i)\tilde{x}_i$, $\epsilon_i \sim U(0,1)$\n8:     $\psi_{k+1} \leftarrow \mathrm{Adam}(-L_{dis}, \psi_k, \alpha_I, \beta^I_1, \beta^I_2)$ // update the Wasserstein critic\n9:   for $n = 1, \cdots, n_c$ do\n10:     Sample B true samples $\{x_i\}_{i=1}^B$ from $\{x\}$\n11:     Sample B random noises $\{z_i\}_{i=1}^B \sim \mathbb{P}_0$ and obtain generated samples $\tilde{x}_i = G_\theta(z_i)$\n12:     $L_{critic} = \frac{1}{B}\sum_{i=1}^B \lambda_1 \mathcal{A}_{p_\phi}[f_\pi(x_i)] + \lambda_2 \mathcal{A}_{p_\phi}[f_\pi(\tilde{x}_i)]$\n13:     $\pi_{k+1} \leftarrow \mathrm{Adam}(-L_{critic}, \pi_k, \alpha_E, \beta^E_1, \beta^E_2)$ // update the Stein critic\n14:   Sample B random noises $\{z_i\}_{i=1}^B \sim \mathbb{P}_0$ and obtain generated samples $\tilde{x}_i = G_\theta(z_i)$\n15:   $L_{est} = \frac{1}{B}\sum_{i=1}^B \lambda_1 \mathcal{A}_{p_\phi}[f_\pi(x_i)] + \lambda_2 \mathcal{A}_{p_\phi}[f_\pi(\tilde{x}_i)]$\n16:   $\phi_{k+1} \leftarrow \mathrm{Adam}(L_{est}, \phi_k, \alpha_E, \beta^E_1, \beta^E_2)$ // update the density estimator\n17:   $L_{gen} = \frac{1}{B}\sum_{i=1}^B -d_\psi(\tilde{x}_i) + \lambda_2 \mathcal{A}_{p_\phi}[f_\pi(\tilde{x}_i)]$\n18:   $\theta_{k+1} \leftarrow \mathrm{Adam}(L_{gen}, \theta_k, \alpha_I, \beta^I_1, \beta^I_2)$ // update the sample generator\n19: OUTPUT: trained sample generator $G_\theta(z)$ and density estimator $p_\phi(x)$\nFor the synthetic datasets, we set the noise dimension to 4. All generators are specified as three-layer fully-connected (FC) neural networks with layer sizes 4−128−128−2, and all Wasserstein critics (or the discriminators in JS-divergence-based GAN) are likewise three-layer FC networks with layer sizes 2−128−128−1; a sketch of these networks is given below.
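For reference, these fully-connected architectures look as follows (a sketch, picking LeakyReLU from the searched activation options listed below):

```python
import torch.nn as nn

# Generator: 4-dim noise -> 2-dim sample, layer sizes 4-128-128-2.
generator = nn.Sequential(
    nn.Linear(4, 128), nn.LeakyReLU(),
    nn.Linear(128, 128), nn.LeakyReLU(),
    nn.Linear(128, 2))

# Wasserstein critic / discriminator: 2-dim sample -> scalar, sizes 2-128-128-1.
critic = nn.Sequential(
    nn.Linear(2, 128), nn.LeakyReLU(),
    nn.Linear(128, 128), nn.LeakyReLU(),
    nn.Linear(128, 1))
```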
For the estimators, we set the number of experts to 4, and the feature function n(x) is an FC network with layer sizes 2−128−128−4; in the last layer we sum the outputs of the experts to obtain the energy value E(x). The activation units are searched within [LeakyReLU, tanh, sigmoid, softplus], the learning rate within [1e−6, 1e−5, 1e−4, 1e−3, 1e−2], and the batch size within [50, 100, 150, 200]. The gradient-penalty weight for WGAN is searched in [0, 0.1, 1, 10, 100].\nFor the MNIST dataset, we set the noise dimension to 100. All critics/discriminators are implemented as four-layer networks where the first two layers use convolutions with filter size 5 and stride [2, 2] and the last two layers are FC layers; the layer sizes are 1−64−128−256−1. All generators are implemented as four-layer networks where the first two layers are FC and the last two use deconvolutions with filter size 5 and stride [2, 2]; the layer sizes are 100−256−128−64−1. For the estimators, we use 128 experts, and the feature function is the same as the Wasserstein critic except that the size of the last layer is 128; we then sum the outputs of the experts to obtain the energy value. The activation units are searched within [ReLU, LeakyReLU, tanh], the learning rate within [2e−5, 2e−4, 2e−3, 2e−2], and the batch size within [32, 64, 100, 128]. The gradient-penalty weight for WGAN is searched in [1, 10, 100, 1000].\nFor the CIFAR dataset, we adopt the same architecture as DCGAN for the critics and generators. As for the estimator, the architecture of the feature function is the same as that of the critics except for the last layer, where we set the number of experts to 128 and sum the outputs to obtain the energy value. The architectures of the Stein critics are the same as those of the Wasserstein critics for both the MNIST and CIFAR datasets. In other words, we consider $d' = 1$ in (1) and further simplify Γ as an average over the dimensions of $\mathbb{E}_{x\sim\mathbb{P}}[\mathcal{A}_{\mathbb{Q}}f(x)]$. Empirically we found this setting provides efficient computation and decent performance." }, { "heading": "E.4 EVALUATION METRICS", "text": "We adopt several quantitative metrics to evaluate the performance of each method on different tasks. In section 4.1, we use two metrics to test sample quality: Maximum Mean Discrepancy (MMD) and the High-quality Sample Rate (HSR). MMD measures the discrepancy between two distributions X and Y: $\mathrm{MMD}(X,Y) = \|\frac{1}{n}\sum_{i=1}^n \Phi(x_i) - \frac{1}{m}\sum_{j=1}^m \Phi(y_j)\|$, where $x_i$ and $y_j$ denote samples from X and Y respectively and Φ maps each sample into an RKHS. Here we use an RBF kernel and calculate the MMD between generated samples and true samples (a sketch of this estimator is given below). HSR measures the fraction of high-quality samples among all generated samples. For the Two-Circle dataset, we define generated points whose distance from the nearest Gaussian component is less than $\sigma_1$ as high-quality samples; we generate 2000 points in total and compute the HSR. For the Two-Spiral dataset, we set the distance threshold to $5\sigma_2$ and generate 5000 points to calculate the HSR. For CIFAR, we use the Inception V3 network in TensorFlow as the pre-trained classifier to calculate the Inception score.\nIn section 4.2, we use three metrics to characterize the density-estimation performance: KL divergence, JS divergence and AUC. We divide the map into a 300 × 300 meshgrid, calculate the unnormalized density value of each point given by the estimators, and compute the KL and JS divergences between the estimated density and the ground-truth density.
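As referenced above, a minimal NumPy sketch of the (biased, V-statistic) squared-MMD estimator with an RBF kernel (the bandwidth is a placeholder of ours; the text does not specify it):

```python
import numpy as np

def mmd2_rbf(x, y, bandwidth=1.0):
    """Squared MMD between sample sets x (n, d) and y (m, d) with an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```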
Besides, we select the centers of the Gaussian components as positive examples (expected to have high densities), randomly sample 10 points within a circle around each center as negative examples (expected to have relatively low densities), and rank them according to the densities given by the model. We then compute the area under the curve (AUC) of the false-positive rate vs. true-positive rate." } ]
2,020
null
SP:057cf13c9fd038dc102253838b888580acc6e2b6
[ "This paper considers the problem of counterfactual regret minimization and proposes an algorithm that does not use the importance sampling procedure. The claim is that this helps in reducing the variance usually introduced by the IS procedure. They propose a new algorithm that uses the previously used policies as a buffer and replays those policies to learn a new policy. The algorithm is also claimed to be highly scalable for games with large state-action pairs." ]
Regret minimization has played a key role in online learning, equilibrium computation in games, and reinforcement learning (RL). In this paper, we describe a general model-free RL method for no-regret learning based on repeated reconsideration of past behavior: Advantage Regret-Matching Actor-Critic (ARMAC). Rather than saving past state-action data, ARMAC saves a buffer of past policies, replaying through them to reconstruct hindsight assessments of past behavior. These retrospective value estimates are used to predict conditional advantages which, combined with regret matching, produces a new policy. In particular, ARMAC learns from sampled trajectories in a centralized training setting, without requiring the application of importance sampling commonly used in Monte Carlo counterfactual regret (CFR) minimization; hence, it does not suffer from excessive variance in large environments. In the single-agent setting, ARMAC shows an interesting form of exploration by keeping past policies intact. In the multiagent setting, ARMAC in self-play approaches Nash equilibria on some partially-observable zero-sum benchmarks. We provide exploitability estimates in the significantly larger game of betting-abstracted no-limit Texas Hold’em.
[]
[ { "authors": [ "Yasin Abbasi-Yadkori", "Peter Bartlett", "Kush Bhatia", "Nevena Lazic", "Csaba Szepesvari", "Gellert Weisz" ], "title": "Politex: Regret bounds for policy iteration using expert prediction", "venue": "In Proceedings of the 36th International Conference on Machine Learning, volume 97 of PMLR,", "year": 2019 }, { "authors": [ "Adriá Puigdoménech Badia", "Pablo Sprechmann" ], "title": "Never give up: Learning directed exploration strategies", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "A. Blum", "Y. Mansour" ], "title": "Learning, regret minimization, and equilibria", "venue": "Algorithmic Game Theory, chapter 4. Carnegie Mellon University,", "year": 2007 }, { "authors": [ "Michael Bowling", "Neil Burch", "Michael Johanson", "Oskari Tammelin" ], "title": "Heads-up Limit Hold’em", "venue": "Poker is solved. Science,", "year": 2015 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Superhuman AI for heads-up no-limit poker", "venue": "Libratus beats top professionals. Science,", "year": 2017 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Solving imperfect-information games via discounted regret minimization", "venue": "CoRR, abs/1809.04040,", "year": 2021 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Superhuman AI for multiplayer", "venue": "poker. Science,", "year": 2019 }, { "authors": [ "Neil Burch", "Michael Johanson", "Michael Bowling" ], "title": "Solving imperfect information games using decomposition", "venue": "In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI),", "year": 2014 }, { "authors": [ "Andrea Celli", "Alberto Marchesi", "Tommaso Bianchi", "Nicola Gatti" ], "title": "Learning to correlate in multi-player general-sum sequential games, 2019", "venue": null, "year": 2019 }, { "authors": [ "Andrea Celli", "Alberto Marchesi", "Gabriele Farina", "Nicola Gatti" ], "title": "No-regret learning dynamics for extensive-form correlated and coarse correlated equilibria, 2020", "venue": null, "year": 2020 }, { "authors": [ "Gabriele Farina", "Tommaso Bianchi", "Tuomas Sandholm" ], "title": "Coarse correlation in extensive-form games, 2019", "venue": null, "year": 2019 }, { "authors": [ "Richard Gibson", "Neil Burch", "Marc Lanctot", "Duane Szafron" ], "title": "Efficient monte carlo counterfactual regret minimization in games with many player actions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "S. Hart", "A. Mas-Colell" ], "title": "A simple adaptive procedure leading to correlated equilibrium", "venue": "Econometrica, 68(5):1127–1150,", "year": 2000 }, { "authors": [ "Johannes Heinrich", "Marc Lanctot", "David Silver" ], "title": "Fictitious self-play in extensiveform games", "venue": "In Proceedings of the 32nd International Conference on Machine Learning", "year": 2015 }, { "authors": [ "Johannes Heinrich", "David Silver" ], "title": "Deep reinforcement learning from self-play in imperfect-information", "venue": "games. CoRR,", "year": 2016 }, { "authors": [ "Peter H. 
Jin", "Sergey Levine", "Kurt Keutzer" ], "title": "Regret minimization for partially observable deep reinforcement learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Vojtech Kovarík", "Martin Schmid", "Neil Burch", "Michael Bowling", "Viliam Lisý" ], "title": "Rethinking formal models of partially observable multiagent decision making", "venue": null, "year": 1906 }, { "authors": [ "Marc Lanctot", "Edward Lockhart", "Jean-Baptiste Lespiau", "Vinicius Zambaldi", "Satyaki Upadhyay", "Julien Pérolat", "Sriram Srinivasan", "Finbarr Timbers" ], "title": "OpenSpiel: A framework for reinforcement learning in games", "venue": "CoRR, abs/1908.09453,", "year": 2019 }, { "authors": [ "Marc Lanctot", "Kevin Waugh", "Martin Zinkevich", "Michael Bowling" ], "title": "Monte Carlo sampling for regret minimization in extensive games", "venue": "Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "Marc Lanctot", "Vinicius Zambaldi", "Audrunas Gruslys", "Angeliki Lazaridou", "Karl Tuyls", "Julien Perolat", "David Silver", "Thore Graepel" ], "title": "A unified game-theoretic approach to multiagent reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Viliam Lisý", "Michael H. Bowling" ], "title": "Eqilibrium approximation quality of current no-limit poker", "venue": "bots. CoRR,", "year": 2016 }, { "authors": [ "Edward Lockhart", "Marc Lanctot", "Julien Pérolat", "Jean-Baptiste Lespiau", "Dustin Morrill", "Finbarr Timbers", "Karl Tuyls" ], "title": "Computing approximate equilibria in sequential adversarial games by exploitability descent", "venue": null, "year": 1903 }, { "authors": [ "Matej Moravčík", "Martin Schmid" ], "title": "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker", "venue": "Science, 358(6362),", "year": 2021 }, { "authors": [ "D. Precup", "R.S. Sutton", "S. Singh" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "ICML,", "year": 2000 }, { "authors": [ "Martin Schmid", "Neil Burch", "Marc Lanctot", "Matej Moravcik", "Rudolf Kadlec", "Michael Bowling" ], "title": "Variance reduction in monte carlo counterfactual regret minimization (VR- MCCFR) for extensive form games using baselines", "venue": "In Proceedings of the The Thirty- Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Wenling Shang", "Kihyuk Sohn", "Diogo Almeida", "Honglak Lee" ], "title": "Understanding and improving convolutional neural networks via concatenated rectified linear units", "venue": null, "year": 2016 }, { "authors": [ "Y. Shoham", "K. Leyton-Brown" ], "title": "Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations", "venue": "Cambridge University Press,", "year": 2009 }, { "authors": [ "Sriram Srinivasan", "Marc Lanctot", "Vinicius Zambaldi", "Julien Pérolat", "Karl Tuyls", "Rémi Munos", "Michael Bowling" ], "title": "Actor-critic policy optimization in partially observable multiagent environments", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Eric Steinberger", "Adam Lerer", "Noam Brown" ], "title": "Dream: Deep regret minimization with advantage baselines and model-free learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "R. Sutton", "A. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "MIT Press, 2nd edition,", "year": 2018 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M. 
Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H. Choi", "Richard Powell" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Kevin Waugh", "Dustin Morrill", "J. Andrew Bagnell", "Michael Bowling" ], "title": "Solving games with functional regret estimation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "M. Zinkevich", "M. Johanson", "M. Bowling", "C. Piccione" ], "title": "Regret minimization in games with incomplete information", "venue": "Advances in Neural Information Processing Systems 20,", "year": 2008 } ]
[ { "heading": "1 Introduction", "text": "The notion of regret is a key concept in the design of many decision-making algorithms. Regret minimization drives most bandit algorithms, is often used as a metric for performance of reinforcement learning (RL) algorithms, and for learning in games (3). When used in algorithm design, the common application is to accumulate values and/or regrets and derive new policies based on these accumulated values. One particular approach, counterfactual regret (CFR) minimization (35), has been the core algorithm behind super-human play in Computer Poker research (4; 25; 6; 8). CFR computes an approximate Nash equilibrium by having players minimize regret in self-play, producing an average strategy that is guaranteed to converge to an optimal solution in two-player zero-sum games and single-agent games.\nWe investigate the problem of generalizing these regret minimization algorithms over large state spaces in the sequential setting using end-to-end function approximators, such as deep networks. There have been several approaches that try to predict the regret, or otherwise, simulate the regret minimization: Regression CFR (RCFR) (34), advantage regret minimization (17), regret-based policy gradients (30), Deep Counterfactual Regret minimization (5), and Double Neural CFR (22). All of these approaches have focused either on the multiagent or single-agent problem exclusively, some have used expert features, while others tree search to scale. Another common approach is based on fictitious play (15; 16; 21; 24), a simple iterative self-play algorithm based on best response. A common technique is to use reservoir sampling to maintain a buffer that represents a uniform sample over past data, which is used to train a classifier representing the average policy. In Neural Fictitious Self-Play (NFSP), this produced competitive policies in limit Texas Hold’em (16), and in Deep CFR this method was shown to approach an approximate equilibrium in a large subgame of Hold’em poker. A generalization of fictitious play, policy-space response oracles (PSRO) (21), stores past policies and a meta-distribution over them, replaying policies against other policies, incrementally adding new best responses to the set, which can be\nseen as a population-based learning approach where the individuals are the policies and the distribution is modified based on fitness. This approach only requires simulation of the policies and aggregating data; as a result, it was able to scale to a very large real-time strategy game (33). In this paper, we describe an approximate form of CFR in a training regime that we call retrospective policy improvement. Similar to PSRO, our method stores past policies. However, it does not store meta-distributions or reward tables, nor do the policies have to be approximate best responses, which can be costly to compute or learn. Instead, the policies are snapshots of those used in the past, which are retrospectively replayed to predict a conditional advantage, which used in a regret matching algorithm produces the same policy as CFR would do. In the single-agent setting, ARMAC is related to Politex (1), except that it is based on regret-matching (14) and it predicts average quantities rather than explicitly summing over all the experts to obtain the policy. 
In the multiagent setting, it is a sample-based, model-free variant of RCFR with one important property: it uses trajectory samples to estimate quantities without requiring the importance sampling used in standard Monte Carlo CFR (20); hence it does not suffer from excessive variance in large environments. This is achieved by using critics (value estimates) of past policies that are trained off-policy using standard policy-evaluation techniques. In particular, we introduce a novel training regime that estimates a conditional advantage $W_i(s,a)$, which is the cumulative counterfactual regret $R_i(s,a)$ scaled by a factor $B(s)$ that depends only on the information state s; hence, applying regret-matching to this quantity yields the same policy that CFR would compute by applying regret-matching to the (unscaled) regret values. By doing this entirely from sampled trajectories, the algorithm is model-free and can be run with any black-box simulator of the environment; hence, ARMAC inherits the scaling potential of PSRO without requiring a best-response training regime, being driven instead by regret minimization.\n\nProblem Statement. CFR is a tabular algorithm that enumerates the entire state space and has scaled to large games through domain-specific (hand-crafted) state-space reductions. The problem is to define a model-free variant of CFR using only sampled trajectories and general (domain-independent) generalization via function approximation, without the importance sampling commonly used in Monte Carlo CFR, as it can cause excessive variance in large domains." }, { "heading": "2 Background", "text": "In this section, we describe the necessary terminology. Since we want to include the (partially-observable) multiagent case and we build on algorithms from regret minimization, we use extensive-form game notation (29). A single-player game represents the single-agent case, where histories are aggregated appropriately based on the Markov property.\nA game is a tuple $(\mathcal{N}, \mathcal{A}, \mathcal{S}, \mathcal{H}, \mathcal{Z}, u, \tau)$, where $\mathcal{N} = \{1, 2, \cdots, n\}$ is the set of players. By convention we use $i \in \mathcal{N}$ to refer to a player, and $-i$ for the other players ($\mathcal{N} - \{i\}$). There is a special player c called chance (or nature) that plays with a fixed stochastic strategy (chance's fixed strategy determines the transition function). $\mathcal{A}$ is a finite set of actions. Every game starts in an initial state, and players sequentially take actions leading to histories of actions $h \in \mathcal{H}$. Terminal histories, $z \in \mathcal{Z} \subset \mathcal{H}$, are those which end the episode. The utility function $u_i(z)$ denotes player i's return over episode z. The set of states $\mathcal{S}$ is a partition of $\mathcal{H}$ in which histories are grouped into information states $s = \{h, h', \ldots\}$ such that the player to act at s, $\tau(s)$, cannot distinguish among the possible histories (world states) due to private information known only by the other players.^1 Let $\Delta(X)$ represent all distributions over X: each player's (agent's) goal is to learn a policy $\pi_i : \mathcal{S}_i \to \Delta(\mathcal{A})$, where $\mathcal{S}_i = \{s \mid s \in \mathcal{S}, \tau(s) = i\}$. For a state s, we denote by $\mathcal{A}(s) \subseteq \mathcal{A}$ the legal actions at s, and all valid state policies $\pi(s)$ assign probability 0 to illegal actions $a \notin \mathcal{A}(s)$. We now walk through a diagram to illustrate the key ideas. Kuhn poker, shown in Figure 1, is a poker game with a 3-card deck: Jack (J), Queen (Q), and King (K). Each player antes a single chip and has one more chip to bet with; each then receives a single private card at random, one card is left face down, and the players proceed to bet (b) or pass (p).\n^1 An information state is the belief about the world that a given player can infer from her limited observations; it may correspond to many possible histories (world states).\nInitially, the game
Initially, the game starts in the empty history $h_0 = \emptyset$, where no actions have been taken and it is chance's turn to play.
¹An information state is the belief about the world that a given player can infer based on her limited observations; it may correspond to many possible histories (world states).
Suppose chance samples, according to a fixed distribution, one of its six actions, each corresponding to one of the size-2 permutations of deals (one card to each player). For example, suppose outcome 1Q2J is sampled, corresponding to the first player getting the queen and the second player getting the jack. This would correspond to a new history $h = (1Q2J)$. Label the information state containing this history as $s$, depicted by the grey joined circles; the other history it contains is $h' = (1Q2K)$. At this information state $s = \{h, h'\}$, it is the first player's turn ($\tau(s) = 1$), and it includes every history consistent with their information (namely, that they were dealt the queen).
The legal actions are now $A(s) = \{p, b\}$. Suppose the first player chooses p and the second player chooses b; the resulting history is part of $s'$, the second information state shown in the figure. Finally, suppose the first player chooses to bet (call); then the first player wins, gaining 2 chips, since they hold the higher-ranking card. Each player $i$'s goal is to compute a $\pi_i$ that achieves maximal reward in expectation, where the expectation is taken over all players' policies, even though player $i$ controls only their own policy. Hence, ideally, the player would learn a safe policy that guarantees the best worst-case outcome.
Let $\pi$ denote a joint policy. Define the state value $v_{\pi,i}(s)$ as the expected (undiscounted) return for player $i$ given that state $s$ is reached and all players follow $\pi$. Let $q_{\pi,i}$ be defined similarly, except also conditioned on player $\tau(s)$ taking action $a$ at $s$. Formally, $v_{\pi,i}(s) = \sum_{(h,z) \in Z(s)} \eta^{\pi}(h|s)\, \eta^{\pi}(h,z)\, u_i(z)$, where $Z(s)$ is the set of all terminal histories paired with their prefixes that pass through $s$, $\eta^{\pi}(h|s) = \frac{\eta^{\pi}(h)}{\eta^{\pi}(s)}$ with $\eta^{\pi}(s) = \sum_{h' \in s} \eta^{\pi}(h')$, and $\eta^{\pi}(h,z)$ is the product of the probabilities of each action taken by the players' policies along $h$ to $z$. The state-action values $q_{\pi,i}(s,a)$ are defined analogously. Standard value-based RL algorithms estimate these quantities for policy evaluation. Regret minimization in zero-sum games uses a different notion of value, the counterfactual value: $v^c_{\pi,i}(s) = \sum_{(h,z) \in Z(s)} \eta^{\pi}_{-i}(h)\, \eta^{\pi}(h,z)\, u_i(z)$, where $\eta^{\pi}_{-i}(h)$ is the product of the opponents' policy probabilities along $h$. We also write $\eta^{\pi}_{i}(h)$ for the product of player $i$'s own probabilities along $h$. Under the standard assumption of perfect recall, we have that for any $h, h' \in s$, $\eta^{\pi}_{i}(h) = \eta^{\pi}_{i}(h')$. Thus counterfactual values are formally related to the standard values (30): $v_{\pi,i}(s) = \frac{v^c_{\pi,i}(s)}{\beta_{-i}(\pi,s)}$, where $\beta_{-i}(\pi,s) = \sum_{h \in s} \eta^{\pi}_{-i}(h)$. Also, $q^c_{\pi,i}(s,a)$ is defined similarly, except over histories $(ha, z) \in Z(s)$, where $ha$ is history $h$ concatenated with action $a$.
Counterfactual regret minimization (CFR) is a tabular policy iteration algorithm that has facilitated many advances in Poker AI (35). On each iteration $t$, CFR computes counterfactual values $q^c_{\pi,i}(s,a)$ and $v^c_{\pi,i}(s)$ for each state $s$ and action $a \in A(s)$, and the regret of not choosing action $a$ (equivalently, the advantage of choosing action $a$ at state $s$): $r^t(s,a) = q^c_{\pi^t,i}(s,a) - v^c_{\pi^t,i}(s)$. CFR tracks the cumulative regrets for each state and action, $R^T(s,a) = \sum_{t=1}^{T} r^t(s,a)$.
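To make this bookkeeping concrete, the following minimal sketch (our illustration, not the authors' code) computes a counterfactual value from an explicit enumeration and accumulates the cumulative regret. The enumeration format (each terminal given as a prefix of per-action reach contributions from the root to $h \in s$ and a suffix from $h$ to $z$) is an assumption made for illustration only:

def counterfactual_value(i, terminals):
    # v^c_{pi,i}(s) = sum over (h, z) of eta_{-i}(h) * eta(h, z) * u_i(z).
    # terminals: list of (prefix, suffix, u_i), where prefix and suffix are
    # lists of (player, action_prob) pairs; "player" may be the chance player.
    total = 0.0
    for prefix, suffix, u_i in terminals:
        eta_minus_i = 1.0  # opponents' (and chance's) reach probability of h
        for player, prob in prefix:
            if player != i:
                eta_minus_i *= prob
        eta_hz = 1.0       # all players' probability of reaching z from h
        for _, prob in suffix:
            eta_hz *= prob
        total += eta_minus_i * eta_hz * u_i
    return total

# CFR's tabular accumulation R^T(s, a) = sum_t [ q^c_t(s, a) - v^c_t(s) ]:
cumulative_regret = {}  # maps (s, a) -> R^T(s, a)

def accumulate_regret(s, qc, vc):
    # qc: dict action -> q^c(s, a) at iteration t; vc: v^c(s) at iteration t.
    for a, q in qc.items():
        cumulative_regret[(s, a)] = cumulative_regret.get((s, a), 0.0) + (q - vc)

These accumulated regrets are exactly the quantities consumed by the regret-matching update stated next.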
Define (x)+ = max(0, x); regret-matching then updates the policy of each action a ∈ A(s) as follows (14):\nπT+1(s, a) = NormalizedReLU(RT , s, a) =\n{ RT,+(s,a)∑\nb∈A(s) R T,+(s,b)\nif ∑ b∈A(s)R\nT,+(s, b) > 0 1\n|A(s)| otherwise ,\n(1) In two-player zero-sum games, the mixture policy π̄T converges to the set of Nash equilibria as T →∞. Traditional (off-policy) Monte Carlo CFR (MCCFR) is a generic family of sampling variants (20). In outcome sampling MCCFR, a behavior policy µi is used by player i, while players −i use π−i, a trajectory ρ ∼ (µi, π−i) is sampled, and the sampled counterfactual value is computed:\nq̃cπ,i(s, a | ρ) = 1\nη (µi,π−i) i (z)\nη (µi,π−i) i (ha, z)ui(z), (2)\nif (s, a) ∈ ρ, or 0 otherwise. q̃cπ,i(s, a | ρ) is an unbiased estim. of qcπ,i(s, a) (20, Lemma 1).\nHowever, since these quantities are divided by η(µ,π−i)i (z), the product of player i’s probabilities, (i) there can be significant variance introduced by sampling, especially in problems involving long sequences of decisions, and (ii) the ranges of the ṽci can vary wildly (and unboundedly if the exploration policy is insufficiently mixed) over iterations and states, which could make approximating the values in a general way particularly challenging (34). Deep CFR and Double Neural CFR are successful large-scale implementations of CFR with function approximation, and they get around this variance issue by using external sampling or a robust sampling technique, both of which require a perfect game model and enumeration of the tree. This is unfeasible in very large environments or in the RL setting where full trajectories are generated from beginning to the end without having access to a generative model which could be used to generate transitions from any state." }, { "heading": "2.1 Equilibria, Exploitability, and NashConv", "text": "In two-player zero-sum games (and, trivially, single-agent games) a Nash equilibrium policy is optimal because it maximizes a player’s worst-case payoff (29). Success in Poker AI, leading to super-human ability, has largely been driven by computing approximate equilibria and playing the strategies against humans.\nA Nash equilibrium is a joint policy π∗ = (π∗1 , π∗2) such that no player has incentive to deviate from their respective policy because there is no policy that can achieve higher utility against the opponent’s policy. A best response for player i is bi(π−i) = argmaxπ′i ui(π ′ i, π−i). Finally define player i’s incentive to deviate (to a best response) as δi(π) = ui(bi(π−i), π−i)− ui(π). Then, π is a Nash equilibrium if and only if deviating to a best response does not raise a player’s utility:\n∀i, δi(π) = 0.\nHere, the zero on the right-hand side represents not having any incentive to deviate. However, how about if there is a small amount of incentive? The definition naturally extends to the approximate case where the right-hand size is non-zero. An empirical metric to compute how far an aribtrary policy is to a Nash equilibrium is then the sum over players:\nNashConv(π) = ∑ i δi(π) ≥ 0.\nNote that the maximal value for NashConv is twice the utility range (this would occur if each player uses a policy achieving the minimum utility, and there exists a best response which gets the maximum utility). 
In the poker literature there is a commonly used metric called exploitability, which computes the average rather than the sum: $\mathrm{Exploitability}(\pi) = \frac{\sum_i \delta_i(\pi)}{2}$.

These metrics measure the empirical distance to equilibrium over time, leading to an assessment of an algorithm's convergence rate in practice." }, { "heading": "3 The Advantage Regret-Matching Actor-Critic", "text": "Algorithm 1: Advantage Regret-Matching Actor-Critic
input: initial set of parameters $\theta_0$, number of players $n$
Set initial learning player $i \leftarrow 1$
for epoch $t \in \{0, 1, 2, \cdots\}$ do
  reset $D \leftarrow \emptyset$
  Let $\pi^t(s) = \mathrm{NormalizedReLU}(\bar{W}_{\theta_t}(s))$
  Let $v_{\theta_t}(h) = \sum_{a \in A(h)} \pi^t(h, a)\, q_{\theta_t}(h, a)$
  Let $\mu^t_i$ be a behavior policy for player $i$
  for episode $k \in \{1, \ldots, K_{act}\}$ do
    $i \leftarrow (i + 1) \bmod n$
    Sample $j \sim \mathrm{Unif}(\{0, 1, \cdots, t-1\})$
    Sample trajectory $\rho \sim (\mu_i, \pi^j_{-i})$
    let $d \leftarrow (i, j, \{u_i(\rho)\}_{i \in \mathcal{N}})$
    for history $h \in \rho$ where player $i$ acts do
      let $s$ be the state containing $h$
      let $\vec{r} = \{q_{\theta_j}(h, a') - v_{\theta_j}(h)\}_{a' \in A(s)}$
      let $a$ be the action that was taken in $\rho$
      append $(h, s, a, \vec{r}, \pi^j(s))$ to $d$
    end
    add $d$ to $D$
  end
  for learning step $k \in \{1, \ldots, K_{learn}\}$ do
    Sample a random episode/batch $d \sim \mathrm{Unif}(D)$:
    for history and corresponding state $(h, s) \in d$ do
      Use TB($\lambda$) to train critic $q_{\theta_t}(h, a)$
      If $\tau(s) = i$: train $\bar{W}_{\theta_t}$ to predict $A(h, a)$
      If $\tau(s) \in -i$: train $\bar{\pi}_{\theta_t}$ to predict $\pi^t(s)$
    end
  end
  Save $\theta_t$ for future retrospective replays; $\theta_{t+1} \leftarrow \theta_t$
end

ARMAC is a model-free RL algorithm motivated by CFR. Like algorithms in the CFR framework, ARMAC uses a centralized training setup and operates in epochs that correspond to CFR iterations. Like RCFR, ARMAC uses function approximation to generate policies. ARMAC was designed so that, as the number of samples per epoch increases and the expressiveness of the function approximator approaches a lookup table, the generated sequence of policies approaches that of CFR. Instead of accumulating cumulative regrets (which is problematic for a neural network), the algorithm learns a conditional advantage estimate $\bar{W}(s,a)$ by regression toward a history-dependent advantage $A(h,a)$, for $h \in s$, and uses it to derive the next set of joint policies that CFR would produce. Indeed, we show that $\bar{W}(s,a)$ is an estimate of the cumulative regret $R(s,a)$ up to a multiplicative factor which is a function of the information state $s$ only, and which thus cancels out during the regret-matching step. ARMAC is a Monte Carlo algorithm in the same sense as MCCFR: value estimates are trained from full episodes. It uses off-policy learning to train the value estimates (i.e., critics), which we show is sufficient to derive $\bar{W}$. However, contrary to MCCFR, it does not use importance sampling. ARMAC is summarized in Algorithm 1.

ARMAC runs over multiple epochs $t$ and produces a joint policy $\pi^{t+1}$ at the end of each epoch. Each epoch starts with an empty data set $D$ and simulates a variety of joint policies while executing multiple training iterations of the relevant function approximators. ARMAC trains several estimators, which can be either heads on the same neural network or separate neural networks. The first one estimates the history-action values $q_{\pi^t,i}(h,a) = \sum_{z \in Z(h,a)} \eta^{\pi^t}(h,z)\, u_i(z)$. This estimator² can be trained on all previous data by using any off-policy policy evaluation algorithm from experiences stored in replay memory (we use Tree-Backup($\lambda$) (26)).
²In practice, rather than using $h$ as input to our approximators, we use a concatenation of all players' observations, i.e., an encoding of the augmented information states or action-observation histories (9; 18). In some games this is sufficient to recover the full history. In others, where there is state hidden from all players, we can consider any chance event to be delayed until the first observation of its effects by any of the players in the game; thus, the critics represent an expectation over those hidden outcomes. Since this does not affect the theoretical results, we choose this notation for simplicity. Importantly, ARMAC remains model-free: we never enumerate chance moves explicitly nor evaluate their probabilities, which may be complex in many practical applications.
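To make the acting phase of Algorithm 1 concrete, the sketch below (our illustration; the environment, critic, and policy interfaces are assumed names, not taken from the paper) collects one episode of training data:

import random

def collect_episode(env, i, mu_i, past_policies, q_critic, v_critic):
    # One acting episode: player i explores with behavior policy mu_i, while
    # the other players follow a uniformly sampled past joint policy pi^j.
    j = random.randrange(len(past_policies))      # j ~ Unif({0, ..., t-1})
    pi_j = past_policies[j]
    episode = []
    h = env.reset()
    while not env.is_terminal(h):
        s = env.info_state(h)
        legal = env.legal_actions(s)
        if env.to_play(h) == i:
            a = mu_i.sample(s)
            # Regret vector from the epoch-j critics: q_j(h, a') - v_j(h).
            r = [q_critic(j, h, ap) - v_critic(j, h) for ap in legal]
            episode.append((h, s, a, r, pi_j.probs(s)))
        else:
            a = pi_j.sample(s)
        h = env.step(h, a)
    return i, j, env.returns(h), episode          # the tuple d of Algorithm 1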
If the critic were trained to zero error, it would produce the same history-value estimates as recursive CFR computes in its tree pass. Secondly, the algorithm also trains a state-action network $\bar{W}^t_i(s,a)$ that estimates the expected advantage $A_{\mu^t,i}(h,a) = q_{\mu^t,i}(h,a) - v_{\mu^t,i}(h)$ conditioned on $h \in s$ when following some mixture policy $\mu^t$ (which will be precisely defined in Section B). It happens that $\bar{W}^t_i(s,a)$ is an estimate of the cumulative regret $R^t(s,a)$ multiplied by a (non-negative) function which depends on the information state $s$ only, and thus does not affect the policy improvement step by regret matching (see Lemma 1). Once $\bar{W}^t_i(s,a)$ is trained, the next joint policy $\pi^{t+1}(s,a)$ can be produced by normalizing its positive part as in Eq. 1. After each training epoch, the joint policy $\pi^t$ is saved into a past-policy reservoir, as it will have to be loaded and played during future epochs. Lastly, an average policy head $\bar{\pi}^t$ is also trained via a classification loss to predict the policies $\pi^{t'}$ over all time steps $t' \le t$. We explain its use in Section 4.

Using a history-based critic allows ARMAC to avoid the importance-weight (IW) based off-policy correction used in MCCFR, but at the cost of higher bias due to inaccuracies in the critic. Using IW may be especially problematic for long games. For large games, the critic will inevitably rely on generalization to produce history-value estimates.

To save memory, reservoir sampling with a buffer of size 1024 was used to prune past policies.

The algorithm also works in the single-agent case by treating all opponent reach probabilities as 1. More details and results are given in Appendix Sections C.1 and D.

Our main theoretical result is that ARMAC learns a function $W^T$ which is a stand-in replacement for the cumulative regrets $R^T$ of CFR. See Appendix B for an analysis of ARMAC's theoretical properties.

A worked-out example is given in Appendix Section A." }, { "heading": "3.1 Adaptive Policy Selection", "text": "ARMAC dynamically selects which policy to use based on estimated returns. For every $t$ there is a pool of candidate policies, all based on the following four policies: (i) the random uniform policy; (ii) several policies defined by applying Eq. 1 over the current epoch's regret only ($q_{\theta_t}(h,a) - v_{\theta_t}(h)$), with different levels of random uniform exploration, $\epsilon \in \{0.0, 0.01, 0.05\}$; (iii) several policies defined by the mean regret, $\pi^t$, as stated in Algorithm 1, with the same levels of exploration; and (iv) the average policy $\bar{\pi}^t$ trained via classification. ARMAC generates experiences using these policies to ease the problem of exploration and to help produce meaningful data in the initial stages of learning, before average regrets are learnt. Each epoch, the candidate policies are ranked by cumulative return against an opponent playing $\bar{\pi}_{\theta_t}$.
The policy producing the highest return is used half of the time. When sub-optimal policies are run for players $-i$, they are not used to train mean regrets for player $i$, but they are used to train the critic. Typically, (ii) produces the best policy initially and allows the learning process to bootstrap with the best data (Fig. 2). In later stages of learning, (iii) with the smallest $\epsilon$ yields better policies and is consistently picked over the other candidates. The more complex the game, the longer it takes for (iii) to take over from (ii).

The exploratory policy $\mu^T_i$ is constructed by taking the most recent neural network with 50% probability, or otherwise sampling one of the past neural networks uniformly, and modulating it by the method described above." }, { "heading": "3.2 Network architecture", "text": "ARMAC can be used with both feed-forward (FF) and recurrent neural networks (RNN) (Fig. 6(a)). For small games where information states can easily be represented, FF networks were used. For larger games, where consuming observations rather than information states is more natural, RNNs were used. More details can be found in Appendix Section F." }, { "heading": "4 Empirical Evaluation", "text": "For partially-observable multiagent environments, we investigate Imperfect Information (II-) Goofspiel, Liar's Dice, Leduc Poker, and betting-abstracted no-limit Texas Hold'em poker (in Section 4.1). Goofspiel is a bidding card game where players spend bid cards to collect points from a deck of point cards. Liar's Dice is a 1-die versus 1-die variant of the popular game where players alternate bidding on the dice values. Leduc poker is a two-round poker game with a 6-card deck, fixed bet amounts, and a limit on betting. Longer descriptions of each game can be found in (24). We use OpenSpiel (19) implementations with default parameters for Liar's Dice and Leduc poker, and a 5-card deck with descending points order for II-Goofspiel. To show empirical convergence, we use NashConv, the sum over each player's incentive to deviate unilaterally to their best response (21), which can be interpreted as an empirical distance from Nash equilibrium (reaching Nash at 0).

We compare empirical convergence to approximate Nash equilibria using a model-free sampled form of Regression CFR (34) (MC-RCFR). Trajectories are obtained using outcome sampling MCCFR (20), which uses off-policy importance sampling to obtain unbiased estimates of immediate regrets $\hat{r}$ and average strategy updates $\hat{s}$, with individual (learned) state-action baselines (27) to reduce variance. A regressor then predicts $\hat{\bar{R}}$, and a policy is obtained via Eq. 1; the average strategy is handled similarly. Each episode, the learning player $i$ plays with an $\epsilon$-on-policy behavior policy (while the opponent(s) play on-policy) and adds every datum $(s, \hat{r}, \pi(\hat{\bar{R}}))$ to a data set $D$, with a retention rule based on reservoir sampling so that it approximates a uniform sample of all the data ever seen. MC-RCFR is related, but not equivalent, to a variant of Deep CFR (5) based on outcome sampling (OS-DeepCFR) (31). Our results differ significantly from the OS-DeepCFR results reported in (31), and we discuss differences in assumptions and experimental setup from previous work in Appendix C.

As with ARMAC, the input is raw bits with no expert features. We use networks with roughly the same number of parameters as in the ARMAC experiments: feed-forward with 4 hidden layers of 128 units with concatenated ReLU (28) activations, trained using the Adam optimizer.
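For concreteness, a minimal PyTorch sketch of such a network follows (our illustration; the function and class names are ours). Concatenated ReLU returns both the positive and negative rectifications, so each hidden layer doubles its output width:

import torch
import torch.nn as nn

class CReLU(nn.Module):
    # Concatenated ReLU (28): [relu(x), relu(-x)], doubling the feature width.
    def forward(self, x):
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=-1)

def make_regret_net(in_dim, n_actions, width=128, depth=4):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), CReLU()]
        d = 2 * width                       # CReLU doubles the feature size
    layers.append(nn.Linear(d, n_actions))  # one mean-regret output per action
    return nn.Sequential(*layers)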
We provide details of the sweep over hyper-parameters in Appendix C.

Next we compare ARMAC to NFSP (16), which combines fictitious play with deep neural network function approximators. Two data sets, $D_{RL}$ and $D_{SL}$, store transitions of sampled experience for reinforcement learning and supervised learning, respectively. $D_{RL}$ is a sliding window used to train a best-response policy to $\bar{\pi}_{-i}$ via DQN. $D_{SL}$ uses reservoir sampling to train $\bar{\pi}_i$, an average over all past best-response policies. During play, each agent mixes between its best-response policy and its average policy. This stabilizes learning and enables the average policies to converge to an approximate Nash equilibrium. Like ARMAC and MC-RCFR, NFSP does not use any expert features.

Convergence plots for MC-RCFR and NFSP are shown in Figure 3, and for ARMAC in Figure 4. The NashConv values of ARMAC are lower than NFSP's on Liar's Dice, higher on Goofspiel, and significantly lower than MC-RCFR's in all cases. The MC-RCFR results are consistent with the outcome sampling results in DNCFR (22). Both DNCFR and Deep CFR compensate for this problem by instead using external and robust sampling, which require a forward model. So, next we investigate the performance of ARMAC in a much larger game." }, { "heading": "4.1 No-Limit Texas Hold'em", "text": "We ran ARMAC on no-limit Texas Hold'em poker, using the common { Fold, Call, Pot, All-in } (FCPA) action/betting abstraction. This game is orders of magnitude larger than the games used above ($\approx 4.42 \times 10^{13}$ information states). Action abstraction techniques were used by all state-of-the-art Poker AI bots up to 2017. The modern search-based techniques of DeepStack (25) and Libratus (6) still include action abstraction in the search tree.

Computing NashConv requires traversing the whole game and querying the network at each information state, which becomes infeasible for large games. Instead, we use local best response (LBR) (23). LBR is an exploiter agent that produces a lower bound on exploitability: given some policy $\pi_{-i}$, it does a shallow search using the policy at opponent nodes and a poker-specific heuristic evaluation at the frontier of the search. LBR found that previous competition-winning abstraction-based Poker bots were far more exploitable than first expected. In our experiments, LBR was limited to the betting abstractions FCPA and FC. We used three versions of LBR: LBR-FCPA, which uses all 4 actions within the abstraction; LBR-FC, which uses the more limited action set { Fold, Call }; and LBR-FC12-FCPA34, which uses { Fold, Call } for the first two rounds and FCPA for the rest.

We first computed the average return that an ARMAC-trained policy achieves against uniform random. Over 200000 episodes, the mean value was 516 (chips) ± 25 (95% c.i.). Similarly, we evaluated the policy against LBR-FCPA; it won 519 ± 81 (95% c.i.) per episode. Hence, LBR-FCPA was unable to exploit the policy. ARMAC also beat LBR-FC12-FCPA34 by 867 ± 87 (95% c.i.). Interestingly, ARMAC learned to beat those two versions of LBR surprisingly quickly. A randomly initialized ARMAC network lost against LBR-FCPA by -704 ± 191 (95% c.i.) and against LBR-FC12-FCPA34 by -230 ± 222 (95% c.i.), but was beating both after a mere 1 hour of training, by 561 ± 163 (95% c.i.) and 427 ± 140 (95% c.i.) respectively (about 3 million acting steps, 11 thousand learning steps).

Counter-intuitively, ARMAC was exploited by LBR-FC, which uses the more limited action set. ARMAC scored -46 ± 26 (95% c.i.)
per episode after 18 days of training on a single GPU: 1.3 billion acting steps (rounds), 5 million learning steps, and 50000 CFR epochs (Figure 5). To the best of our knowledge, this is the first time LBR has been used to approximate exploitability in any form of no-limit Texas Hold'em among this class of algorithms." }, { "heading": "5 Conclusion and Future Work", "text": "ARMAC was demonstrated to work on both single-agent and multi-agent benchmarks. It brings back ideas from computational game theory to address exploration issues while at the same time being able to handle learning in non-stationary environments. As future work, we intend to apply it to more general classes of multiagent games; ARMAC has the appealing property that it already stores the joint policies and history-based critics, which may be sufficient for convergence to one of the classes of extensive-form correlated equilibria (10; 12; 11)." }, { "heading": "A Worked-out Example", "text": "We now show an example of how ARMAC works on the simple game of Kuhn poker, shown in Figure 1.

Suppose ARMAC has already run for $t = 50$ epochs, so 50 networks have been saved, and the exploring player is the first player, $i = 1$. The first player uses an exploratory behavior policy $\mu^t_i$ as described above. The second player uses network $j = 17$, sampled from $\mathrm{Unif}(\{0, 1, \cdots, 49\})$. For this episode, chance samples 1Q2K. This happens with probability one sixth, so $\eta_{-i}(h) = \frac{1}{6}$ (chance is always treated as an opponent with a fixed policy), whereas player 1 has not taken any actions, so $\eta_i(h) = 1$. Along this episode $\rho$, the first player samples actions according to $\mu_i$ and the second player according to $\pi^{17}_{-i}$. Suppose then that player 1 samples bet and player 2 samples bet (call), leading to $u_1(\rho) = -2$ for player 1 and $u_2(\rho) = 2$ for player 2. There are two histories traversed; call them $h$ and $h'$ respectively. For each one, the regret vectors $\vec{r}$ are determined by the critics, $q_{\theta_j}(h, a') - v_{\theta_j}(h)$, where $a'$ is one of the two legal actions. Trajectory $\rho$ is added to the buffer $D$, and many similar episodes take place. Finally, in the learning phase: ARMAC uses all the data collected to train the critics using standard $\ell_2$ regression losses on the TD error defined by TB($\lambda$); all the data can be used because TB is off-policy, allowing the exploratory behavior $\mu^{50}_i$. Suppose examples from the first trajectory $\rho$ are sampled: only data from the first player (history $h$) are used to train $\bar{W}_{\theta_t}$ on the advantage $A(h,a)$ using a standard regression loss; this is because, to asymptotically approach CFR, only the exploring player can train regrets, leading to a scaling constant that is a function only of the information state (for more detail, see Appendix B). Finally, only the second player's actions are used to train the average network $\bar{\pi}$ using a classification loss, as the second player in $\rho$ was playing according to CFR's average policy across 50 epochs (due to sampling $j$ uniformly and then playing $\pi^j_{-i}$ without exploration)." }, { "heading": "B Theoretical Properties", "text": "Each epoch $t$ estimates the values $q_{\pi^t,i}(h,a) = \sum_{z \in Z(h,a)} \eta^{\pi^t}(h,z)\, u_i(z)$ and $v_{\pi^t,i}(h) = \sum_a \pi^t(h,a)\, q_{\pi^t,i}(h,a)$ for the current policies ($\pi^t$). Let us write the advantages as $A_{\pi^t,i}(h,a) = q_{\pi^t,i}(h,a) - v_{\pi^t,i}(h)$. Notice that we learn functions of the history $h$ and not of the state $s$.

At epoch $T$, in order to deduce the next policy $\pi^{T+1}$, CFR applies regret matching using the cumulative counterfactual regret $R^T_i(s,a)$.
As already discussed, directly estimating $R^T_i$ by sampling suffers from high variance due to the inverse probability $\eta^{(\mu_i,\pi_{-i})}_i(z)$ in (2). Instead, ARMAC trains a network $\bar{W}^T_i(s,a)$ that estimates a conditional advantage along trajectories generated in the following way. For player $i$ we select a behavior policy $\mu^T_i$ providing good state-space coverage, e.g., a mixture of past policies $(\pi^t_i)_{t \le T}$ with some added exploration (Section 3.1 provides more details). For the other players $-i$, for every trajectory, we choose one of the previous opponent policies $\pi^j_{-i}$, played at some epoch $j$ chosen uniformly at random from $\{1, 2, \cdots, T\}$. Thus at epoch $T$, several trajectories $\rho_j$ are generated by following the policy $(\mu^T_i, \pi^j_{-i})$, where $j \sim U(\{1, 2, \cdots, T\})$.

Then at each step $(h,a)$ along these trajectories $\rho_j$, the neural network estimate $\bar{W}^T_i(s,a)$ (where $s \ni h$) is trained to predict the advantage $A_{\pi^j,i}(h,a)$ using the empirical $\ell_2$ loss $\hat{L} = \big[\bar{W}^T_i(s,a) - A_{\pi^j,i}(h,a)\big]^2$. Thus the corresponding average loss is

$L = \frac{1}{T} \sum_{j=1}^T \mathbb{E}_{\rho_j \sim (\mu^T_i, \pi^j_{-i})}\big[\hat{L}\big] = \frac{1}{T} \sum_{j=1}^T \sum_{s \in \mathcal{S}_i} \sum_{h \in s} \eta^{(\mu^T_i, \pi^j_{-i})}(h)\, \mu^T_i(s,a) \big[\bar{W}^T_i(s,a) - A_{\pi^j,i}(h,a)\big]^2.$

If the network has sufficient capacity, it will minimize this average loss, and $\bar{W}^T_i(s,a)$ will converge (as the number of trajectories goes to $\infty$), in each state-action pair $(s,a)$ whose reach probability satisfies $\frac{1}{T}\sum_j \eta^{(\mu^T_i, \pi^j_{-i})}(s)\, \mu^T_i(s,a) > 0$, to the conditional expectation

$W^T_i(s,a) = \frac{\sum_{h \in s} \frac{1}{T}\sum_{j=1}^T \eta^{(\mu^T_i, \pi^j_{-i})}(h)\, A_{\pi^j,i}(h,a)}{\frac{1}{T}\sum_{j=1}^T \eta^{(\mu^T_i, \pi^j_{-i})}(s)} \underbrace{=}_{\text{perfect recall}} \frac{\sum_{h \in s} \frac{1}{T}\sum_{j=1}^T \eta^{\pi^j}_{-i}(h)\, A_{\pi^j,i}(h,a)}{\frac{1}{T}\sum_{j=1}^T \eta^{\pi^j}_{-i}(s)} \quad (3)$

Notice that $W^T_i$ does not depend on the exploratory policy $\mu^T_i$ chosen for player $i$ in round $T$. After several trajectories $\rho_j$, our network $\bar{W}^T_i$ provides us with a good approximation of the $W^T_i$ values, and we use it in a regret-matching update to define the next policy, $\pi^{T+1}_i(s) = \mathrm{NormalizedReLU}(\bar{W}^T_i)$, i.e., Equation 1. Lemma 1 shows that if $\bar{W}^T_i(s,a)$ is sufficiently close to the $W^T_i(s,a)$ values, then this is equivalent to CFR, i.e., to doing regret matching using the cumulative counterfactual regret $R^T$.

Lemma 1. The policy defined by $\mathrm{NormalizedReLU}(W^T_i)$ is the same as the one produced by CFR when regret matching is employed as the information-state learner:

$\pi^{T+1}_i(s,a) = \frac{R^{T,+}_i(s,a)}{\sum_b R^{T,+}_i(s,b)} = \frac{W^{T,+}_i(s,a)}{\sum_b W^{T,+}_i(s,b)}. \quad (4)$

Proof. First, let us notice that

$W^T_i(s,a) = \frac{\sum_{h \in s} \sum_{t=1}^T \eta^{\pi^t}(h)\, A_{\pi^t,i}(h,a)}{\sum_{t=1}^T \eta^{\pi^t}(s)} \quad (5)$

$= \frac{\sum_{h \in s} \sum_{t=1}^T \eta^{\pi^t}_{-i}(h)\, A_{\pi^t,i}(h,a)}{\sum_{t=1}^T \eta^{\pi^t}_{-i}(s)} \quad (6)$

$= \frac{1}{w^T(s)} \sum_{t=1}^T \sum_{h \in s} \eta^{\pi^t}_{-i}(h)\, A_{\pi^t,i}(h,a), \quad (7)$

where we used the perfect recall assumption in the first derivation, and we define $w^T(s) = \sum_t \eta^{\pi^t}_{-i}(s)$. Notice that $w^T(s)$ depends on the state only (and not on $h$). Now the cumulative regret is:

$R^T_i(s,a) = \sum_{t=1}^T q^c_{\pi^t,i}(s,a) - v^c_{\pi^t,i}(s) = \sum_{t=1}^T \eta^{\pi^t}_{-i}(s)\big(q_{\pi^t,i}(s,a) - v_{\pi^t,i}(s)\big) = \sum_{t=1}^T \eta^{\pi^t}_{-i}(s) \sum_{h \in s} \frac{\eta^{\pi^t}_{-i}(h)}{\eta^{\pi^t}_{-i}(s)} \big(q_{\pi^t,i}(h,a) - v_{\pi^t,i}(h)\big) = \sum_{t=1}^T \sum_{h \in s} \eta^{\pi^t}_{-i}(h)\, A_{\pi^t,i}(h,a) = w^T(s)\, W^T_i(s,a).$

Finally, noticing that regret matching is not affected by multiplying the cumulative regret by a positive function of the state, we deduce

$\frac{R^{T,+}_i(s,a)}{\sum_b R^{T,+}_i(s,b)} = \frac{\big(w^T(s)\, W^T_i(s,a)\big)^+}{\sum_b \big(w^T(s)\, W^T_i(s,b)\big)^+} = \frac{W^{T,+}_i(s,a)}{\sum_b W^{T,+}_i(s,b)}.$

The network $\bar{W}^T(s,a)$ estimates the expected advantage $\frac{1}{T}\sum_{j=1}^T A_{\pi^j}(h,a)$ conditioned on $h \in s$.
Thus ARMAC does not suffer from the variance of estimating the cumulative regret $R^T(s,a)$, and in the case of infinite capacity, for any $(s,a)$, the estimate $\bar{W}^T(s,a)$ is unbiased as soon as $(s,a)$ has been sampled at least once:

Lemma 2. Consider the case of a tabular representation and define the estimate $\hat{W}^T_i(s,a)$ as the minimizer (over $W$) of the empirical loss defined over $N$ trajectories,

$\hat{L}_{(s,a)}(W) = \frac{1}{N} \sum_{n=1}^N \big[W - A_{\pi^{j_n},i}(h,a)\big]^2\, \mathbb{I}\{(h,a) \in \rho_{j_n} \text{ and } h \in s\},$

where $\rho_{j_n}$ is the $n$-th trajectory, generated by the policy $(\mu^T_i, \pi^{j_n}_{-i})$ with $j_n \sim U(\{1, \ldots, T\})$. Define $N(s,a) = \sum_{n=1}^N \mathbb{I}\{(h,a) \in \rho_{j_n} \text{ and } h \in s\}$ to be the number of trajectories going through $(s,a)$. Then $\hat{W}^T_i(s,a)$ is an unbiased estimate of $W^T_i(s,a)$, conditioned on $(s,a)$ being traversed at least once:

$\mathbb{E}\big[\hat{W}^T_i(s,a) \mid N(s,a) > 0\big] = W^T_i(s,a).$

Proof. The empirical loss being quadratic, under the event $\{N(s,a) > 0\}$ its minimum is well defined and reached at

$\hat{W}^T_i(s,a) = \frac{1}{N(s,a)} \sum_{n=1}^{N(s,a)} A_{\pi^{j_n},i}(h_n, a),$

where $h_n \in s$ is the history of the $n$-th trajectory traversing $s$. Let us use simplified notation and write $A_n = A_{\pi^{j_n},i}(h,a)\, \mathbb{I}\{(h,a) \in \rho_{j_n} \text{ and } h \in s\}$ and $b_n = \mathbb{I}\{(h,a) \in \rho_{j_n} \text{ and } h \in s\}$. Thus

$\mathbb{E}\Big[\hat{W}^T_i(s,a)\, \mathbb{I}\Big\{\sum_{m=1}^N b_m > 0\Big\}\Big] = \mathbb{E}\Big[\frac{\sum_{n=1}^N A_n\, \mathbb{I}\{\sum_{m=1}^N b_m > 0\}}{\sum_{m=1}^N b_m}\Big] = \sum_{n=1}^N \mathbb{E}\Big[\mathbb{E}\Big[A_n \,\Big|\, \sum_{m=1}^N b_m\Big] \frac{\mathbb{I}\{\sum_{m=1}^N b_m > 0\}}{\sum_{m=1}^N b_m}\Big].$

Now, $\mathbb{E}\big[A_n \mid \sum_{m=1}^N b_m\big] = \mathbb{E}[A_n \mid b_n]\, \mathbb{E}\big[b_n \mid \sum_{m=1}^N b_m\big]$, since given $b_n$, $A_n$ is independent of $\sum_{m=1}^N b_m$. Thus

$\mathbb{E}\Big[\hat{W}^T_i(s,a)\, \mathbb{I}\Big\{\sum_{m=1}^N b_m > 0\Big\}\Big] = \sum_{n=1}^N \mathbb{E}[A_n \mid b_n]\, \mathbb{E}\Big[\frac{b_n\, \mathbb{I}\{\sum_{m=1}^N b_m > 0\}}{\sum_{m=1}^N b_m}\Big].$

Since $\sum_{n=1}^N \mathbb{E}\Big[\frac{b_n\, \mathbb{I}\{\sum_m b_m > 0\}}{\sum_m b_m}\Big] = \mathbb{E}\Big[\frac{\sum_n b_n}{\sum_m b_m}\, \mathbb{I}\Big\{\sum_m b_m > 0\Big\}\Big] = P\Big(\sum_m b_m > 0\Big)$, by a symmetry argument we deduce $\mathbb{E}\Big[\frac{b_n\, \mathbb{I}\{\sum_m b_m > 0\}}{\sum_m b_m}\Big] = \frac{1}{N}\, P\big(\sum_m b_m > 0\big)$ for each $n$. Thus

$\mathbb{E}\big[\hat{W}^T_i(s,a) \mid N(s,a) > 0\big] = \frac{\mathbb{E}\big[\hat{W}^T_i(s,a)\, \mathbb{I}\{\sum_m b_m > 0\}\big]}{P\big(\sum_m b_m > 0\big)} = \frac{1}{N} \sum_{n=1}^N \mathbb{E}[A_n \mid b_n] = \mathbb{E}[A_1 \mid b_1],$

which is the expectation of the advantage $A_{\pi^j,i}(h,a)$ conditioned on the trajectory $\rho_j$ going through $h \in s$, i.e., $W^T_i(s,a)$ as defined in (3)." }, { "heading": "C Baseline Details and Hyperparameters", "text": "For MC-RCFR, we sweep over all combinations of the exploration parameter, the use of a (learned) state-action baseline (27), and the learning rate: $(\epsilon, b, \alpha) \in \{0.25, 0.5, 0.6, 0.7, 0.9, 0.95, 1.0\} \times \{\text{True}, \text{False}\} \times \{0.0001, 0.00005, 0.00001\}$, where each combination is averaged over five seeds. We found that higher exploration values worked consistently better, which matches the motivation of the robust sampling technique (corresponding to $\epsilon = 1$) presented in (22), as it leads to reduced variance since part of the correction term is constant for all histories in an information state. The baseline helped significantly in the larger game with more variable-length episodes.

For NFSP, we keep a set of hyperparameters fixed, in line with (21) and (16): anticipatory parameter $\eta = 0.1$, $\epsilon$-greedy decay duration 20M steps, reservoir buffer capacity 2M entries, replay buffer capacity 200k entries, while sweeping over combinations of the following hyperparameters: $\epsilon$-greedy starting value $\{0.06, 0.24\}$, RL learning rate $\{0.1, 0.01, 0.001\}$, SL learning rate $\{0.01, 0.001, 0.005\}$, and DQN target network update period of $\{1000, 19200\}$ steps (the latter is equivalent to 300 network-parameter updates). Each combination was averaged over three seeds.
Agents were trained with the Adam optimizer, using an MSE loss for DQN and one gradient update step with mini-batch size 128 every 64 steps in the game.

Finally, note that there are at least four differences in the results, experimental setup, and assumptions between MC-RCFR and the OS-DeepCFR reported in (31):

1. (31) uses domain-expert input features which do not generalize outside of poker. The neural network architecture we use is a basic MLP with raw input representations, whereas (31) uses a far larger network. Our empirical results on benchmark games compare the convergence properties of knowledge-free algorithms across domains.

2. The amount of training per iteration is an order of magnitude larger in OS-DeepCFR than in our training. In (31), every 346 iterations the Q-network is trained using 1000 minibatches of 512 samples (512000 examples), whereas every 346 iterations we train 346 batches of 128 samples (44288 examples).

3. MC-RCFR uses standard outcome sampling rather than Linear CFR (7).

4. MC-RCFR's strategy is approximated by predicting the outcome sampler's average-strategy increment rather than sampling from a buffer of previous models.

Our NFSP also does not use any extra enhancements.

C.1 Single-Agent Environments

Despite ARMAC being based on commonly-used multiagent algorithms, it has properties that may be desirable in the single-agent setting. First, similar to policy gradient algorithms in the common "short corridor example" (32, Example 13.1), stochastic policies are representable by definition, since they are normalized positive mean regrets over the actions. This could have the practical effect that entropy bonuses typically have in policy gradient methods, but rather than simply adding arbitrary entropy, the relative regret over the set of past policies is taken into account.

Second, a retrospective agent uses a form of directed exploration driven by a set of different exploration policies (2). Here, this is achieved by the simulation $(\mu^T_i, \pi^t_{-i})$, which could be desirable whenever there is overlapping structure in successive tasks. Here $\mu^T_i$ is an exploratory policy consisting of a mixture of all past policies (plus random uniform), further modulated with different amounts of random uniform exploration (more details are given in Section 3.1). Consider the gridworld illustrated in Fig. 6(b). Green squares indicate positions where agent $i$ receives a reward and the game terminates. Most RL algorithms would find the reward of +1 first, as it is closest to the origin S. Once this reward is found, a policy would quickly learn to approach it, and finding the reward of +2 would become problematic. ARMAC, meanwhile, keeps re-running old policies, some of which pre-date finding the reward of +1, and thus would have a reasonable chance of finding +2 by random exploration. This behavior may also be useful if, instead of terminating the game, reaching one of those two rewards started the next level, both versions of which would have to be explored.

These properties are not necessarily specific to ARMAC. For example, Politex (another retrospective policy improvement algorithm (1)) has similar properties, keeping its past approximators intact. Like Politex, we show an initial investigation of ARMAC in Atari in Appendix D. Average strategy sampling MCCFR (13) also uses exploration policies that are a mixture of previous policies and uniform random to improve performance over the external and outcome sampling variants.
However, this exact sampling method cannot be used directly in ARMAC, as it requires a model of the game.

D Initial Investigation of ARMAC in the Atari Learning Environment

While performance on Atari is not the main contribution, it should be treated as a sanity check of the algorithm. Unlike the previously tested multiplayer games, many Atari games pose a long-term credit assignment problem. Some of them, like Montezuma's Revenge, are well-known hard exploration problems. It is interesting to see that ARMAC was able to consistently score 2500 points on Montezuma's Revenge despite not using any auxiliary rewards, demonstrations, or a distributional RL critic. We hypothesize that regret matching may be advantageous for exploration, as it provides naturally stochastic policies which stay stochastic until the regrets for other actions become negative. We also tested the algorithm on Breakout, as it is a fine-control problem. We are not claiming that our results on Atari are state of the art; they should be interpreted as a basic sanity check showing that ARMAC could in principle work in this domain." }, { "heading": "E Training", "text": "Training is done by processing a batch of 64 trajectories of length 32 at a time. In order to implement full recall, all unfinished episodes are continued on the next training iteration by propagating the recurrent network states forward. Whenever an episode finishes at a particular batch entry, a new one is sampled and unrolled from the beginning.

The Adam optimizer with $\beta_1 = 0.0$ and $\beta_2 = 0.999$ was used for optimization. Hyperparameter selection was done by trying only two learning rates: $5 \cdot 10^{-5}$ and $2 \cdot 10^{-4}$. The reported results use $5 \cdot 10^{-5}$ in all games, including Atari." }, { "heading": "F Neural Network Architecture", "text": "The following recurrent neural network was used for the no-limit Texas Hold'em experiments. Two separate recurrent networks with shared parameters were used, consuming the observations of each player respectively. Each of these networks consisted of a single linear layer mapping the input representation to a vector of size 256, followed by a double rectified linear unit producing a representation of size 512, followed by an LSTM with 256 hidden units. This produced an information state representation for each player, $a_0$ and $a_1$.

Define architecture $B(x)$, which will be reused several times. It consumes one of the information state representations produced by the previously mentioned RNN: $h_1 = \mathrm{Linear}(128)(x)$, $h_2 = \mathrm{DoubleReLU}(h_1)$, $h_3 = h_1 + \mathrm{Linear}(128)(h_2)$, $B(x) = \mathrm{DoubleReLU}(h_3)$.

The immediate regret head is formed by applying $B(\cdot)$ to the information state representation, followed by a single linear layer whose size is the number of actions in the game. The same is done for the average regret head and the mean policy head. These $B(\cdot)$ blocks do not share weights among themselves, but each shares weights with the corresponding head for the other player.

The global critic $q(h)$ is defined in the following way: $n_A = \mathrm{Linear}(128)$, $n_B = \mathrm{Linear}(128)$, $a_0 = n_A(s_0) + n_B(s_1)$, $a_1 = n_B(s_0) + n_A(s_1)$, $h_1 = \mathrm{Concat}(a_0, a_1)$, $h_2 = B(h_1)$, and finally $q_0(s_1, s_2)$ and $q_1(s_1, s_2)$ are computed by two linear layers on top of $h_2$. This $B(x)$ shares the architecture but not the parameters of the ones used previously." } ]
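For concreteness, the following PyTorch-style sketch (ours, with assumed names; we read the "double rectified linear unit" as a width-doubling concatenated ReLU, consistent with the 256-to-512 mapping described above, which is an assumption) implements the $B(x)$ block and the global critic:

import torch
import torch.nn as nn

class DoubleReLU(nn.Module):
    # Width-doubling activation: [relu(x), relu(-x)].
    def forward(self, x):
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=-1)

class BlockB(nn.Module):
    # B(x): h1 = Linear(128)(x); h2 = DoubleReLU(h1);
    # h3 = h1 + Linear(128)(h2); B(x) = DoubleReLU(h3).
    def __init__(self, in_dim):
        super().__init__()
        self.l1 = nn.Linear(in_dim, 128)
        self.l2 = nn.Linear(256, 128)
        self.act = DoubleReLU()

    def forward(self, x):
        h1 = self.l1(x)
        h3 = h1 + self.l2(self.act(h1))   # residual connection at width 128
        return self.act(h3)               # output width 256

class GlobalCritic(nn.Module):
    # q(h) from the two per-player information-state representations s0, s1.
    def __init__(self, rep_dim=256):
        super().__init__()
        self.nA = nn.Linear(rep_dim, 128)
        self.nB = nn.Linear(rep_dim, 128)
        self.body = BlockB(256)
        self.q0 = nn.Linear(256, 1)
        self.q1 = nn.Linear(256, 1)

    def forward(self, s0, s1):
        a0 = self.nA(s0) + self.nB(s1)
        a1 = self.nB(s0) + self.nA(s1)
        h2 = self.body(torch.cat([a0, a1], dim=-1))
        return self.q0(h2), self.q1(h2)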
2020
null
SP:87af788d5bd4c486de01969a93d8b49a9f494da1
[ "The paper presents a meta-learning method for learning Hamiltonian dynamic systems from data. More specifically, the novelty is incorporating Hamiltonian Neural Networks (HNNs) within known meta-learning methods (MAML and ANIL) in order to model new dynamical systems (with previously known structures but unknown parameters) from partially observed data. The results from the experimental evaluation (on three well-known systems) show that such an approach, and in particular HNNs w/ ANIL (HANIL), leads to more accurate models of unseen dynamics compared to other benchmarks methods such as \"vanilla\" HNNs and HNNs w/ MAML (HAMAML)." ]
Hamiltonian mechanics is an effective tool to represent many physical processes with concise yet well-generalized mathematical expressions. A well-modeled Hamiltonian makes it easy for researchers to analyze and forecast many related phenomena that are governed by the same physical law. However, in general, identifying a functional or shared expression of the Hamiltonian is very difficult. It requires carefully designed experiments and the researcher’s insight that comes from years of experience. We propose that meta-learning algorithms can be potentially powerful data-driven tools for identifying the physical law governing Hamiltonian systems without any mathematical assumptions on the representation, but with observations from a set of systems governed by the same physical law. We show that a well meta-trained learner can identify the shared representation of the Hamiltonian by evaluating our method on several types of physical systems with various experimental settings.
[ { "affiliations": [], "name": "TEMS VIA META-LEARNING" }, { "affiliations": [], "name": "Seungjun Lee" }, { "affiliations": [], "name": "Haesang Yang" }, { "affiliations": [], "name": "Woojae Seong" } ]
[ { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your MAML", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Vladimir Igorevich Arnol’d" ], "title": "Mathematical methods of classical mechanics, volume 60", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Steven Atkinson", "Waad Subber", "Liping Wang", "Genghis Khan", "Philippe Hawi", "Roger Ghanem" ], "title": "Data-driven discovery of free-form governing differential equations", "venue": null, "year": 1910 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Luca Bertinetto", "Joao F. Henriques", "Philip Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Josh Bongard", "Hod Lipson" ], "title": "Automated reverse engineering of nonlinear dynamical systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2007 }, { "authors": [ "Gert-Jan Both", "Subham Choudhury", "Pierre Sens", "Remy Kusters" ], "title": "Deepmod: Deep learning for model discovery in noisy data", "venue": null, "year": 1904 }, { "authors": [ "Steven L Brunton", "Joshua L Proctor", "J Nathan Kutz" ], "title": "Discovering governing equations from data by sparse identification of nonlinear dynamical systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing systems,", "year": 2018 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Miles Cranmer", "Alvaro Sanchez-Gonzalez", "Peter Battaglia", "Rui Xu", "Kyle Cranmer", "David Spergel", "Shirley Ho" ], "title": "Discovering symbolic models from deep learning with inductive biases", "venue": null, "year": 2006 }, { "authors": [ "Miles D Cranmer", "Rui Xu", "Peter Battaglia", "Shirley Ho" ], "title": "Learning symbolic physics with graph networks", "venue": "arXiv preprint arXiv:1909.05862,", "year": 2019 }, { "authors": [ "Kang Feng", "Mengzhao Qin" ], "title": "Symplectic geometric algorithms for Hamiltonian systems", "venue": null, "year": 2010 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sebastian Flennerhag", "Andrei A. 
Rusu", "Razvan Pascanu", "Francesco Visin", "Hujun Yin", "Raia Hadsell" ], "title": "Meta-learning with warped gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Samuel Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "E. Hairer", "S.P. Nørsett", "G. Wanner" ], "title": "Solving Ordinary Differential Equations I (2nd Revised", "venue": "Ed.): Nonstiff Problems. Springer-Verlag, Berlin, Heidelberg,", "year": 1993 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": "arXiv preprint arXiv:2004.05439,", "year": 2020 }, { "authors": [ "Khurram Javed", "Martha White" ], "title": "Meta-learning representations for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pengzhan Jin", "Zhen Zhang", "Aiqing Zhu", "Yifa Tang", "George Em Karniadakis" ], "title": "Sympnets: Intrinsic structure-preserving symplectic networks for identifying hamiltonian systems", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard S. Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yoonho Lee", "Seungjin Choi" ], "title": "Gradient-based meta-learning with learned layerwise metric and subspace", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for fewshot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Niall M Mangan", "J Nathan Kutz", "Steven L Brunton", "Joshua L Proctor" ], "title": "Model selection for dynamical systems via sparse regression and information criteria", "venue": "Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences,", "year": 2017 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Linda E Reichl" ], "title": "A modern course in statistical physics", "venue": "American Association of Physics Teachers,", "year": 1999 }, { "authors": [ "Samuel H Rudy", "Steven L Brunton", "Joshua L Proctor", "J Nathan Kutz" ], "title": "Data-driven discovery of partial differential equations", "venue": "Science Advances,", "year": 2017 }, { "authors": [ "Andrei A. 
Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Subham Sahoo", "Christoph Lampert", "Georg Martius" ], "title": "Learning equations for extrapolation and control", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jun John Sakurai", "Eugene D Commins" ], "title": "Modern quantum mechanics, revised edition", "venue": "American Association of Physics Teachers,", "year": 1995 }, { "authors": [ "Rick Salmon" ], "title": "Hamiltonian fluid mechanics", "venue": "Annual review of fluid mechanics,", "year": 1988 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A. Riedmiller", "Raia Hadsell", "Peter Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Victor Bapst", "Kyle Cranmer", "Peter Battaglia" ], "title": "Hamiltonian graph networks with ode integrators", "venue": "arXiv preprint arXiv:1909.12790,", "year": 2019 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "arXiv preprint arXiv:2002.09405,", "year": 2020 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Metalearning with memory-augmented neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in Neural Information Processing systems,", "year": 2017 }, { "authors": [ "Hayden Schaeffer" ], "title": "Learning partial differential equations via data discovery and sparse optimization", "venue": "Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences,", "year": 2016 }, { "authors": [ "Michael Schmidt", "Hod Lipson" ], "title": "Distilling free-form natural laws from experimental data", "venue": null, "year": 2009 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Peter Toth", "Danilo J. Rezende", "Andrew Jaegle", "Sébastien Racanière", "Aleksandar Botev", "Irina Higgins" ], "title": "Hamiltonian generative networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Silviu-Marian Udrescu", "Max Tegmark. 
Ai" ], "title": "feynman: A physics-inspired method for symbolic regression", "venue": "Science Advances,", "year": 2020 }, { "authors": [ "Zhongwen Xu", "Hado P van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Symplectic ode-net: Learning hamiltonian dynamics with control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Dissipative symoden: Encoding hamiltonian dynamics with dissipation and control into deep learning", "venue": "arXiv preprint arXiv:2002.08860,", "year": 2020 }, { "authors": [ "Allan Zhou", "Tom Knowles", "Chelsea Finn" ], "title": "Meta-learning symmetries by reparameterization", "venue": "arXiv preprint arXiv:2007.02933,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hamiltonian mechanics, a reformulation of Newtonian mechanics, can be used to describe classical systems by focusing on modeling continuous-time evolution of system dynamics with a conservative quantity called Hamiltonian (Goldstein et al., 2002). Interestingly, the formalism of the Hamiltonian provides both geometrically meaningful interpretation (Arnol’d et al., 2001) and efficient numerical schemes (Feng & Qin, 2010) representing the state of complex systems in phase space with symplectic structure. Although formalism was originally developed for classical mechanics, it has been applied to various fields of physics, such as fluid mechanics (Salmon, 1988), statistical mechanics (Reichl, 1999), and quantum mechanics (Sakurai & Commins, 1995).\nWhile it has many useful mathematical properties, establishing an appropriate Hamiltonian of the unknown phenomena is a challenging problem. A Hamiltonian for a system can be modeled by a shared expression of the Hamiltonian and physical parameters. For instance, the Hamiltonian of an ideal pendulum is described as H = p 2\n2ml2 + mgl(1 − cos q) (shared expression), with mass m, pendulum length l, and gravity constant g (physical parameters), whereas q and p are the angle of the pendulum and the corresponding conjugate momentum (state of the system), respectively. Once an appropriate functional of the Hamiltonian is established from observing several pendulums, a new pendulum-like system can be readily recognized by adapting new physical parameters on the expression. Therefore, identifying an appropriate expression of the Hamiltonian is an important yet extremely difficult problem in most science and engineering areas where there still remain numerous unknown processes where it is even uncertain whether a closed-form solution or mathematically clear expression exists.\nIn the recent era of deep learning, we can consider the use of learning-based algorithms to identify an appropriate expression of the Hamiltonian with sufficient data. To determine the Hamiltonian underlying the unknown physical process, the Hamiltonian should satisfy two fundamental conditions: (1) it should fit well on previously observed data or motions, (2) it should generalize well on newly observed data from new systems if the systems share the same physical law with previous ones. The first condition has been mitigated by explicitly incorporating symplectic structure or conservation laws on neural networks, called Hamiltonian neural networks (HNN) (Greydanus et al., 2019) for learning Hamiltonian dynamics. HNN and its variants have been shown to be effective in learning\nmany useful properties of the Hamiltonian (Toth et al., 2020; Chen et al., 2020; Zhong et al., 2020a; Sanchez-Gonzalez et al., 2019; Jin et al., 2020). In their experiments, it has been shown that HNN and its variants work well on learning conservation laws or continuous-time translational symmetry, enable the learning of complex systems stably by incorporating numerical integrators and generalize on multiple initial conditions or controls for the given system. However, there is limited work regarding a trained model that works well on totally new systems governed by the same physical law with novel physical parameters.\nTo consider the second condition, we propose that meta-learning, which aims to train a model well generalized on novel data from observing a few examples, can be a potential key to learning a functional of Hamiltonian as a data-driven method. 
There have been several representative categories of meta-learning algorithms, such as metric-based methods (Snell et al., 2017; Sung et al., 2018), black-box methods (Santoro et al., 2016; Bertinetto et al., 2019), and gradient-based methods (Rusu et al., 2019; Flennerhag et al., 2020). Among these, we focus especially on the gradient-based methods, which are readily compatible with any differentiable model and flexibly applicable to a wide variety of learning problems (Finn et al., 2017; Xu et al., 2018; Hospedales et al., 2020).

One of the most successful gradient-based algorithms is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which consists of a task-specific adaptation process and a meta-optimization process. The key observation supporting its potential is the resemblance between these processes and the identification of the physical laws underlying a Hamiltonian. The schematic is shown in Figure 1. The task-adaptation process, which adapts the initial model parameters to a task-specific train set, resembles the process of adapting hypothesized governing equations to observations of several physical systems. The meta-optimization process, which updates the initial model parameters by validating each set of task-specific adapted parameters on a task-specific test set, is similar to correcting the hypothesized governing equations by validating each system-specific Hamiltonian on new data from the corresponding physical systems. In addition, Raghu et al. (2020) proposed that the recent success of these meta-learning algorithms is due to providing a high-quality shared representation across tasks rather than learning initial model parameters that encourage rapid adaptation (Finn et al., 2017). This hypothesis supports our suggestion that a meta-learner can be efficient at identifying the shared representation of a Hamiltonian. From this point of view, we experiment on several types of physical systems to verify whether these meta-learning algorithms are beneficial to our desired learning problems. Our contributions are summarized as follows:

• We formulate the problem of identifying the shared representation of an unknown Hamiltonian as a meta-learning problem.

• For learning to identify Hamiltonian representations, we incorporate the HNN architecture into meta-learning algorithms.

• After meta-training the meta-learner, we adapt the model to new systems by learning from partial observations and predict the dynamics of the systems as a vector field in phase space.

• We evaluate our method on several types of physical systems to explore its efficiency under various experimental settings." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 HAMILTONIAN NEURAL NETWORKS", "text": "In Hamiltonian mechanics, the state of a system can be described by a vector of canonical coordinates, $x = (q, p)$, consisting of positions $q = (q_1, q_2, \ldots, q_n)$ and their conjugate momenta $p = (p_1, p_2, \ldots, p_n)$ in phase space, where $n$ is the number of degrees of freedom of the system. The time evolution of the system is then governed by Hamilton's equations, $\dot{x} = \big(\frac{\partial H}{\partial p}, -\frac{\partial H}{\partial q}\big) = \Omega \nabla_x H(x)$, where $H(x) : \mathbb{R}^{2n} \to \mathbb{R}$ is the Hamiltonian, which is conserved during the process, and $\Omega = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}$ is a $2n \times 2n$ skew-symmetric matrix.
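As a concrete illustration of this structure (a minimal sketch of ours, not the authors' code), the symplectic gradient of any differentiable Hamiltonian can be obtained by automatic differentiation. The pendulum Hamiltonian below is the shared expression from the introduction, with illustrative default parameter values:

import torch

def pendulum_H(x, m=1.0, l=1.0, g=9.8):
    # Shared expression: H = p^2 / (2 m l^2) + m g l (1 - cos q).
    q, p = x[..., 0], x[..., 1]
    return p ** 2 / (2 * m * l ** 2) + m * g * l * (1 - torch.cos(q))

def symplectic_gradient(H, x):
    # dx/dt = (dH/dp, -dH/dq) = Omega grad_x H(x), via autograd.
    x = x.detach().requires_grad_(True)
    grad_H = torch.autograd.grad(H(x).sum(), x, create_graph=True)[0]
    dq_dt, dp_dt = grad_H[..., 1], -grad_H[..., 0]
    return torch.stack([dq_dt, dp_dt], dim=-1)

Replacing pendulum_H with a neural network H_theta gives exactly the vector field that the HNN loss below penalizes against the observed time derivatives.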
From Hamilton's equations, the Hamiltonian vector field in phase space, which is interpreted as the time evolution ẋ of the system, is the symplectic gradient of the Hamiltonian, Ω ∇_x H(x), determined by the Hamiltonian function and the state of the system itself. The trajectory of the state can then be computed by integrating the symplectic gradient of the Hamiltonian. If the Hamiltonian does not depend on the time variable, it remains constant during the time evolution, because the symplectic gradient is everywhere orthogonal to the gradient of the Hamiltonian, so moving along it leaves the Hamiltonian unchanged (Arnol'd, 2013). In (Greydanus et al., 2019), the Hamiltonian function is approximated by a neural network, H_θ, called the HNN. To keep the learned Hamiltonian constant along motions, the loss of the HNN is defined by the distance between the true vector field and the symplectic gradient of H_θ,

L_HNN = ‖ẋ − Ω ∇_x H_θ(x)‖_2^2.   (1)" }, { "heading": "2.2 MODEL-AGNOSTIC META-LEARNING AND FEATURE REUSE HYPOTHESES", "text": "A key assumption behind MAML is that separately trained models for each task share meta-initial parameters θ that can be improved rapidly for any task (Finn et al., 2017). Suppose that each given task T_i, with data D_i = {D_i^tr, D_i^te}, is drawn from a task distribution, T_i ∼ p(T). The learning algorithm consists of a bi-level optimization process: (inner loop) the task-specific adaptation to each train set,

θ'_i = θ − α ∇_θ L_{T_i}(D_i^tr; θ),   (2)

and (outer loop) the meta-optimization on each test set,

θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(D_i^te; θ'_i),   (3)

where θ can be the parameters of any differentiable model that is expected to learn the shared representations of various tasks, and α and β are the step sizes of the inner and outer loops, respectively.
Meanwhile, Raghu et al. (2020) observed that during the inner-loop process, the task-specific change of the model parameters θ comes mostly from the last layer of the network, whereas the body of the model hardly changes. Therefore, they hypothesized that the body of the model behaves as a shared representation across different tasks, whereas the head of the model behaves as task-specific parameters, which is called the feature reuse hypothesis. Based on this hypothesis, they proposed a gradient-based meta-learning algorithm called Almost No Inner Loop (ANIL), which slightly modifies MAML by freezing all but the last layer of the network during the inner-loop process. They showed that ANIL performs on par with or better than MAML on several benchmarks and has a computational benefit over its counterpart. In this algorithm, when the meta-learner consists of l layers, θ = (θ^(1), ..., θ^(l−1), θ^(l)), the inner-loop update is modified as

θ'_i = (θ^(1), ..., θ^(l−1), θ^(l) − α ∇_{θ^(l)} L_{T_i}(D_i^tr; θ)).   (4)

As many physical processes can be expressed as an invariant shared expression of the Hamiltonian together with variable physical parameters, such a meta-learning scheme, which encourages separating the invariant part from the varying part, can be expected to learn new systems more efficiently with a relatively small number of parameter updates." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 IDENTIFYING SHARED REPRESENTATION OF HAMILTONIAN VIA META-LEARNER", "text": "The main goal of our study is to train a model to identify the shared representation of the Hamiltonian using observations of dynamics from several systems that are assumed to be governed by the same physical law with different physical parameters.
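Before formalizing the meta-learning setup, the objective in Equation 1 can be made concrete in code: the symplectic gradient of a learned H_θ is obtained by automatic differentiation and compared against the observed time-derivatives. The following is a minimal sketch, assuming PyTorch; the function names are illustrative and not taken from the paper's released implementation.

```python
import torch

def symplectic_gradient(hamiltonian, x):
    # x: (batch, 2n) canonical coordinates (q, p); returns Omega @ dH/dx,
    # i.e., the predicted vector field (dH/dp, -dH/dq).
    x = x.requires_grad_(True)
    H = hamiltonian(x).sum()  # scalar; summing gives per-sample gradients
    dH = torch.autograd.grad(H, x, create_graph=True)[0]
    n = x.shape[-1] // 2
    dHdq, dHdp = dH[..., :n], dH[..., n:]
    return torch.cat([dHdp, -dHdq], dim=-1)

def hnn_loss(hamiltonian, x, x_dot):
    # Equation 1: squared distance between true and predicted vector fields.
    return ((x_dot - symplectic_gradient(hamiltonian, x)) ** 2).sum(-1).mean()
```

The same loss is reused as the task-specific objective in the inner and outer loops defined next.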
From a meta-learning point of view, each system is regarded as a task T_i, where the physical parameters of the system are drawn from the distribution p(T). The observations of the system T_i can be split into D_i = {D_i^tr, D_i^te}, where D_i^tr and D_i^te denote the task-specific train and test sets, respectively. The observations in both D_i^tr and D_i^te are given as a set of tuples of canonical coordinates x = (q, p) and their time-derivatives ẋ = (q̇, ṗ) as the ground truth.
For each system, the task-specific model parameters are obtained from Equation 2 or Equation 4 by computing the task-specific loss, using Equation 1, on each train set D_i^tr,

L_{T_i}(D_i^tr; θ) = Σ_{(x, ẋ) ∼ D_i^tr} ‖ẋ − Ω ∇_x H_θ(x)‖_2^2,   (5)

and the meta-optimization operates on a batch of systems as in Equation 3 by minimizing the loss over the batch of physical parameters sampled from p(T). Each loss is computed by evaluating the task-specific adapted model parameters θ'_i on the corresponding test set D_i^te,

Σ_{T_i ∼ p(T)} L_{T_i}(D_i^te; θ'_i) = Σ_{T_i ∼ p(T)} Σ_{(x, ẋ) ∼ D_i^te} ‖ẋ − Ω ∇_x H_{θ'_i}(x)‖_2^2.   (6)

Depending on the inner-loop method, we call the algorithm Hamiltonian Model-Agnostic Meta-Learning (HAMAML) when using Equation 2, and Hamiltonian Almost No Inner-Loop (HANIL) when using Equation 4." }, { "heading": "3.2 LEARNING NEW SYSTEM DYNAMICS FROM PARTIAL OBSERVATIONS", "text": "It can be expected that if a learner is efficient in identifying the underlying physical nature of an unknown process, the model can appropriately predict the dynamics of novel systems from partial observations. To evaluate whether a learner is efficient in identifying the representation of an unknown Hamiltonian, we set up the following validation scheme on several types of physical systems. After meta-training, novel systems are given as meta-test sets D_new = {D_new^tr, D_new^te} generated from the system's Hamiltonian with novel physical parameters T_new ∼ p(T). The train set D_new^tr is used to adapt the learner to the new system dynamics (θ → θ'_new). The newly observed systems are given as K trajectories, a setting reminiscent of the K-shot learning problem (Snell et al., 2017). Starting from K initial states, each trajectory is obtained by integrating the time-derivatives of the system from 0 s to T s with sampling rate L/T, which yields a rolled-out sequence of L states. The test set D_new^te is used for validating the performance of the adapted model θ'_new." }, { "heading": "4 RELATED WORKS", "text": "" }, { "heading": "4.1 LEARNING DYNAMICS WITH NEURAL NETWORKS", "text": "Several works on learning dynamical systems use various forms of graph neural networks (Battaglia et al., 2016; Santoro et al., 2017; Sanchez-Gonzalez et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Sanchez-Gonzalez et al., 2020), where each node usually represents the state of an individual object and each edge between two nodes represents their relation. (Chen et al., 2018) introduced a new type of deep architecture, which can be considered a differentiable ordinary differential equation (ODE) solver parametrized by neural networks, called neural ODEs. Most related to our work, (Greydanus et al., 2019) introduced Hamiltonian Neural Networks (HNNs) to learn the dynamics of Hamiltonian systems by parameterizing the Hamiltonian with neural networks.
HNNs have been further developed by combining them with graph networks for learning interacting systems (Sanchez-Gonzalez et al., 2019) or with generative networks for learning a Hamiltonian from high-dimensional data (Toth et al., 2020). (Chen et al., 2020) proposed symplectic recurrent neural networks to handle observed trajectories of complex Hamiltonian systems by incorporating a leapfrog integrator into recurrent neural networks. (Zhong et al., 2020a) considered additional control terms to learn conservative Hamiltonian dynamics, and later dissipative Hamiltonian dynamics (Zhong et al., 2020b). For learning a Hamiltonian in an intrinsic way, (Jin et al., 2020) proposed symplectic networks, whose architecture consists of compositional modules that preserve the symplectic structure. In most existing studies on learning Hamiltonians, one model is trained per system, and the evaluations are restricted to the same physical system as during training, with only new initial conditions. Our focus is learning the shared representation of the Hamiltonian, which can be reused to predict new, related systems governed by the same physical law that underlies the previously observed systems." }, { "heading": "4.2 IDENTIFYING PHYSICAL LAWS OR GOVERNING EQUATIONS FROM DATA", "text": "In earlier works, symbolic regression was utilized to automatically search for mathematical expressions of governing equations from observed data (Bongard & Lipson, 2007; Schmidt & Lipson, 2009). These methods have been developed further by constructing dictionaries of candidate nonlinear functions (Brunton et al., 2016; Mangan et al., 2017) or partial differentiations (Rudy et al., 2017; Schaeffer, 2017) and incorporating a sparsity-promoting algorithm to extract the governing equations. Recently, they have been refined by combining them with neural networks (Udrescu & Tegmark, 2020; Both et al., 2019; Atkinson et al., 2019; Sahoo et al., 2018) or graph structures (Cranmer et al., 2019; 2020). In contrast to our work, these existing methods for identifying governing representations rely on symbolic representations, under the assumption that the unknown physical laws can be expressed by combinations of known mathematical terms." }, { "heading": "4.3 GRADIENT-BASED META-LEARNING", "text": "The core learning method of our work is related to gradient-based meta-learning algorithms (Finn et al., 2017). This line of work has been developed by training additional learning rates (Li et al., 2017), removing the second-order derivatives (Nichol et al., 2018), and stabilizing the training procedure (Antoniou et al., 2019). Meanwhile, in another direction of research, the model parameters are separated into a shared-representation part, which is mainly updated in the outer loop, and a task-specific varying part, which is mainly updated in the inner loop (Lee & Choi, 2018; Javed & White, 2019; Flennerhag et al., 2020; Raghu et al., 2020; Zhou et al., 2020). Our study mainly focuses on whether meta-learning can identify the shared expression of the Hamiltonian from several observed systems. We choose MAML as the representative meta-learning algorithm. To verify whether a separative learning scheme improves the model's ability to identify the shared expression of the Hamiltonian, we choose ANIL as the representative of meta-learning with a separative learning scheme: since ANIL differs from MAML only in its inner-loop update, without any structural difference, comparing the two suffices to verify whether the separative learning scheme itself is beneficial."
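Before turning to the experiments, the two inner-loop variants of Section 3.1 can be contrasted in code. The sketch below is a simplified first-order version, assuming PyTorch and the `hnn_loss` sketch above; a full HAMAML/HANIL implementation would additionally differentiate the outer loop through these updates, and selecting the last layer's parameters via `params[-2:]` is an illustrative assumption about the parameter ordering.

```python
import torch

def inner_adapt(model, x, x_dot, alpha=0.002, steps=5, anil=True):
    # Task-specific adaptation: Eq. 2 (HAMAML, all parameters) vs.
    # Eq. 4 (HANIL, only the last layer's weight and bias).
    params = list(model.parameters())
    adapted = params[-2:] if anil else params
    for _ in range(steps):
        loss = hnn_loss(model, x, x_dot)          # task loss, Eq. 5
        grads = torch.autograd.grad(loss, adapted)
        with torch.no_grad():
            for p, g in zip(adapted, grads):
                p -= alpha * g                    # in-place gradient step
    return model
```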
}, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 TYPES OF PHYSICAL SYSTEMS", "text": "Spring-Mass. Hamiltonian of the system is described by H = p 2 2m + k(q−q0)2\n2 , where the physical parameters m, k, and q0 are the mass, spring constant, and equilibrium position, respectively. q and p are the position and conjugate momentum of the system, respectively.\nPendulum. Hamiltonian of the system is described by H = p 2\n2ml2 + mgl(1 − cos(q − q0)), where the physical parameters m, l, and q0 are the mass, pendulum length, and equilibrium angle from the vertical, respectively. q and p are the pendulum angle from the vertical and conjugate momentum of the system, respectively. g denotes the gravitational acceleration.\nKepler problem. The system consists of two objects that are attracted to each other by gravitational force. Hamiltonian of the system is described by H = |p| 2\n2m − GMm |q−q0| , where the physical parameters\nM andm are the mass of the two bodies and we set the object ofM to be stationary at q0 = (qx, qy) of the coordinate system. The state is represented by q = (q1, q2) and p = (p1, p2) which denote the position and conjugate momentum of the object m in two-dimensional space, respectively. G denotes the gravitational constant." }, { "heading": "5.2 EXPERIMENTAL SETTINGS", "text": "Datasets. During the meta-training, we generate 10,000 tasks for meta-train sets of all systems. Each meta-train set consists of task-specific train set and test set given by 50 randomly sampled point states x and their time-derivatives ẋ in phase space with task-specific physical parameters. The distributions of the sampled states and physical parameters for each physical process are described in Appendix A.1. During the meta-testing, the distributions of sampled states and physical parameters are the same as in the meta-training stage. To evaluate learners on the efficacy of identifying Hamiltonian of the new systems, meta-test sets are constructed as the following ways.\n(1) Observing the new systems as point dynamics in phase space: Dtrnew consists of randomly sampled points in phase space with K = {25, 50} and L = 1, and Dtenew consists of equally fine-spaced points in phase space.\n(2) Observing the new systems as trajectories in phase space: Dtrnew consists of randomly sampled trajectories and Dtenew consists of equally fine-spaced points in phase space. The number of sampled trajectories with K = {5, 10} and L = 5 sequences during T = 1s. The trajectories are obtained by integrating the symplectic gradient of the Hamiltonian from initial states using the adaptive step-size Runge-Kutta method (Hairer et al., 1993). Fine-spaced test sets consist of equally spaced grids for each coordinate in the region of the phase space where we sampled the point states. More details about data sets are represented in the Appendix A.1.\nBaselines. We took several learners as baselines to assess the efficacy of our proposed methods, such as (1) training HNN on Dtrnew from scratch (random initialization), (2) pretrained HNN across all of the meta-train set, (3) meta-trained naive fully connected neural networks (Naive NN), which are given the inputs x and the outputs ẋ with MAML, and (4) with ANIL. The architectures and training details of the baselines and our methods are represented in the Appendix A.2.\nEvaluations. 
Evaluations. The evaluation metric is the average of the mean squared errors (MSE) between the true vector fields ẋ and the symplectic gradients of the predicted Hamiltonian, Ω ∇_x H_θ(x), over the test sets D_new^te of new systems, after adapting the learners to the corresponding train sets D_new^tr. For all types of systems, the MSEs over 10 randomly sampled new systems are averaged to evaluate the learners after 10 gradient steps of adaptation to each D_new^tr. We also predict state trajectories and the corresponding energies by integrating the output vector fields of learners that were adapted to a new system by observing samples of its point dynamics. Following (Greydanus et al., 2019), we evaluate the MSEs of the predicted trajectories and energies against their corresponding ground truth at each time step. The predicted values are computed by the learners adapted to 50 randomly sampled point dynamics of new systems in phase space after 50 gradient steps." }, { "heading": "5.3 RESULTS", "text": "Quantitative results. The quantitative results of meta-testing performance are shown in Table 1. HANIL outperforms the others in all experimental settings and all types of physical systems. When observing 25-shot point dynamics and 5-shot trajectories, the number of given samples in phase space is the same, and the same is true for 50-shot point dynamics and 10-shot trajectories (L = 5). Comparing point dynamics and trajectories with the same number of given samples, learning from point dynamics is slightly more accurate than learning from trajectories.
Predicted vector fields. In Figure 2, the pendulum dynamics predicted by adapting the learners to partial observations are shown as phase portraits at the corresponding gradient steps. Those of the spring-mass systems are shown in Figure 5 in Appendix A.3. In Figure 2 (a), the initial output vector fields of the learners are shown. During adaptation to the given observations, the output vectors of each learner evolve to fit the observations based on the learner's own prior belief or representation learned from the meta-train set. In detail, the HNN trained from scratch fails to predict the dynamics of new systems from partial observations. At least hundreds of states are needed as a train set, and thousands of gradient steps are required, to train an HNN on a single system (Greydanus et al., 2019); in our meta-testing, however, at most 50 states are given to adapt to the new system with few gradient steps. Thus, the number of samples and gradient steps is too small to train an HNN without any inductive bias. The pretrained HNN also fails, even though it is trained on the meta-train sets. A model simply pretrained across all tasks may output the averaged values of the time-derivatives at each state point. As the time-derivatives at each state vary sensitively with the physical parameters of the systems, such simple averages are likely to exhibit very different patterns from the actual vector fields. Therefore, such a pretrained model is not efficient at learning an appropriate shared representation across the systems. The Naive NNs, with both MAML and ANIL, also fail to predict the dynamics, because naive NNs struggle to grasp the continuous and conservative structure of the vector fields when the amount of given data is insufficient to adapt to new systems. Thus, their phase portraits look discontinuous or dissipative. HANIL can accurately predict the dynamics of new systems from partial observations within a few gradient steps, while HAMAML is slower than HANIL to adapt to the true vector fields because of the larger number of parameters updated in the adaptation process. (The trajectory predictions discussed next are obtained by integrating the adapted models' output vector fields; a sketch of this rollout is given below.)
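A minimal sketch of this rollout, assuming scipy and a `vector_field` wrapper that maps a numpy state to the adapted model's predicted time-derivative (the solver choice mirrors the adaptive Runge-Kutta integration used for data generation):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rollout(vector_field, x0, t_span=(0.0, 20.0), n_points=200):
    # Integrate the adapted model's predicted dx/dt (e.g., the
    # symplectic_gradient sketch above) to obtain a state trajectory.
    t_eval = np.linspace(t_span[0], t_span[1], n_points)
    sol = solve_ivp(lambda t, x: vector_field(x), t_span, x0,
                    t_eval=t_eval, rtol=1e-6)  # adaptive Runge-Kutta (RK45)
    return sol.t, sol.y.T                      # times, states of shape (n_points, 2n)
```

Energies along the trajectory can then be obtained by evaluating the adapted H_θ on the returned states.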
Trajectory predictions. In Figure 3, we also evaluate the learners adapted to the new systems through their predictions of the states and corresponding energies, starting from the initial states, over 20 s. HANIL (blue lines) adapted to new systems predicts the state and energy trajectories with relatively small errors from the ground truth (black dashed lines), whereas the others fail to predict the right trajectories and energies of the system at each time step.
Ablation study. We conduct ablation studies to verify the efficacy of the separative learning scheme by comparing the learning curves during task adaptation and the effects of the size of the meta-train set. We use the same MSE evaluation as described in Section 5.2, but vary the number of gradient steps from 0 to 50 to obtain the learning curves, and vary the number of tasks observed during meta-training from 10 to 10,000 to assess the effect of the meta-train set size. In addition, in order to see how the number of parameters updated during the inner loop affects performance, we add another meta-learned learner for comparison, called HANIL-Inverse (HANIL-INV). This method updates all but the first layer of the network during the inner loop, while HANIL updates only the last layer. Therefore, during the inner loop, the number of updated parameters of HANIL-INV lies between those of HAMAML and HANIL. In Figure 4, the results of the ablation studies on pendulum systems are shown, where the dynamics of new systems are given as 50 point dynamics. HANIL-INV converges to an error between those of HAMAML and HANIL, while the HNN from scratch and the pretrained HNN hardly improve in either study. Comparing the meta-trained learners with the others shows that meta-learning improves the ability to predict new systems through learning the shared representation. Comparing HAMAML with the meta-trained learners that use the separative learning scheme, such as HANIL-INV and HANIL, it appears beneficial to learn the shared representation and the physical parameters separately through the separative learning scheme. In addition, since the updated parameters are limited to the last layer during the inner loop, HANIL converges with the lowest errors; this is likely the most efficient way to generalize to related systems with the same physical law, as fewer parameter updates are required for a new system. Meanwhile, in Figure 4 (b), the effect of the number of tasks on the meta-trained learners begins to be noticeable between 200 and 500 tasks and slowly saturates at around 10,000." }, { "heading": "6 CONCLUSIONS", "text": "By observing the resemblance between two seemingly unrelated problems, identifying the Hamiltonian and meta-learning, we formulate the problem of identifying the Hamiltonian as a meta-learning problem. We incorporate the HNN, an efficient architecture for learning Hamiltonians, into meta-learning algorithms in order to discover the shared representation of an unknown Hamiltonian across observed physical systems. Comparing against the baseline models in various experiments, we show that our proposed methods, especially HANIL, are efficient at learning the dynamics of totally new systems governed by the same underlying physical laws.
The results indicate that our proposed methods can extract meta-transferable knowledge, which can be regarded as the physical nature shared across the physical systems observed during meta-training." }, { "heading": "A APPENDIX", "text": "A.1 SYSTEM DETAILS
Spring-Mass. The physical parameters are randomly sampled from (m, k, q_0) ∈ [0.5, 5] × [0.5, 5] × [−5, 5]. The initial states are randomly sampled from (q, p) ∈ [−10, 10]^2. Fine-spaced test sets of new systems consist of 50 equally spaced grid values for each coordinate in the region of phase space where we sampled the point states. Therefore, there are 2,500 grid points in the test sets.
Pendulum. The physical parameters are randomly sampled from (m, l, q_0) ∈ [0.5, 5] × [0.5, 5] × [−π, π]. We fix the gravitational acceleration at g = 1. The initial states are randomly sampled from (q, p) ∈ [−2π, 2π] × [−20, 20]. Fine-spaced test sets of new systems consist of 50 equally spaced grid values for each coordinate in the region of phase space where we sampled the point states. Therefore, there are 2,500 grid points in the test sets.
Kepler Problem. The physical parameters are randomly sampled from (M, m, q_x, q_y) ∈ [0.5, 2.5]^2 × [−2.5, 2.5]^2. We fix the gravitational constant at G = 1. The initial states are randomly sampled from (q, p) ∈ [−5, 5]^4. Fine-spaced test sets of new systems consist of 10 equally spaced grid values for each coordinate in the region of phase space where we sampled the point states. Therefore, there are 10,000 grid points in the test sets.
A.2 IMPLEMENTATION DETAILS
For all tasks, we took the baseline model to be a fully connected neural network of size: state dimensions - 64 Softplus - 64 Softplus - 64 Softplus - state dimensions, and the HNN model to be a fully connected neural network of size: state dimensions - 64 Softplus - 64 Softplus - 64 Softplus - 1 dimension. We searched the hyperparameters by exploring hidden dimension sizes in {32, 64, 128, 256}, the number of layers in {1, 2, 3, 4}, and the activation function in {Sigmoid, Tanh, ReLU, Softplus}. During meta-training or pretraining, we use the Adam optimizer (Kingma & Ba, 2015) in the outer loop with a learning rate of 0.001 and gradient descent in the inner loop with a learning rate of 0.002. For all systems, we set the task batch size to 10, the number of inner gradient updates to 5, and the number of outer-loop episodes to 100 for meta-optimization. During meta-testing, we also use the Adam optimizer with a learning rate of 0.002. No weight decay is used in any setting.
A.3 ADDITIONAL RESULTS
More results and a video are available at https://github.com/7tl7qns7ch/Identifying-Physical-Law." } ]
2021
null
SP:49e648763ccbfd619a4ee8286a36d85096176cc6
[ "The paper investigates the oft-overlooked aspect of knowledge distillation (KD) -- why it works. The paper highlights the ability of KD for transferring not just the soft labels, but the inductive bias (assumptions inherent in the method, e.g. LSTM's notion of sequentiality, and CNN's translational invariance/equivariance) from the student so that the student exhibits, to an extent, the teacher's generalization properties as well. The paper explores doing KD between LSTMs and several versions of Transformers (with varying structural constraints) on a subject-verb-agreement dataset, and between CNNs and MLPs on MNIST and corrupted MNIST. Compared to prior work showing that better teacher performance lead to better student performance, this paper also shows that the student's performance on different aspects becomes more similar to the teacher's -- (1) if the teacher is strong on metric A and weak on metric B compared to a student on its own, the student can become stronger on A and weaker on B when distilled using the teacher; (2) if the teacher can generalize well to a separate, previously unseen dataset but the student generalizes poorly on its own, after distillation the student can generalize much better than it can possibly learn to on its own." ]
Having the right inductive biases can be crucial in many tasks or scenarios where data or computing resources are a limiting factor, or where training data is not perfectly representative of the conditions at test time. However, defining, designing, and efficiently adapting inductive biases is not necessarily straightforward. The inductive biases of a model affect its generalization behavior and influence the solution it converges to from different aspects. In this paper, we investigate the power of knowledge distillation in transferring the effects of the inductive biases of a teacher model to a student model when they have different architectures. We consider different families of models, LSTMs vs. Transformers and CNNs vs. MLPs, in the context of tasks and scenarios with linguistic and vision applications, where having the right inductive biases is critical. We train our models in different setups: no knowledge distillation, self-distillation, and distillation using a teacher with a better inductive bias for the task at hand. We show that in the latter setup, compared to no distillation and self-distillation, we can not only improve the performance of the students, but the solutions they converge to also become similar to those of their teachers with respect to a wide range of properties, including different task-specific performance metrics, the per-sample behavior of the models, representational similarity, how the representational space of the models evolves during training, performance on out-of-distribution datasets, confidence calibration, and finally whether the converged solutions fall within the same basins of attraction.
[]
[ { "authors": [ "Samira Abnar", "Lisa Beinborn", "Rochelle Choenni", "Willem Zuidema" ], "title": "Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains", "venue": "In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2019 }, { "authors": [ "Sungsoo Ahn", "Shell Xu Hu", "Andreas Damianou", "Neil D Lawrence", "Zhenwen Dai" ], "title": "Variational information distillation for knowledge transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Rohan Anil", "Gabriel Pereyra", "Alexandre Tachard Passos", "Robert Ormandi", "George Dahl", "Geoffrey Hinton" ], "title": "Large scale distributed neural network training through online distillation", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Nadav Cohen", "Or Sharir", "Amnon Shashua" ], "title": "On the expressive power of deep learning: A tensor analysis", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Mark William Craven" ], "title": "Extracting Comprehensible Models from Trained Neural Networks", "venue": "PhD thesis, University of Wisconsin-Madison,", "year": 1996 }, { "authors": [ "Morris H DeGroot", "Stephen E Fienberg" ], "title": "The comparison and evaluation of forecasters", "venue": "Journal of the Royal Statistical Society: Series D (The Statistician),", "year": 1983 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Jesse Dodge", "Gabriel Ilharco", "Roy Schwartz", "Ali Farhadi", "Hannaneh Hajishirzi", "Noah Smith" ], "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "venue": "arXiv preprint arXiv: 2002.06305,", "year": 2020 }, { "authors": [ "Jeffrey L. Elman" ], "title": "Finding structure in time", "venue": "Cognitive Science,", "year": 1990 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Markus Freitag", "Yaser Al-Onaizan", "Baskaran Sankaran" ], "title": "Ensemble distillation for neural machine", "venue": "translation. CoRR,", "year": 2017 }, { "authors": [ "Nicholas Frosst", "Geoffrey E. 
Hinton" ], "title": "Distilling a neural network into a soft decision tree", "venue": "In Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence,", "year": 2017 }, { "authors": [ "Tommaso Furlanello", "Zachary Chase Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born-again neural networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Michael Hahn" ], "title": "Theoretical limitations of self-attention in neural sequence models", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Jie Hao", "Xing Wang", "Baosong Yang", "Longyue Wang", "Jinfeng Zhang", "Zhaopeng Tu" ], "title": "Modeling recurrence for transformer", "venue": "arXiv preprint arXiv:1904.03092,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Nitish Shirish Keskar", "Bryan McCann", "Lav R Varshney", "Caiming Xiong", "Richard Socher" ], "title": "Ctrl: A conditional transformer language model for controllable generation", "venue": "arXiv preprint arXiv:1909.05858,", "year": 2019 }, { "authors": [ "Yoon Kim", "Alexander M. Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Adhiguna Kuncoro", "Chris Dyer", "Laura Rimell", "Stephen Clark", "Phil Blunsom" ], "title": "Scalable syntaxaware language models using knowledge distillation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Adhiguna Kuncoro", "Lingpeng Kong", "Daniel Fried", "Dani Yogatama", "Laura Rimell", "Chris Dyer", "Phil Blunsom" ], "title": "Syntactic structure distillation pretraining for bidirectional encoders", "venue": "arXiv preprint arXiv:2005.13482,", "year": 2020 }, { "authors": [ "Aarre Laakso", "Garrison Cottrell" ], "title": "Content and cluster analysis: Assessing representational similarity in neural systems", "venue": "Philosophical Psychology,", "year": 2000 }, { "authors": [ "Phong Le", "Willem Zuidema" ], "title": "The inside-outside recursive neural network model for dependency parsing", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Tal Linzen", "Emmanuel Dupoux", "Yoav Goldberg" ], "title": "Assessing the ability of lstms to learn syntaxsensitive dependencies", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Improving multi-task deep neural networks via knowledge distillation for natural language understanding", "venue": "ArXiv, abs/1904.09482,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In Proceedings of the 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Sihui Luo", "Xinchao Wang", "Gongfan Fang", "Yao Hu", "Dapeng Tao", "Mingli Song" ], "title": "Knowledge amalgamation from heterogeneous networks by common feature learning", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Niru Maheswaranathan", "Alex Williams", "Matthew Golub", "Surya Ganguli", "David Sussillo" ], "title": "Universality and individuality in neural dynamics across large populations of recurrent networks. In Advances in neural information processing systems 32, NeurIPS’19, 2019", "venue": "URL https://arxiv.org/abs/1907.08549", "year": 1907 }, { "authors": [ "R. Thomas McCoy", "Robert Frank", "Tal Linzen" ], "title": "Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence", "venue": "URL https://arxiv.org/abs/2001.03632", "year": 2001 }, { "authors": [ "Tom M. Mitchell" ], "title": "The need for biases in learning generalizations", "venue": "Technical report,", "year": 1980 }, { "authors": [ "Hossein Mobahi", "Mehrdad Farajtabar", "Peter L Bartlett" ], "title": "Self-distillation amplifies regularization in hilbert space", "venue": "arXiv preprint arXiv:2002.05715,", "year": 2020 }, { "authors": [ "Norman Mu", "Justin Gilmer" ], "title": "Mnist-c: A robustness benchmark for computer vision", "venue": "arXiv preprint arXiv:1906.02337,", "year": 2019 }, { "authors": [ "Rafael Rodrigo Mueller", "Simon Kornblith", "Geoffrey E. 
Hinton" ], "title": "When does label smoothing help", "venue": "In Advances in Neural Information Processing Systems 32,", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Hanie Sedghi", "Chiyuan Zhang" ], "title": "What is being transferred in transfer learning", "venue": "arXiv preprint arXiv:2008.11687,", "year": 2020 }, { "authors": [ "Wonpyo Park", "Dongju Kim", "Yan Lu", "Minsu Cho" ], "title": "Relational knowledge distillation", "venue": "In Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Mary Phuong", "Christoph Lampert" ], "title": "Towards understanding knowledge distillation", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": "arXiv preprint arXiv:1910.01108,", "year": 2019 }, { "authors": [ "Andrew Michael Saxe", "Yamini Bansal", "Joel Dapello", "Madhu Advani", "Artemy Kolchinsky", "Brendan Daniel Tracey", "David Daniel Cox" ], "title": "On the information bottleneck theory of deep learning", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "H.S. Seung", "H. Sompolinsky", "N. Tishby" ], "title": "Learning curves in large neural networks", "venue": "In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, COLT ’91,", "year": 1991 }, { "authors": [ "Richard Socher", "Christopher D. Manning", "Andrew Y. Ng" ], "title": "Learning continuous phrase representations and syntactic parsing with recursive neural networks", "venue": "Proceedings of the NeurIPS’10 Deep Learning and Unsupervised Feature Learning Workshop,", "year": 2010 }, { "authors": [ "Suraj Srinivas", "R. 
Venkatesh Babu" ], "title": "Data-free parameter pruning for deep neural networks", "venue": "In Proceedings of the 26th British Machine Vision Conference,", "year": 2015 }, { "authors": [ "Yu Cheng", "Zhe Gan", "Jingjing Liu" ], "title": "Patient knowledge distillation for bert model compression", "venue": "EMNLP/IJCNLP,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Xu Tan", "Yi Ren", "Di He", "Tao Qin", "Zhou Zhao", "Tie-Yan Liu" ], "title": "Multilingual neural machine translation with knowledge distillation", "venue": "arXiv preprint arXiv:1902.10461,", "year": 2019 }, { "authors": [ "Jiaxi Tang", "Rakesh Shivanna", "Zhe Zhao", "Dong Lin", "Anima Singh", "Ed H Chi", "Sagar Jain" ], "title": "Understanding and improving knowledge distillation", "venue": "arXiv preprint arXiv:2002.03532,", "year": 2020 }, { "authors": [ "Raphael Tang", "Yao Lu", "Linqing Liu", "Lili Mou", "Olga Vechtomova", "Jimmy Lin" ], "title": "Distilling taskspecific knowledge from bert into simple neural networks", "venue": "ArXiv, abs/1903.12136,", "year": 2019 }, { "authors": [ "Ke Tran", "Arianna Bisazza", "Christof Monz" ], "title": "The importance of being recurrent for modeling hierarchical structure", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Frederick Tung", "Greg Mori" ], "title": "Similarity-preserving knowledge distillation", "venue": "ArXiv, abs/1907.09682,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems 30,", "year": 2017 }, { "authors": [ "D.H. Wolpert", "W.G. Macready" ], "title": "No free lunch theorems for optimization", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 1997 }, { "authors": [ "Li Yuan", "Francis EH Tay", "Guilin Li", "Tao Wang", "Jiashi Feng" ], "title": "Revisit knowledge distillation: a teacher-free framework", "venue": null, "year": 1909 }, { "authors": [ "Phuong", "Lampert" ], "title": "2019) studies KD from a theoretical point of view in a simplified setting where the task is a binary classification, and teacher and student are linear models. They attribute the success of distillation to three main factors: (1) data geometry", "venue": null, "year": 2020 }, { "authors": [ "2019 Liu et al", "2015 Hinton et al", "2019 Tan et al", "Kim", "2016). 
Rush" ], "title": "Previous work has examined the effectiveness of KD in different settings: where the teacher is bigger than the student, but both have similar building blocks (Hinton et al., 2015; Kim & Rush, 2016; Sanh et al., 2019); where teacher and student are of similar size and architecture", "venue": "(Furlanello et al.,", "year": 2017 }, { "authors": [ "2019 Tang et al", "2019 Luo et al", "Ahn" ], "title": "KD has also been proposed as an interpretation technique, where the knowledge of a big complex model is distilled into a more interpretable model (Craven", "venue": "Frosst & Hinton,", "year": 2019 }, { "authors": [ "Hinton" ], "title": "2015), which is based on the Kullback-Leibler divergence between output distributions of the teacher, i.e., soft targets, and the output distributions of the student. The output distributions of the teacher and student", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Inductive biases are the characteristics of learning algorithms that influence their generalization behavior, independent of data. They are one of the main driving forces to push learning algorithms toward particular solutions (Mitchell, 1980). Having the right inductive biases is especially important for obtaining high performance when data or compute is a limiting factor, or when training data is not perfectly representative of the conditions at test time. Moreover, in the absence of strong inductive biases, a model can be equally attracted to several local minima on the loss surface; and the converged solution can be arbitrarily affected by random variations like the initial state or the order of training examples (Sutskever et al., 2013; McCoy et al., 2020; Dodge et al., 2020).\nThere are different ways to inject inductive biases into learning algorithms, for instance through architectural choices, the objective function, the curriculum, or the optimization regime. In this paper, we exploit the power of Knowledge Distillation (KD) to transfer the effect of inductive biases between neural networks. KD refers to the process of transferring knowledge from a teacher model to a student model, where the logits from the teacher are used to train the student. KD is best known as an effective method for model compression (Buciluǎ et al., 2006; Hinton et al., 2015; Sanh et al., 2019) which allows taking advantage of a huge number of parameters during training while having an efficient smaller model during inference.\nThe advantage of KD goes beyond model compression and it can be used to combine strengths of different learning algorithms (Kuncoro et al., 2019; 2020). Different algorithms vary in terms of the computational/memory efficiency at training/inference or the inductive biases for learning particular patterns. This makes them better at solving certain problems and worse at others, i.e., there is no “one size fits all” learning algorithm. Hence, it is important to explore the potential of KD for finding better trade-offs.\nThe question that we ask in this paper is: “In KD, are the preferences of the teacher that are rooted in its inductive biases, also reflected in its dark knowledge1, and can they thus be transferred to the student?”. We are interested in cases where the student model can realize functions that are realizable by the teacher, i.e., the student model is efficient with respect to the teacher model (Cohen et al., 2016), while the teacher has a preference inductive bias so that the desired solutions are easily learnable for the teacher (Seung et al., 1991).\nWe consider two scenarios where the teacher and the student are neural networks with heterogeneous architectures, hence have different inductive biases. We train the models, both independently and using KD, on tasks for which having the right inductive biases is crucial. In the first test case, we study RNNs vs. Transformers (Vaswani et al., 2017), on the subject-verb agreement prediction task (Linzen et al., 2016). In this task, we use LSTMs (Hochreiter & Schmidhuber, 1997) as the most widely used RNN variant. LSTMs are shown to perform better than vanilla Transformers in this task and their superior performance is attributed to their so-called “recurrent” inductive bias (Tran et al., 2018). First, we identify the sources of the recurrent inductive bias of LSTMs: sequentiality, memory bottleneck, and recursion, and design experiments to show the benefits of each. 
Then, we show that by distilling knowledge from LSTMs to Transformers, the solutions that the Transformer models learn become more similar to the solutions learned by LSTMs.
In the second test case, we study CNNs vs. MLPs in the context of the MNIST-C (Corrupted MNIST) benchmark (Mu & Gilmer, 2019), which is designed to measure the out-of-distribution robustness of models. We train our models on MNIST and evaluate them on Translated/Scaled MNIST. The particular form of parameter sharing in CNNs, combined with the pooling mechanism, makes them equivariant to these kinds of transformations (Goodfellow et al., 2016), which leads to better generalization in these scenarios compared to MLPs.
In our experiments and analysis on these two test cases2, we compare the behavior of different models, from a wide range of perspectives, when trained in different setups: (1) trained without KD, directly from the data; (2) trained with KD using a teacher with an architecture similar to the student's, i.e., self-distillation; and (3) trained with KD using a teacher with a different architecture that has stronger inductive biases suiting the task, compared to the student.
As the first step, in setup (1), i.e., no KD, we demonstrate how inductive biases arising from different architectural choices affect the generalization behavior of the models we study (§2.1 and §3.1). We show that the models with more suitable inductive biases not only have better accuracy but also converge to better solutions in terms of other metrics. We also show that different instances of a model with stronger inductive biases have less variance in terms of all the metrics.
Then, we apply KD to train the models and contrast the behavior of models trained with setups (2) and (3) with the models trained with setup (1), i.e., with KD vs. without KD. We show that regardless of the properties of the teacher, KD is a powerful technique in which the teacher model drives the student toward a particular set of solutions that is more restricted compared to the set of possible solutions that a student can converge to when it learns directly from data (§2.2, §3.2, and Appendix C).
1 Dark knowledge refers to the information encoded in the output logits of a neural network (Hinton et al., 2015).
2 The code for the input pipelines, models, analysis, and the details of the hyper-parameters used in our experiments is available at https://ANONYMIZED.
Next, as the main contribution of our paper over previous works that study KD, we contrast the behavior of models trained with setup (3) with the models trained with setups (1) and (2):
• We show that the performance of the student models in setup (3) increases, not only on in-distribution test sets (§2.2), but also on out-of-distribution data (§3.2). We demonstrate that this happens when the teacher has the right inductive bias and not necessarily otherwise, i.e., setup (2).
• In setup (3), besides performance, we show that the solution a student model converges to shares similar characteristics with the solution of its teacher, for instance in terms of confidence calibration (§2.2 and §3.2) and the per-sample behaviour of the model (Appendix E).
• We demonstrate that although the student model is merely exposed to the final logits of the teacher, the structure of the latent space of the student model becomes similar to that of the teacher, i.e., 
the relational similarity of the internal representations of the student and its teacher increases (§2.2 and §3.2).
As an example, in our second test case (MNIST-C), when training an MLP model with KD using a CNN teacher, the student model explores the solution space in ways more similar to its teacher. Figure 1 visualizes and compares the path that an MLP takes during training (Figure 1a) with that of a CNN (Figure 1b). The CNN model explores the surface in a completely different manner than the MLP, while the path of a student MLP distilled from the CNN model as the teacher (Figure 1c) is more similar to the CNN's." }, { "heading": "2 DISTILLING LSTMS INTO TRANSFORMERS", "text": "LSTMs and Transformers are the basic building blocks of many state-of-the-art models for sequence modeling and natural language processing. Transformers are an expressive class of models that do extremely well on many tasks where the training data is adequate in quantity (Devlin et al., 2019; Keskar et al., 2019; Radford et al., 2019; Brown et al., 2020). Several studies, however, have shown that LSTMs can perform better than Transformers on tasks requiring sensitivity to (linguistic) structure, especially when the data is limited (Tran et al., 2018; Dehghani et al., 2019).
We chose the subject-verb agreement prediction task, introduced by Linzen et al. (2016), as the test case, as it yields a meaningful difference between LSTMs and Transformers (Tran et al., 2018). We compare these two families of models and conduct experiments to emphasize the differences between them when trained independently and through KD.
Recurrent Inductive Bias. Among sequence modeling architectures, models with recursion are particularly powerful for natural language processing due to their adequacy for modeling hierarchical structures (Linzen et al., 2016). The recursion in a model can be implemented in various ways, as in Recurrent Neural Networks (Elman, 1990), Recursive Neural Networks (Socher et al., 2010; Le & Zuidema, 2014), and Universal Transformers (Dehghani et al., 2019; Hao et al., 2019). While, theoretically, both recurrent neural networks (RNNs) and Transformers can deal with finite hierarchical structures, empirical results indicate the superiority of RNNs over Transformers (Tran et al., 2018; Dehghani et al., 2019; Hahn, 2020).
In the literature (Sutskever et al., 2013; Dehghani et al., 2019), the inductive bias of RNNs is referred to as the recurrent inductive bias. Here, we distinguish between three main sources of this bias:
1. Sequentiality: There is an inherent notion of order in the architecture that forces the model to access the next tokens in the input one by one and process them sequentially.
2. Memory bottleneck: The model has no direct access to the past tokens and has to compress all the information from the past in a hidden state, which is accessible when processing a new token.
3. Recursion: The model recursively applies the same function on the varying input at every step.
Transformers (Vaswani et al., 2017), in contrast, process the input in parallel. Although a weak notion of order is encoded by positional embeddings, no explicit assumption is made in the connectivity structure of the architecture. Moreover, they have a global receptive field and can access all tokens through self-attention. Finally, standard Transformers are not recursive. However, the standard Transformer can be modified to have an architecture with specifications similar to those of RNNs.
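Throughout the experiments that follow, distillation trains the student on the teacher's output distribution, following the soft-target objective of Hinton et al. (2015). A minimal sketch is given below, assuming PyTorch; the temperature and the mixing with the hard-label cross-entropy are common illustrative choices rather than the paper's exact hyper-parameters:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target KD objective (Hinton et al., 2015): KL divergence between
    # the temperature-scaled teacher and student output distributions,
    # here mixed with the ordinary cross-entropy on the hard labels.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                    # standard T^2 gradient scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```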
We provide empirical results to demonstrate the benefits of these different sources of inductive biases of RNNs. For this purpose, we design experiments with variants of Transformers in which we attempt to approximate some of the RNNs’ assumptions.\nTask and Models. We study the performance of LSTMs and variants of Transformers on the task of predicting number-agreement between subjects and verbs in English sentences. We investigate the quality of the solutions they converge to when they are trained independently and when they are trained through distillation. We use the subject-verb agreement dataset of Linzen et al. (2016), for which the size of the training set is ∼121k examples and the size of the test set is ∼1m. Succeeding at this task is a strong indicator that a model can learn syntactic structures and is therefore proposed by Linzen et al. (2016) as a proxy for assessing the ability of models to capture hierarchical structure in natural language. It is shown that RNNs have better inductive biases to learn this compared to standard Transformers (Tran et al., 2018; Dehghani et al., 2019). In this task, examples are grouped into different levels of difficulty based on the number of “agreement attractors”3, and distance between the verb and its subject. Hence, we report both micro accuracy (µ−Accuracy) and macro accuracy over different groups in terms of distance (D−Accuracy) and numbers of attractors (A−Accuracy). Similar to Tran et al. (2018), we follow two setups: 1) when the learning objective is next word prediction, i.e., language modeling (LM); 2) when we directly optimize for predicting the verb number, singular or plural, i.e., classification. In the LM setup, we look at the probabilities predicted when the target of the prediction is the verb of interest, and see whether the probability of the correct form of the verb is higher than the other form (singular vs plural). In the classification setup, the input to the model is a sentence up to the position of the verb of interest and the model predicts whether the verb at that position is singular or plural.\nIn the LM setup, we employ two unidirectional LSTMs with different sizes, LSTM and Small LSTM, and two Transformers, Transformer and Small Transformer. In this setup, corresponding LSTMs and Transformers have roughly the same number of parameters. In the classification setup we compare the following models: (1) a standard unidirectional LSTM (sequentiality + memory bottleneck + recursion) (2) Transformer: Transformer encoder with a class token (CLS) for classification, BERT (Devlin et al., 2019) style, (3) Transformer-seq: Transformer encoder with future masking where the classification is done using the representation of the last token4 (sequentiality), (4) UniversalTransformer-seq: Universal Transformer (Dehghani et al., 2019) encoder, in which the parameters are shared in depth, with future masking (sequentiality + recursion). Appendix H provides more details on the architectures." }, { "heading": "2.1 THE IMPORTANCE OF RECURRENT INDUCTIVE BIAS", "text": "In this section, we report results without distillation that illustrate the merits of the recurrent inductive bias. Table 1 shows the performance of the models when trained with the LM objective. A first important observation, in line with the results of Tran et al. (2018), is that LSTMs achieve better accuracy on the subject-verb agreement task compared to Transformers. 
Even for instances of Transformer language models that achieve better (lower) perplexity, the accuracy on this task is worse compared to LSTM instances.
3 Agreement attractors are intervening nouns with a different number than the number of the subject. E.g., given the input “The keys to the cabinet (is?/are?).”, the word “cabinet” is an agreement attractor.
4 Note that future tokens are masked out by default when using a Transformer in the decoder mode, e.g., in the LM setup.
Since both models achieve good scores on the training set (Appendix D), this suggests that LSTMs better capture relevant patterns, such as the hierarchical structure of the input, which leads to better generalization on this task.
Figure 2 illustrates the accuracy versus perplexity of several instances of each model in the LM setup. Note that although perplexity is an indicator of how well the model is optimized given the objective function, the accuracy is the metric that matters and shows the models' generalization on the subject-verb agreement task. Later, we will show how, using KD, the behavior of Transformers in terms of accuracy versus perplexity changes and becomes more similar to that of the LSTM teachers.
There is another interesting observation in Figure 2. In this plot, for each model, we have two different settings: large and small variants, as measured by the number of trainable parameters. More parameters for a model, given a fixed architecture, mean a richer hypothesis space. We can see that while for the LSTMs increasing the size of the model results in better performance, for the Transformers increasing the number of parameters results in worse performance. This aligns with the bias-variance trade-off argument: when using a model with weaker biases for the task at hand, in this case Transformers, if we fix the amount of data, richer hypothesis spaces may hurt generalization because they increase variance. In contrast, adding more capacity leads to better accuracy in LSTMs, as their stronger inductive biases control the generalization error.
In Table 2 we show the results of models trained on the classification objective. We compare the LSTM with variants of Transformers with different inductive biases. The table shows that, similar to the LM results, the LSTM achieves the best performance. Interestingly, comparing all four models, we find that the performance steadily increases as more aspects of the recurrent inductive bias are included. This is illustrated in Figure 3a, with the filled circles on the black dashed line (no distillation).
As another indicator of the quality of the solutions that different models converge to in the classification setup, we look into their confidence calibration. Confidence calibration captures how well the likelihood (confidence) of the prediction of the model predicts its accuracy (Guo et al., 2017). For a well-calibrated model, if we bin the confidence scores and compute the accuracy for each bin, the accuracies are perfectly correlated with the confidence values. The Expected Calibration Error (ECE) is computed as the distance between the calibration curve of the model and the perfect calibration curve (DeGroot & Fienberg, 1983). In Figure 3b, we plot the ECE (Guo et al., 2017) of the models in the classification setup, with the filled circles on the black dashed line (no distillation).
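For reference, ECE can be computed with a simple binning procedure. A minimal sketch, assuming numpy; the 10 equal-width confidence bins are an illustrative choice rather than the exact binning behind the figures:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: bin predictions by confidence, then take the confidence-weighted
    # average gap between per-bin accuracy and per-bin mean confidence.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```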
In line with the trends in the performances of these models, the expected calibration error decreases as we move from Transformer toward LSTM.\nIn summary, the results from this section support the importance of recurrence for solving this task (Tran et al., 2018; Dehghani et al., 2019). Additionally, as shown in Figures 3a and 3b, we find a decreasing trend in the variance of the models, i.e., adding more inductive biases to the models decreases their variance. This is empirical evidence that supports the relation between variance of the solutions a model converges to and its inductive biases.\n2.2 TRANSFERRING THE EFFECT OF RECURRENT INDUCTIVE BIAS\nIn this section, we show distilling knowledge from LSTM to Transformer can close the gap between their performance by pushing the Transformer to converge to solutions more similar to LSTM’s.\nTable 3 and Table 4 summarize the distillation results, when the training objective is language modeling and classification respectively. A first general observation is that, for these tasks and setups, distilling a model into an identical model could result in a decrease in the performance. Note that whether self-distillation results in improved performance could potentially depend on many different factors such as the architecture of the model, optimization algorithm, and details of the distillation process (Furlanello et al.,\n2018; Mobahi et al., 2020). Despite no significant changes in the performance with self-distillation, we can improve the performance of the Transformers through distillation from LSTM teachers.\nTo check whether this improvement is due to the transfer of the effect of inductive biases through distillation and whether distillation helps students to converge to solutions similar to their teachers, we run a series of analyses. In Figure 4 we see how teacher LSTMs pull student Transformers toward solutions with higher accuracy on the subject-verb agreement task in the LM setup. This happens even when the perplexity of the student Transformer is higher (worse) than the independent Transformer.\nFigure 3, also shows the effects of distillation on each of the four models we study in the classification setup. In Transformer-based models, we get the most significant improvement both in accuracy and ECE when the teacher is an LSTM. As the recurrent inductive biases of the teacher get weaker, the amount of improvement in the performance of student models decreases. Figure 5 shows the effect of KD on the calibration, given a student Transformer and an LSTM teacher.\nIs the improvement in calibration merely the product of using soft targets? Mueller et al. (2019) shows training neural networks with soft targets (e.g. through label smoothing) results in models that are better calibrated. On the other hand, KD has a regularization effect similar to label smoothing (Yuan et al., 2019; Tang et al., 2020). Given the lack of significant improvement in ECE in the self-distillation experiments (Figure 3b), it is more likely that the cause of the improvement in ECE when distilling LSTMs into Transformers is beyond the label smoothing effect of KD.\nTo further explore and better understand the effects of KD, we compare the internal representations of these models besides their final output. Figure 6 shows the 2D projection of the relational similarity of representations5 (Laakso & Cottrell, 2000) from the penultimate layer of the models (see Appendix B for details). 
We see that, in the LM setup, the internal representations of student Transformers that are distilled from LSTMs are structured differently compared to independent Transformers and are more similar to the LSTM models. For the classification objective, we also see that the distilled models are further away from their independent versions. This supports the idea that the effect of distillation goes beyond the output of the models and their final performances." }, { "heading": "3 DISTILLING CNNS INTO MLPS", "text": "To evaluate the robustness of our findings on the transfer of inductive biases through KD, we performed a second case study, using different neural architectures and a different task. We use convolutional neural networks (CNN) vs. multilayer perceptrons (MLP) as two families of models with different inductive biases. CNNs are the de facto choice for processing data with grid-like topology. Sparse connectivity and parameter sharing in CNNs make them an effective and statistically efficient architecture. The particular form of parameter sharing in the convolution operation makes CNNs equivariant to translation (Goodfellow et al., 2016). Note that we can view CNNs as MLPs with an infinitely strong prior over their weights: first, the weights for each hidden unit are identical to the weights of its neighbor, up to a shift in space; second, the weights outside the spatially contiguous receptive field assigned to each hidden unit are zero.\nTask and Models. We study CNNs and MLPs in the context of the Corrupted-MNIST dataset (MNIST-C) (Mu & Gilmer, 2019), which aims at benchmarking out-of-distribution robustness. We train the models on the original MNIST training set and evaluate them on the Translated and Scaled MNIST test sets from MNIST-C. In this scenario, the inductive biases of CNNs help them generalize better than MLPs. Our CNN architecture is a stack of convolution and pooling layers. Combining convolution and pooling over spatial regions results in invariance to translation. To have CNNs that can learn to be invariant to other transformations, like changes in scale, we can use cross-channel pooling (Goodfellow et al., 2013), where we pool over separately parametrized convolutions that have learned to detect different transformed versions of the same underlying features. Our MLP is simply a stack of fully-connected layers. More details on the architectures are in Appendix H.\n5Note that the relational similarity captures the similarity of the structures, not the absolute values." }, { "heading": "3.1 THE IMPORTANCE OF TRANSLATION EQUIVARIANCE.", "text": "Table 5 presents the accuracy and ECE of CNNs and MLPs when trained independently. All models are trained on the original MNIST training set and tested on the Scaled and Translated sets from MNIST-C. Even though CNNs’ accuracy and ECE on the original MNIST test set are only slightly better than MLPs’ (.992 vs .985), there is a rather large gap between their performances on the Scaled (.962 vs. .794) and Translated (.981 vs. .373) test sets. This is as expected since the inductive biases of CNNs make them suitable for these types of generalizations. Moreover, the variance of the results from the CNNs is much smaller than that of the MLPs. This is due to the fact that different instances of a model with stronger inductive biases are more likely to converge to solutions that belong to the same basin in the loss landscape (Neyshabur et al., 2020) (see Appendix C for more analysis)."
}, { "heading": "3.2 BETTER OUT OF DISTRIBUTION GENERALIZATION WITH KD.", "text": "Table 6 shows that distillation from a CNN into an MLP improves both accuracy and ECE for all three test sets, decreasing the gap for the Scaled test set (.904 vs. .794 without KD), and much more improvement on the performance on the Translated test set (.510 vs. .373 without KD). We also see a lower variance in the performance of MLP models that are trained through KD with CNN teachers.\nWe further compare the results of all possible pairs of models as teachers and students, to take into account different effects of KD that can potentially improve the performance of the student model. Although self-distillation results in a slightly better performance in MLPs, perhaps due to the regularization effect of distillation (Mobahi et al., 2020; Tang et al., 2020), the improvement in the performance of MLPs with an MLP teacher is much less compared to when the teacher is a CNN. Regardless of the teacher (MLP or CNN), KD results in slightly lower performances in student CNNs compared to CNNs trained independently (similar to results of an LSTM student in test case 1).\nFurthermore, in Figure 7, we compare the relational similarity of the representations from penultimate layers of independently trained CNNs and MLPs as well as their distilled ones. First of all, as expected based on our assumptions about the inductive biases of these models, MLPs have more variance than CNNs. Second, distilling from a CNN to an MLP results in representations that are more similar to the representations learned by CNNs, while this is not the case with MLPs as teachers\nand CNNs as students. Moreover, for both CNNs and MLPs, self-distillation does not significantly change the representations they learn.\nFinally, we compare the paths the models follow during training until they converge to a solution. To plot the training path of a model, we compute the pairwise representational similarity between different stages of training of the model. Figure 1, illustrates the training path for an independent MLP, an independent CNN, and an MLP that is distilled from a CNN. While MLP and CNN seem to have very different behavior during training, the student MLP with a CNN as its teacher behaves differently than an independent MLP and more similar to its teacher CNN. This is interesting, in particular, since the student model is only exposed to the final solution the teacher has converged to and no information about the intermediate stages of training is provided in the offline KD." }, { "heading": "4 CONCLUSIONS", "text": "The no free lunch theorem states: for any learning algorithm, any improvement on performance over one class of problems is balanced out by a decrease in the performance over another class (Wolpert & Macready, 1997). Neural networks with different architectures have different inductive biases and this is reflected in their performance across different tasks. In this paper, we investigate the power of KD to enable benefiting from the advantages of different models at the same time. First, we demonstrate how inductive biases arising from different architectural choices affect the generalization behavior of the models we study. 
We further show that when a model has the right inductive bias, we can transfer its knowledge to a model that lacks the needed inductive bias and indicate that solutions that the student model learns are not only quantitatively but also qualitatively reflecting the effects of the inductive biases of the teacher model." }, { "heading": "A KNOWLEDGE DISTILLATION IN NEURAL NETWORKS", "text": "Knowledge Distillation is a technique that transfers knowledge from one model to another (Hinton et al., 2015). Hinton et al. (2015) suggest that the power of KD is mostly in being able to transfer the useful information that is embedded in the soft targets of the teacher model, e.g., the relation between the output classes as captured by the teacher model. This is often referred to as dark knowledge. Phuong & Lampert (2019) studies KD from a theoretical point of view in a simplified setting where the task is a binary classification, and teacher and student are linear models. They attribute the success of distillation to three main factors: (1) data geometry, (2) optimization bias, and (3) strong monotonicity. And more recently Tang et al. (2020), conduct extensive analysis and identify three sources for why KD helps: (1) label smoothing, (2) example re-weighting based on teacher’s confidence, and (3) prior knowledge of optimal output layer geometry.\nThe most well-known use of KD is to compress a large, unwieldy model or an ensemble model into a smaller model. Empirically, many people have found that bigger models are easier to train (often explained with the ‘lottery ticket hypothesis’ (Frankle & Carbin, 2019)); KD makes it possible to distill the knowledge in the large model into a much smaller model, and thus in some sense offer the best of both worlds (Buciluǎ et al., 2006; Hinton et al., 2015; Srinivas & Babu, 2015). Distilling knowledge from a very big model or an ensemble of models with similar or heterogeneous architectures that are trained on the same or different tasks into a single model with much fewer parameters can lead to similar or sometimes even better performance compared to the teachers (Luo et al., 2019; Liu et al., 2019; Hinton et al., 2015; Tan et al., 2019; Kim & Rush, 2016).\nPrevious work has examined the effectiveness of KD in different settings: where the teacher is bigger than the student, but both have similar building blocks (Hinton et al., 2015; Kim & Rush, 2016; Sanh et al., 2019); where teacher and student are of similar size and architecture (Furlanello et al., 2018; Freitag et al., 2017); or where the student and teacher have fundamentally different architectures (Frosst & Hinton, 2017; Tang et al., 2019; Luo et al., 2019; Ahn et al., 2019).\nKD has also been proposed as an interpretation technique, where the knowledge of a big complex model is distilled into a more interpretable model (Craven & Shavlik, 1995; Craven, 1996; Frosst & Hinton, 2017); Or as a method to compare the capacity and expressiveness of different models (Maheswaranathan et al., 2019; Saxe et al., 2018). Offline Distillation In most cases, KD is applied in an offline setting, i.e., we first train the teacher network and use the trained teacher to train the student, while the parameters of the teacher are fixed. This is the standard distillation process introduced by Buciluǎ et al. (2006); Hinton et al. (2015). We apply this setup in our experiments since it is the most common approach. There are other possible settings for KD, e.g. 
online distillation, where teacher and student models are trained simultaneously. Distillation Loss There are several different ways of computing the distillation loss: using only the output of the teacher or taking intermediate layers into account as well (Anil et al., 2018; Ahn et al., 2019; Sun et al., 2019; Park et al., 2019; Tung & Mori, 2019; Buciluǎ et al., 2006; Hinton et al., 2015). Potentially, using these alternative losses could lead to transferring different kinds of knowledge depending on the tasks and the configurations of the models. While it is worth doing a thorough comparison of all these techniques, in this paper we have focused on the most commonly used loss introduced by Hinton et al. (2015), which is based on the Kullback-Leibler divergence between output distributions of the teacher, i.e., soft targets, and the output distributions of the student. The output distributions of the teacher and student model, Pt and Ps, are computed similarly, with Equation 1:\nP(xi) = exp(zi/τ) / ∑j exp(zj/τ), (1)\nwhere τ > 1 is the softmax temperature and z are the logits from the model.\nThe distillation loss is H(Pt, Ps), where H is the cross-entropy loss, computed as:\nH(Pt, Ps) = − ∑x Pt(x) log Ps(x). (2)\nWhen KD is applied as a means for model compression, it is common to compute the total loss as a mixture of the distillation loss and the actual loss. Since our focus in this paper is on how much the student model can learn from the teacher model, in our experiments we use pure distillation.\nB VISUALISATION OF REPRESENTATIONAL SIMILARITY OF THE ACTIVATIONS FROM THE PENULTIMATE LAYER\nTo compare and visualize the state of m different models relative to each other (at convergence or any stage of training), we propose using representational similarity (Laakso & Cottrell, 2000; Abnar et al., 2019) of the activations from their penultimate layer.\nNote that representational similarity measures how similarly two models learn to represent the data in terms of the global “relations” between all the data points, not local example-by-example similarity. In fact, the “direct” similarity between the activations of the penultimate layers of two models can be quite low, while having high representational similarity. This is because models can keep the relations between data points similar while embedding data into completely different representational spaces.\nThis is particularly useful when these models do not have the same architecture and their parameter spaces are not directly comparable. To do so, given a sample set of size n from the validation/test set (e.g. 1000 examples), we feed them to the forward pass of each model to obtain the representation from the penultimate layer of the models. Then, for each model, we calculate the similarity of the representations of all pairs from the sample set using the dot product, which leads to a matrix of size n× n. We use the sample similarity matrix associated with each model to compute the similarity between all pairs of models. To do this, we compute the dot product of the corresponding rows of these two matrices after normalization and average the similarities over all rows, which leads to a single scalar. Given all possible pairs of models, we then have a model similarity matrix of size m×m. We then apply a multidimensional scaling algorithm6 to embed all the models in a 2D space based on their similarities.\nThe code for projecting the representational similarity of the activations from the penultimate layer to a 2D space can be found in https://ANONYMIZED."
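Since the repository link above is anonymized, the following is a minimal sketch of the Appendix B procedure as we read it — dot-product sample similarities, row-normalized relational comparison, then scikit-learn's MDS. The similarity-to-distance conversion is our assumption, as the exact variant is not specified.

```python
import numpy as np
from sklearn.manifold import MDS

def model_similarity_matrix(penultimate_acts):
    """penultimate_acts: list of m arrays, each (n, d_i) -- activations of
    n shared probe examples from the penultimate layer of each model."""
    # Per model: n x n matrix of pairwise dot products between examples.
    sims = [a @ a.T for a in penultimate_acts]
    # Row-normalize so that only the *relational* structure is compared.
    sims = [s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-12) for s in sims]
    m = len(sims)
    msim = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            # Mean dot product of corresponding rows -> scalar model similarity.
            msim[i, j] = np.mean(np.sum(sims[i] * sims[j], axis=1))
    return msim

def embed_models_2d(msim):
    # MDS expects dissimilarities; convert similarities to distances first.
    dist = msim.max() - msim
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)
```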
}, { "heading": "C DO THE DISTILLED MODELS CONVERGE TO THE SAME BASIN IN THE LOSS", "text": "LANDSCAPE?\nTo gain a better understanding of the effect of KD and inductive biases of the models from an optimization point of view, we looked into how different models relate in terms of the solutions they converged to in the loss landscape.\nTo do so, inspired by the discussion in (Neyshabur et al., 2020), we look into different pairs of models and check if their final solution belong to the same flat basin7 of the loss landscape or they converged to completely different optima. To do so, given two models,m1 and m2, we take their parameters, θ1 and θ2, and evaluate a series of models obtained by linearly interpolating θ1 and θ2, with different coefficient, i.e., the parameters of model mi is computed as θi = λiθ1 + (1− λi)θ2. It has been shown (Neyshabur et al., 2020) that if the converged solutions of m1 and m2 belong to the same flat basin of the loss landscape, the models obtained by linearly interpolating their parameters are well-behaved because they also remain in that basin. However, for two models that converge to different optima and don’t share the flat basin of the loss landscape, the liner interpolations do not lead to well behave models.\nHere, we first, compare different instances of MLPs and CNNs. We train two instances of the same architecture with the same initial state but different random seeds (which would lead to different ordering of training examples, and different dropouts). Figure 8 shows the loss on the test set (y axis) for the two trained instances, as well as models obtained by linear interpolation of the two\n6https://scikit-learn.org/stable/modules/generated/sklearn.manifold.MDS.html 7Basin refers to areas in the parameter space where the loss function has relatively low values.\nmodels with different λs (x axis). In the case of MLPs, there is a large barrier between the two instances, showing that these models, even with the same initialization, will converge to solutions in different basins of the loss landscape. In contrast, for CNNs, their strong inductive biases drive them to converge to the solutions in the same basin, regardless of the stochasticity of the training process. This also supports the higher variance in the results we report for models with weaker inductive biases in §2.2 and §3.2.\nNext, we look into the effect of distillation on the diversity of the basins different instances of models converge to. Figure 9 shows the performance barriers of different pairs of MLPs (MLP#1 and MLP#2), when they are trained independently (i.e. when the teacher is data), as well as trained through KD, with an MLP and a CNN model as teachers.\nFirst of all, we observe that two models, initialized similarly but with different random seeds, trained through distillation with the same teacher are likely to converge to the same area in the loss surface (plots (c) and (f)). This happens regardless of the inductive bias of the teacher and student models. Comparing the plots in the diagonal of Figure 9, we can see that for both CNN → MLP (plot f) and MLP →MLP (plot c) the performance barrier is rather small in contrast to the large barrier between two independently trained MLPs (plot a). 
This indicates the power of KD to narrow down the search space of the student model and drive it to a particular set of solutions.\nMoreover, comparing the distilled instance of a model with an independently trained instance with the same initialization and different random seeds, the first column of Figure 9 (plots (a), (b), and (d)), we see that the distilled instances and independent instances are not in the same basin, regardless of the teacher but the barrier is larger (larger bump in the plots) when the teacher has a stronger inductive bias (CNN → MLP ). Similarly, as depicted in the second and third columns of Figure 9, while models distilled from the same teacher seem to be close in the loss surface (plots (c) and (f)), models distilled from different teachers (plot (e)) seem to be further away (have a larger barrier in between)." }, { "heading": "D PERFORMANCE SCORES ON THE TRAINING DATA", "text": "In the paper, for our first test case, we report the performance of LSTM and different Transformer models on the test set, when trained independently and with knowledge distillation. We observe that LSTMs achieve better accuracy on the test set compared to Transformers due to their inductive biases. Here, we also report the performance of all the models, for both classification and LM setup, on the training set, which confirms that Transformer models have enough capacity to achieve good scores on the training data.\nThis solidifies the narrative that the inductive bias of LSTMs is helping with generalization and rules out, for example, the possibility that LSTMs have a higher capacity or are trained better." }, { "heading": "E PER-SAMPLE BEHAVIOUR", "text": "To compare the models with each other and better understand how distillation affects the student models, we take a closer look at their per sample behavior and investigate if the errors a student model makes are more similar to its teacher’s errors. Here, we look into the error overlap of the students and teachers, which reflects their similarity in terms of their behavior per data example. This similarity can be another proxy to measure the similarity of the solutions learned by the models, with and without distillation. Figures 10, 11, and 12 illustrate the error overlap between different models as Venn diagrams when they are trained independently and when we use distillation.\nIn Figure 10, we observe that when the Transformer and LSTM models are trained independently, two independent LSTMs behave more similarly compared to two Transformers (Figures 10b and 10a). Given a similar number of trainable parameters, i.e., similar capacity for LSTMs and Transformers, this again supports the claim that models with stronger inductive biases converge to more similar solutions (Also shown in Figure 3a).\nWhen we apply KD in a cross-architecture setting, with an LSTM teacher and a student Transformer, Figures 10d and Figure 10c, the student Transformer behaves more similarly to the LSTM teacher and an independent LSTM, compared to the independent version of itself. 
This confirms that through distillation the way the student model solves the task becomes more similar to the way the teacher model solves the task.\nWe have similar observations in Figures 11 and 12, where the errors of a student MLP are more similar to the errors of the teacher CNN than to those of an independently trained MLP.\nF IMPACT OF THE QUALITY OF THE TEACHER\nHere, in an ablation experiment for our second case study, we investigate the impact of the quality of the teacher on the in-distribution set on the generalization of the student on the out-of-distribution set. To do so, given a CNN as the teacher and an MLP as the student, we take snapshots of a CNN model during different stages of training as teachers with different qualities (we use 9 different teachers). Using each teacher, we train an MLP student.\nFigure 13a presents the quality of the different teachers based on different test sets: Vanilla MNIST (in-distribution), Translated MNIST (out-of-distribution), and Scaled MNIST (out-of-distribution). For the CNN models that are trained with ground truth labels on vanilla MNIST, as expected, as the number of training iterations grows, the performance of the model on all three test sets increases. In Figure 13b, we see that, in general, the accuracy of the MLP students follows the same trend, i.e., a better CNN teacher results in a better MLP student. Given the results of an independently trained MLP from Table 5a, the benefit of training an MLP via distillation for better generalization on in- and out-of-distribution sets only kicks in when we have a CNN teacher with a quality above a certain threshold.\nG IMPACT OF THE DATASET USED IN THE DISTILLATION STEP\nIn our experiments in this paper, our focus is on the setups where we use the same dataset that was used to train the teacher model to transfer its knowledge to the student model.\nWe use this setup mainly because we want to see how effectively the distillation process transfers the generalization behavior of the teacher in isolation, as using a different dataset in the distillation step would add another factor. In other words, during the training of the teachers as well as the students (i.e., the distillation step), we only use samples from the in-distribution set to make sure the desired generalization behavior is not apparent from the dataset used to train either the teacher or the student models.\nIn this section, we extend the CNN-MLP experiments on Corrupted-MNIST and look into the performance of the student model when we use samples from the out-of-distribution set in the distillation step for training the student.\nTable 9 presents the results of an MLP student when we use different training sets in the distillation step. We can see that when distilling knowledge from a CNN teacher that is trained on vanilla MNIST, if we use translated or scaled MNIST in the distillation step, the student MLPs achieve relatively high performance on the corresponding test sets, while the performance on the other out-of-distribution set drops compared to when we use vanilla MNIST in the distillation step. To have complementary information for better comparisons, Table 10 shows the accuracies of MLPs when they are directly trained on each of these datasets. Interestingly, we observe that when trained through KD, the performance of the student MLPs matches or exceeds the performance of MLPs trained with ground truth labels, and additionally, they achieve better performance on the other datasets. 
For example, for the case when we use translated MNIST to train the MLPs, the accuracy of the student MLPs matches the accuracy of MLPs trained with the ground-truth labels, while the accuracies on the other two datasets (vanilla and scaled) are higher for the student MLPs trained with CNN teachers." }, { "heading": "H DETAILED MODEL ARCHITECTURES AND TRAINING SETUP", "text": "For the subject-verb agreement task, we study Transformers and LSTMs. In the LM setup, we use two sizes for each architecture: LSTM: two-layer unidirectional LSTM, with a hidden size of 1024. Small LSTM: two-layer unidirectional LSTM, with a hidden size of 512. Transformer: six-layer Transformer decoder with a hidden size of 512 and 8 heads. Small Transformer: six-layer Transformer decoder with a hidden size of 256 and 8 heads.\nIn the classification setup, we employ an LSTM and three variants of Transformer, where the LSTM has two layers with a hidden size of 256, and the Transformers have 6 layers, 8 heads, and a hidden size of 128. We use a hidden size of 256 for the UniversalTransformer-seq since its parameters are shared in depth; with the same hidden size as the other Transformers, it would have fewer parameters.\nOn the MNIST-C dataset, we study CNNs and MLPs. Our CNN has two 3×3 convolutions, followed by a max-pooling layer over spatial dimensions, followed by another 3×3 convolution and a maxout (max-pooling over the channel dimension) layer (Goodfellow et al., 2013). Finally, global averaging is done over spatial dimensions before the projection layer. The MLP model simply has three fully connected layers.\nFor training the independent models we use the Adam optimizer (Kingma & Ba, 2014) with an exponential-decay learning rate scheduler, and for the student models in the distillation process, we use the Adam optimizer with a cosine-decay-with-restarts (Loshchilov & Hutter, 2017) learning rate scheduler. The hyperparameters related to the regularization and learning rate schedulers are tuned separately for each model/experiment. For each model, we report the set of hyper-parameters that gives the best average performance across multiple trials with different random seeds for initialization." }, { "heading": "I CODE", "text": "The code for all the analysis and experiments, including the input pipelines, models, and the details of the hyper-parameter sets used in our experiments, is available at https://ANONYMIZED, to facilitate the replication of all the experiments.\nFor the anonymized submission, we remove the links to the repository of the code from the paper and upload the code as a zip file as part of the supplementary materials for our submission." } ]
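To make the Appendix H description of the MNIST-C CNN concrete, here is a hedged PyTorch sketch. Only the layer types are specified above, so the channel widths and the number of maxout pieces below are illustrative guesses, not the authors' exact values.

```python
import torch
import torch.nn as nn

class MaxoutCNN(nn.Module):
    """Two 3x3 convs, spatial max-pooling, a third 3x3 conv, cross-channel
    max-pooling (maxout), global average pooling, then a linear projection."""
    def __init__(self, n_classes=10, pieces=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # pooling over spatial dims
            nn.Conv2d(32, 64 * pieces, 3, padding=1),
        )
        self.pieces = pieces
        self.proj = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.features(x)                         # (B, 64*pieces, H, W)
        b, c, hh, ww = h.shape
        # Maxout: max over groups of `pieces` adjacent channels.
        h = h.view(b, c // self.pieces, self.pieces, hh, ww).max(dim=2).values
        h = h.mean(dim=(2, 3))                       # global average pooling
        return self.proj(h)
```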
2,020
null
SP:a2081fef3126e03544d6c62d6b4b0e15f79d1cc6
[ "In this paper, the authors proposed the MTGAN framework, a GAN-based approach to the task of anomaly detection in hyperspectral images. The main idea behind this work is to exploit twin-least-square loss to perform background modeling in feature and image domains to alleviate the gradient vanishing problem of the previous GAN-based anomaly detection methods. Specifically, they proposed i) an MCM-aware strategy to construct the multi-scale priors, ii) a twin-least-square loss on GAN for training stabilization, and iii) an anomaly rejection loss for background estimation. The experiments on multiple benchmarks show the superiority of the MTGAN to the state of the art hyperspectral anomaly detection methods." ]
Hyperspectral anomaly detection under high-dimensional data and interference of deteriorated bands without any prior information has been challenging and attracted close attention in the exploration of the unknown in real scenarios. However, some emerging methods based on generative adversarial network (GAN) suffer from the problems of gradient vanishing and training instability while struggling to strike a balance between performance and training sample limitations. In this work, aiming to remedy the drawbacks of existing methods, we present a novel multi-scale covariance map (MCM)-aware twin-least-square GAN (MTGAN). Instead of the widely used single-scale Gaussian hypothesis background estimation, in MTGAN, we introduce the MCM-aware strategy to construct multi-scale priors with precise second-order statistics, thereby implicitly bridging the spatial and spectral information. Thus, we reliably and adaptively represent the prior of HSI to remedy the lack of priors. Moreover, we impose the twin-least-square loss on GAN, which helps improve the generative ability and training stability in feature and image domains, overcoming the gradient vanishing problem. Finally, the network enforced with a new anomaly rejection loss establishes a pure and discriminative background estimation. Experiments demonstrate that the average detection accuracy of MTGAN reaches 0.99809, which is superior to the state-of-the-art algorithms.
[ { "affiliations": [], "name": "TWIN-LEAST-SQUARE GAN" } ]
[ { "authors": [ "Imtiaz Ahmed", "Xia Ben Hu", "Mithun P. Acharya", "Yu Ding" ], "title": "Neighborhood structure assisted nonnegative matrix factorization and its application in unsupervised point anomaly detection", "venue": null, "year": 2001 }, { "authors": [ "Salem Manel Ben", "Ettabaa Karim Saheb", "Hamdi Mohamed Ali" ], "title": "Anomaly detection in hyperspectral imagery: an overview", "venue": "In International Image Processing, Applications and Systems Conference (IPAS),", "year": 2014 }, { "authors": [ "Gilles Blanchar", "Gyemin Lee", "Clayton Scott" ], "title": "Semi-supervised novelty detection", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Gutflaish Eyal", "Kontorovich Aryeh", "Sabato Sivan", "Biller Ofer", "Sofer Oded" ], "title": "Temporal anomaly detection: Calibrating the surprise", "venue": "In 33rd AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Zhiqiang Gong", "Ping Zhong", "Weidong Hu" ], "title": "Statistical loss and analysis for deep learning in hyperspectral image classification", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Nico Grnitz", "Marius Kloft", "Konrad Rieck", "Ulf Brefeld" ], "title": "Toward supervised anomaly detection", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Jinjin Gu", "Yujun Shen", "Bolei Zhou" ], "title": "Image processing using multi-code gan prior", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Qiandong Guo", "Bing Zhang", "Qiong Ran", "Lianru Gao", "Jun li", "Antonio Plaza" ], "title": "Weighted-rxd and linear filter-based rxd: Improving background statistics estimation for anomaly detection in hyperspectral imagery", "venue": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,", "year": 2014 }, { "authors": [ "Ning Huyan", "Xiangrong Zhang", "Huiyu Zhou", "Licheng Jiao" ], "title": "Hyperspectral anomaly detection via background and potential anomaly dictionaries construction", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2019 }, { "authors": [ "Charis Lanaras", "Emmanuel Baltsavias", "Konrad Schindler" ], "title": "Hyperspectral super-resolution by coupled spectral unmixing", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Lu Li", "Wei Li", "Qian Du", "Ran Tao" ], "title": "Low-rank and sparse decomposition with mixture of gaussian for hyperspectral anomaly detection", "venue": "IEEE Transactions on Cybernetics,", "year": 2020 }, { "authors": [ "Yezheng Liu", "Zhe Li", "Chong Zhou", "Yuanchun Jiang", "Jianshan Sun", "Meng Wang", "Xiangnan He" ], "title": "Generative adversarial active learning for unsupervised outlier detection", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Weixin Luo", "Wen Liu", "Dongze Lian", "Jinhui Tang", "Shenghua Gao" ], "title": "Video anomaly detection with sparse coding inspired deep neural networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Ying Qu", "Wei 
Wang", "Rui Guo", "Bulent Ayhan", "Chiman Kwan", "Steven Vance", "Hairong Qi" ], "title": "Hyperspectral anomaly detection through spectral unmixing and dictionary-based low-rank decomposition", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2018 }, { "authors": [ "Chalapathy Raghavendra", "Chawla Sanjay" ], "title": "Deep learning for anomaly detection: A survey", "venue": "arXiv preprint arXiv:1901.03407,", "year": 2019 }, { "authors": [ "Thomas Schlegl", "Philipp Seebck", "Sebastian M. Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In Information Processing in Medical Imaging(IPMI),", "year": 2017 }, { "authors": [ "Jastrzebski Stanislaw", "Szymczak Maciej", "Fort Stanislav", "Arpit Devansh", "Tabor Jacek", "Cho Kyunghyun", "Geras Krzysztof" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Bing Tu", "Xianchang Yang", "Nanying Li", "Chengle Zhou", "Danbing He" ], "title": "Hyperspectral anomaly detection via density peak clustering", "venue": "Pattern Recognition Letters,", "year": 2020 }, { "authors": [ "Hao Wu", "Saurabh Prasad" ], "title": "Semi-supervised deep learning using pseudo labels for hyperspectral image classification", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Naoto Yokoya", "Takehisa Yairi", "Akira Iwasaki" ], "title": "Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2012 }, { "authors": [ "Qiangqiang Yuan", "Qiang Zhang", "Jie Li", "Huanfeng Shen", "Liangpei Zhang" ], "title": "Hyperspectral image denoising employing a spatial-spectral deep residual convolutional neural network", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2019 }, { "authors": [ "Chuxu Zhang", "Dongjin Song", "Yuncong Chen", "Xinyang Feng", "Cristian Lumezanu", "Wei Cheng", "Jingchao Ni", "Bo Zong", "Haifeng Chen", "Nitesh V. Chawla" ], "title": "A deep neural network for unsupervised anomaly detection and diagnosis in multivariate time series data", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Bo Zong", "Qi Song", "Martin Renqiang Min", "Wei Cheng", "Cristian Lumezanu", "Daeki Cho", "Haifeng Chen" ], "title": "Deep autoencoding gaussian mixture model for unsupervised anomaly detection", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hyperspectral image (HSI) appears as a three-dimensional (3D) data cube, two dimensions of which show the spatial information of materials, and the other reveals hundreds of contiguous bands to perceive each scene (Yokoya et al., 2012). Among a wealth of HSIs interpretation techniques in practical situations, anomaly detection has many potential applications in video surveillance, activity recognition, and scene understanding, etc (Lanaras et al., 2015; Eyal et al., 2019; Tu et al., 2020). However, due to the insufficient prior information, inaccurate labels, complex scenes, and unbalanced samples, it is high-cost and sometimes infeasible to accurately detect different types of anomalies in HSI. Consequently, hyperspectral anomaly detection without any priors is a challenging task and is of great importance.\nDeep learning-based methods have powerful and unique advantages in modeling and characterizing complex data (Stanislaw et al., 2020). A lot of research has appeared in the field of anomaly detection, which can be roughly divided into three categories: supervised, semi-supervised, and unsupervised. However, due to the difficulty of annotation and collection of label training, supervised methods are rarely applied (Grnitz et al., 2013; Raghavendra & Sanjay, 2019). Semi-supervised work aims to break the dilemma between the number of samples and detection performance, but it still requires pure background training samples (Blanchar et al., 2010; Wu & Prasad, 2018). On the one hand, unsupervised learning based hyperspectral anomaly detection has become a new trend (Schlegl et al., 2017; Zhang et al., 2019). On the other hand, the detection performance is limited due to the lack of prior knowledge. Therefore, we propose an MCM-aware strategy to adaptively obtain reliable and stable pseudo-labeled prior information to alleviate these problems.\nConcretely, motivated by the observations mentioned above, we estimate the priors and model the background with multi-scale covariance matrices as the necessary preparation fed into the MTGAN model, which generates discriminative representations with second-order statistics in covariance\npooling and is conducive to exploiting the intrinsic spatial-spectral information of HSI. The progress of MCM-aware priors construction strategy is illustrated in Figure 1.\nFurthermore, though GAN performs well in anomaly detection tasks according to the literature, the real objective of GAN is supposed to capture more separable latent features between background and anomalies instead of minimizing the pixel-wise reconstruction error (Gong et al., 2020). The gradient vanishing problem, which is partly caused by the hypothesize that the discriminator as a classifier with the sigmoid cross-entropy loss function in regular GANs, is not conducive to the generation of background and discrimination of anomalies.\nHence, to facilitate the training stability and alleviate the gradient-vanishing problem, we present twin-least-square loss to perform background modeling in feature and image domains. Accordingly, we can solve the problem of gradient vanishing and enhance the representation directly aiming at the reconstruction of each pixel.\nIn light of the difficulties of the separability between the anomaly and background, we also impose an anomaly rejection loss to avoid anomalies contamination in background estimation. 
In this way, the network can reconstruct resembled background dictionaries, but dramatically changed anomalies, thereby increasing the degree of difference between them and endow better detection accuracy.\nTo verify the effectiveness of the proposed method, we implement evaluations on five public HSI data sets. In MTGAN, the average AUC scores of (Pd, Pf ) and (Pf , τ) are 0.99809 and 0.00518, respectively, which outperform previous state-of-the-art methods. To summary, our contributions are mainly three-fold:\n• To solve the problem of insufficient samples that previous methods suffer from, we propose an MCM-aware strategy to reliably and adaptively generate prior dictionaries. In specific, we calculate a series of multi-scale covariance matrices, taking advantage of the second-order statistics to naturally model the distribution with integrated spectral and spatial information.\n• The twin-least-square loss is introduced into both the feature and image domains to overcome the gradient vanishing problem. Meanwhile, the generative ability and training stability can be improved, which can fit the characteristics of high-dimension and complexity of HSI data.\n• To further reduce the false alarm rate, we design a novel anomaly rejection loss to enlarge the distribution diversity between background regions and anomalies, aiming to distinguish between background and anomalies. Experimental results illustrate that the AUC score of (Pf , τ) in MTGAN is one order of magnitude lower than other state-of-the-art methods." }, { "heading": "2 RELATED WORK", "text": "For traditional methods, the RX method assumes that each spectral channel is Gaussian-distributed, and the pixel is L-dimensional multi-variate Gaussian distributed (Guo et al., 2014; Luo et al., 2019; Ahmed et al., 2020). As a non-RX based methods, the ADLR method obtains abundance vectors by spectral decomposition and constructs a dictionary based on the mean value clustering of abundance vectors (Qu et al., 2018). The PAB-DC model imposed with low-rank and sparse constraints considers the homogeneity of background and the sparsity of anomalies to construct the dictionaries (Huyan et al., 2019). The emerging typical algorithm AED removes the background mainly by attribute filtering and difference operation. Additionally, the LSDM-MoG method combines the mixed noise models and low-rank background to characterize complex distributions more accurately (Li et al., 2020). However, these conventional methods are based on single-scale Gaussian assumption and cannot represent complex and high-dimensional data sets well, leading to the exploration of deep learning-based methods (Ben et al., 2014).\nIn deep auto-encoding Gaussian mixture model (DAGMM) (Zong et al., 2018), an autoencoder (AE) is introduced to the model to generate a low-dimensional representation and reconstruction error for each input data point as the input of the Gaussian Mixture Model (GMM). GAN has attracted a lot of attention for providing a generative model to minimize the distance between the training data distribution and the generative model samples without explicitly defining a parametric function (Goodfellow et al., 2014; Yuan et al., 2019; Gu et al., 2020). A novel single-objective generative adversarial active learning (SO-GAAL) method for outlier detection considers the anomaly detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution (Liu et al., 2019). 
Nevertheless, these deep learning-based methods cannot achieve a balance between good performance and limited prior information. What’s more, the network structure of these methods is not specially designed for hyperspectral anomaly detection. Therefore, we propose MTGAN for hyperspectral anomaly detection for the first time, to approximate the performance of supervised methods while relaxing the limitation on training samples." }, { "heading": "3 PROBLEM STATEMENT AND FRAMEWORK", "text": "In this work, we elaborate on MTGAN for hyperspectral anomaly detection, as shown in Figure 2. The three key components of the framework are: 1) the MCM module for background dictionary construction; 2) the twin-least-square GAN module for background reconstruction; 3) joint learning with the added anomaly rejection loss. The modules are cascaded together for hyperspectral anomaly detection." }, { "heading": "3.1 CONSTRUCTING THE OVERALL MODEL", "text": "We denote the HSI as H ∈ Rh×w×d, where d is the number of spectral bands, and h and w represent the spatial size of the data. For convenience, as the input of the network, we transform the 3-D cube H into a 2-D matrix H = {hi}ni=1 ∈ Rd×n, where each column of H is a spectral pixel vector in the HSI and n = h × w is the number of pixels. The HSI data matrix is decomposed into two components: background and anomaly. We denote the background and anomaly as Y = [y1, y2, ..., ynB] and X = [x1, x2, ..., xnA], with nA + nB = n, where yi and xi represent the ith vectors.\nBased on the defined background and anomaly dictionaries, we formulate the anomaly detection method as\nL(Y, X) = LTLS(Y) + Lauto(Y, Ŷ) + Lenlarge(Ŷ, X) = LLS1(Z) + LLS2(Y, Ŷ) + ‖Y − Ŷ‖ − α‖Ŷ − X‖, (1)\ns.t. Z = Enc(Y; arg(G)), Ŷ = Dec(Z; arg(D)), α ∼ N(0, I),\nwhere ‖Y − Ŷ‖ denotes the reconstruction error of the basic AE network. The two least-square losses added for the two discriminators are denoted by LLS1 and LLS2, respectively, and together make up the twin-least-square loss LTLS. Lauto and Lenlarge represent the spectral reconstruction loss and the separability loss between background and anomaly, respectively. Enc and Dec represent the encoder and decoder, respectively, and Z is the output of the encoder. The encoding and decoding process can be expressed as\nŶ = σ(WWᵀY + B), (2)\nwhere σ(·) represents the activation function, Ŷ is the output of the network, W is the weight of the encoder, and B is the bias of the whole network." }, { "heading": "3.2 MCM-AWARE PRIOR FOR BACKGROUND CONSTRUCTION", "text": "Inspired by estimating the Mahalanobis distance between the test pixels and the constructed pixels at one scale, we generate pseudo priors for GAN training through multi-scale covariance map construction. Thus, we can meet the requirement of sufficient prior information and take advantage of the spatial and spectral information of HSI. The whole generation process via pseudo-labeling can be expressed as\n(Y, X) = fMCM(H), (3)\nwhere fMCM(·) represents the nonlinear learning process of the MCM strategy, and Y and X denote the background and anomaly dictionaries, respectively." }, { "heading": "3.2.1 MULTI-SCALE LOCALIZING", "text": "For each central pixel, we first perform multi-scale localizing based on the Euclidean distance with a classical classifier, i.e., K-nearest neighbors (KNN), to obtain the local pixel cubes at different scales. Then, we generate a series of gradually increasing cubes of different scales. For each of the cubes, we flatten it to a vector; after that, the covariance matrix is calculated between the vectors, as formalized in the next subsection and sketched below."
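A minimal sketch of this multi-scale covariance construction (Sections 3.2.1–3.2.2) is given below. It simplifies the KNN-based localizing to square windows around the central pixel, and the scale list is illustrative rather than the paper's tuned setting.

```python
import numpy as np

def multiscale_covariance_maps(H, i, j, scales=(3, 5, 7, 9, 11)):
    """H: (h, w, L) hyperspectral cube. For each window size R in `scales`,
    gather the R*R neighbors of the centre pixel (i, j) and compute their
    L x L sample covariance (Eq. 4). Borders are handled by truncation."""
    h, w, L = H.shape
    maps = []
    for R in scales:
        r = R // 2
        window = H[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1, :]
        vecs = window.reshape(-1, L)             # each neighbor as an L-vector
        mu = vecs.mean(axis=0, keepdims=True)
        C = (vecs - mu).T @ (vecs - mu) / max(len(vecs) - 1, 1)
        maps.append(C)                           # one L x L map per scale
    return maps
```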
}, { "heading": "3.2.2 GENERATING CO-VARIANCE MAPS", "text": "For the central pixel hi, taking the scale R × R as an example, the covariance map of hi at one fixed scale is extracted as\nCk = 1/(R² − 1) · ∑i=1..R² (hi − µ)(hi − µ)ᵀ ∈ RL×L, (4)\nwhere µ represents the mean of the set of input HSI vectors {hi | i = 1, 2, · · · , R²}, i.e., the central pixel together with its adjacent pixels {hi | i = 2, · · · , R²} in a window of R × R pixels. In addition, M scales Rk, k = 1, · · · , M, are taken into account. The covariance maps at the different scales are denoted by Ck, k = 1, · · · , M, which make up the co-variance pool used to construct the background." }, { "heading": "3.3 CASCADED ARCHITECTURE WITH TWIN-LEAST-SQUARE LOSS", "text": "" }, { "heading": "3.3.1 STABILITY BRANCH", "text": "As mentioned above, as an improved version of the original GAN, we present an architecture based on the twin-least-square loss to solve the gradient vanishing problem and enhance stability. As shown in Figure 2, the MTGAN network consists of two discriminators and a generator. Instead of the cross-entropy in regular GANs, we impose the twin-least-square loss on the two discriminators. The twin-least-square loss can be expressed as\nLLS1 = ½ Ey∼pdata(y)[(DF(y) − 1)²] + ½ Ez∼pz(z)[(DF(G(z)) + 1)²], (5)\nLLS2 = ½ Ey∼pdata(y)[(DR(y) − 1)²] + ½ Eŷ∼pŷ(ŷ)[(DR(G(ŷ)) + 1)²]. (6)\nFrom the binary classification point of view, we introduce the twin-least-square loss to move false samples toward the decision boundary between anomaly and background and to punish those lying far from the decision boundary, even if on the right side. Thus, we perform adversarial training on the two least-square-loss-imposed discriminators DR and DF against the generator, aiming to overcome gradient vanishing and stably match the distribution of the decoded vectors ŷ to the known input data distribution of y." }, { "heading": "3.3.2 SEPARABILITY BRANCH", "text": "As mentioned earlier, we impose the spectral reconstruction loss of the AE using the mean squared error (MSE) to minimize the deviation between the decoded images and the original input image:\nLauto(Y, Ŷ) = ‖Y − Ŷ‖ = ‖Y − σ(WWᵀY + B)‖. (7)\nTo ensure that the learning samples come entirely from the background, we introduce the following distance function based on Lauto, the second term of which is expected to be as large as possible:\nLanorm(Y, Ŷ) = Lauto(Y, Ŷ) + Lenlarge(Ŷ, X) = ‖Y − Ŷ‖ − α‖Ŷ − X‖ = ‖Y − σ(WWᵀY + B)‖ − α‖σ(WWᵀY + B) − X‖. (8)\nLet xi and yi represent the ith components of X and Y, respectively, and let x̄ denote the mean of all the xi in X. When the distance between the reconstructed spectrum vector ŷi and the average spectrum vector x̄ is small, yi is suspected to be an anomaly, and the suppression coefficient α rapidly reduces its contribution. When the distance between x̄ and yi is large, from a statistical point of view, yi belongs to the background dictionary to be estimated, and the value of the suppression coefficient α approaches 1." }, { "heading": "3.3.3 JOINT TRAINING AND DISCRIMINATION", "text": "The twin-least-square loss, spectral reconstruction loss, and anomaly rejection loss are jointly learned with weighting coefficients of 1 by alternating updates of each component as follows:\n• Minimize LLS1 by updating the parameters of DF.\n• Minimize LLS2 by updating the parameters of DR.\n• Minimize Lanorm by updating the parameters of the encoder En and the decoder De.\nSubsequently, we obtain the detection maps by Gaussian statistics-based discrimination. A sketch of one such alternating update is given below."
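One plausible reading of the alternating updates above — with DF acting in the feature (latent) domain and DR in the image domain, per Section 3.3.1 — is sketched in PyTorch below. The module interfaces, the Gaussian latent prior for z, and the batch-level use of x̄ are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mtgan_step(enc, dec, d_feat, d_rec, opt_df, opt_dr, opt_g, y, x_mean, alpha=1.0):
    """One alternating update of Section 3.3.3. `y`: batch of background
    spectra (B, d); `x_mean`: estimated mean anomaly spectrum (d,);
    `alpha`: suppression coefficient from Eq. 8."""
    z = enc(y)
    y_hat = dec(z)

    # Eq. 5: least-squares loss for the feature-domain discriminator D_F,
    # contrasting latent-prior samples against encoded spectra.
    z_prior = torch.randn_like(z)
    loss_df = 0.5 * ((d_feat(z_prior) - 1) ** 2).mean() + \
              0.5 * ((d_feat(z.detach()) + 1) ** 2).mean()
    opt_df.zero_grad()
    loss_df.backward()
    opt_df.step()

    # Eq. 6: least-squares loss for the image-domain discriminator D_R,
    # contrasting real spectra against their reconstructions.
    loss_dr = 0.5 * ((d_rec(y) - 1) ** 2).mean() + \
              0.5 * ((d_rec(y_hat.detach()) + 1) ** 2).mean()
    opt_dr.zero_grad()
    loss_dr.backward()
    opt_dr.step()

    # Eq. 8: reconstruction loss minus the anomaly rejection term,
    # updating only the encoder and decoder.
    loss_g = F.mse_loss(y_hat, y) - alpha * (y_hat - x_mean).norm(dim=-1).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_df.item(), loss_dr.item(), loss_g.item()
```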
}, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Data Sets. HYDICE describes urban scenes in the United States that were captured by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) airborne sensor above the city. A subimage with a spatial size of 80 × 100 is cut out from the entire 307 × 307 original image, which has 162 spectral channels ranging from 400 nm to 2500 nm. Airport-Beach-Urban (ABU) was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Among the airport, beach, and urban scenes, we conduct experiments on two of the urban scenes. The sample images in ABU contain 100 × 100 or 150 × 150 pixels, as do the corresponding reference maps. EI Segundo is captured by the AVIRIS sensor. The scene contains a refinery area, several residential areas, parks, and school areas. It has 250 × 300 pixels in the spatial domain and 224 spectral bands, ranging from 366 to 2496 nm. Grand Island is also acquired by the AVIRIS sensor at the location of Grand Island on the Gulf Coast; it contains 300 × 480 pixels and 224 spectral channels with a wavelength range of 366 to 2496 nm. The pseudo-color images of each data set and their corresponding ground truth maps are shown in Figure 4.\nHyper-parameters. In our experiment, we set the number of training iterations to 5000 and the learning rate to 0.0001. The encoder, decoder, and discriminator networks have three layers each. The number of extracted features is set to 20. With backpropagation, we optimize all parameters using the Adam optimization algorithm." }, { "heading": "4.2 EVALUATION RESULTS", "text": "For quantitative comparison, we can observe from Tables 1 and 2 that for all data sets, the results of the MTGAN method and the comparison methods are close to ideal values.\n\nSpecifically, for the HYDICE data set, though the (Pf , τ) obtained by MTGAN does not reach the order of magnitude of three decimal places, the AUC score of (Pd, Pf ) is 0.99945, which outperforms other methods, demonstrating that MTGAN maintains detection ability well with extremely low missed detection. Compared with the RX and PAB-DC methods, for the ABU-urban1 data set, MTGAN can detect more anomalies, showing better performance. According to the results in Table 1, the AUC score of (Pf , τ) of MTGAN is smaller than that of the other comparison methods for the ABU-urban1 data set, indicating that MTGAN can suppress the background better in these scenes. For Grand Island, the AUC score of (Pd, Pf ) obtained by MTGAN is 0.99991, which is much higher than most typical methods but lower than that of the ADLR algorithm (0.99993). However, the AUC score of (Pf , τ) of MTGAN is 0.00069, which is better than ADLR’s 0.00079. The detection on the ABU-urban2 and EI Segundo data sets also performs well, which indicates that MTGAN achieves a good compromise between the AUC scores of (Pd, Pf ) and (Pf , τ). Generally speaking, the comparison methods may achieve good results for some specific data sets, but MTGAN achieves promising detection results in both the AUC scores of (Pd, Pf ) and (Pf , τ) for all the data sets. 
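For reference, the two AUC scores used throughout — (Pd, Pf) and (Pf, τ) — can be computed from a detection map and a ground-truth mask roughly as in the following standard construction; this is not the authors' evaluation code, and the threshold count is an assumed default.

```python
import numpy as np

def auc_scores(detection_map, truth_mask, n_thresholds=200):
    """(Pd, Pf): detection rate integrated over false-alarm rate (higher is
    better). (Pf, tau): false-alarm rate integrated over the threshold axis
    (lower is better)."""
    d = detection_map.ravel().astype(float)
    t = truth_mask.ravel().astype(bool)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)   # normalize scores to [0, 1]
    taus = np.linspace(0.0, 1.0, n_thresholds)
    pd = np.array([(d[t] > tau).mean() for tau in taus])    # detection prob.
    pf = np.array([(d[~t] > tau).mean() for tau in taus])   # false-alarm prob.
    auc_pd_pf = np.trapz(pd[::-1], pf[::-1])  # reverse so Pf is ascending
    auc_pf_tau = np.trapz(pf, taus)
    return auc_pd_pf, auc_pf_tau
```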
From the ROC curves shown in Figure 3, we can conclude that the MTGAN obtains higher AUC score of (Pd, Pf ) with identical AUC score of (Pf , τ), illustrating a superior performance to other methods." }, { "heading": "4.3 PARAMETER SENSITIVITY ANALYSIS", "text": "There are mainly two parameters to be analyzed: the size of the window and the scales of the MCMs. Due to the pixel-level processing characteristics of the MTGAN and its sensitivity to the size of the\nwindows, we fixed the scales of the MCMs as 5. With the size of the windows varying from 30 to 50, the values differ from each other, and the optimal results are achieved for different data sets. For the HYDICE data set, it achieves the highest AUC score of (Pd, Pf ) when the size is 25. And the value is 15 for the ABU-urban-2 data set. As for the Grand Island and EI Segundo data sets, the optimal size values are 20 and 24, respectively.\nSimilarly, when the scales of the MCM are changed from 2 to 10 at 2 steps, the results of the AUC score of (Pd, Pf ) illustrate different performance. When we set the scales to 6 for the HYDICE data set, the performance of the experimental data set can achieve the best. For the rest of the data sets, we set the scales of the MCM as 5 to make a balance between the performance and the computation costs." }, { "heading": "4.4 ABLATION STUDY", "text": "To better understand the effect of each component on the output detection result in our method, we analyzed the performances under the following four training models: 1) MTGAN without MCM module; 2) MTGAN without twin-least-square loss module; 3) MTGAN without anomaly rejection loss; 4) MTGAN. In Table 3, we have the following observations.\nThe MTGAN achieves higher AUC score of (Pd, Pf ) and lower AUC score of (Pf , τ) than other models. The AUC score of (Pd, Pf ) on average compared to other configurations are improved by about 0.276%, 0.266%, and 0.156%, respectively. And the AUC score of (Pf , τ) on average are optimized by about 10.425%, 6.950%, and 1.351%, respectively. The results indicate the effective-\nness and necessity of MCM, twin-least-square loss, and anomaly rejection loss, which contribute effectively to improve the detection performance." }, { "heading": "4.5 VISUALIZING THE DETECTION MAPS", "text": "For qualitative comparison, as shown in Figure 4, the PAB-DC method almost visually loses all anomalies for the HYDICE data set. Most anomalies are visually mixed with the background in the detection results of AED and LSDM-MoG methods. The RX method can detect almost all anomalies with high intensity, but it cannot suppress background interference well. For the ABUurban1 data set, the MTGAN can well detect the anomalies with the highest intensity and relatively low false alarm rate in urban scenes. The RX, AAE, ADLR, and AED methods can identify the anomalies locations, but cannot retain the shape of them. For ABU-urban2 data set, both AAE and MTGAN can produce excellent detection results with good background suppression visually, but the ADLR gets poor performance. For the Grand Island data set, the RX, LSDM-MoG, and PAB-DC methods can almost completely detect anomalies but they cannot effectively suppress background. For the EI Segundo data set, MTGAN also shows the best detection performance in both quantitative evaluation and visual analysis." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a new MCM-aware twin-least-squares GAN model for hyperspectral anomaly detection. 
Several aspects of our framework deserve emphasis. For the first time, we propose the MCM strategy to construct multi-scale prior information, reusing and embedding covariance maps at multiple scales to adaptively create reliable and stable priors. This addresses the lack of priors via pseudo-labeling and makes full use of spectral and spatial information. To overcome the problem of vanishing gradients and to generate high-quality images, we introduce the twin-least-square loss into the architecture in both the feature and image domains. Finally, enforcing a novel anomaly rejection loss enables the network to establish a pure and discriminative background estimation, separating background and anomalies to a greater extent. Through experiments, we have shown that the MTGAN framework exhibits superior performance in background reconstruction and outperforms the state-of-the-art methods." } ]
2020
null
SP:f0f3694b84631cb0ebb5cd4c3510f6279526a28c
[ "The paper proposes a neural-network architecture for modeling dynamical systems that incorporates prior domain knowledge of the system's dynamics. More specifically, the main contributions are the mechanisms for incorporating such knowledge, in terms of fully or partially known structure (differential equations) of the system, which in turn positively affects the modeling performance. The results from the experimental evaluation (on 3 synthetic and one real-world experiments), in general, show that the proposed Neural Dynamical Systems (NDS), and in particular the ones trained with partial prior knowledge, have better performance than several standard benchmarks (such as NeuralODEs, LSTMs, Sparse Regression etc.)." ]
We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models in various gray-box settings which incorporates prior knowledge in the form of systems of ordinary differential equations. NDS uses neural networks to estimate free parameters of the system, predict residual terms, and numerically integrate over time to predict future states. A key insight is that many real dynamical systems of interest are hard to model because the dynamics may vary across rollouts. We mitigate this problem by taking a trajectory of prior states as the input to NDS and training it to dynamically estimate system parameters using the preceding trajectory. We find that NDS learns dynamics with higher accuracy and fewer samples than a variety of deep learning methods that do not incorporate the prior knowledge and methods from the system identification literature which do. We demonstrate these advantages first on synthetic dynamical systems and then on real data captured from deuterium shots from a nuclear fusion reactor. Finally, we demonstrate that these benefits can be utilized for control in small-scale experiments.
[]
[ { "authors": [ "Ibrahim Ayed", "Emmanuel de Bézenac", "Arthur Pajot", "Julien Brajard", "Patrick Gallinari" ], "title": "Learning Dynamical Systems from Partial Observations. arXiv:1902.11136 [physics], February 2019", "venue": "URL http://arxiv.org/abs/1902.11136", "year": 1902 }, { "authors": [ "S.A. Billings" ], "title": "Identification of nonlinear systems–a survey", "venue": "IEE Proceedings D Control Theory and Applications,", "year": 1980 }, { "authors": [ "M.D. Boyer", "K.G. Erickson", "B.A. Grierson", "D.C. Pace", "J.T. Scoville", "J. Rauch", "B.J. Crowley", "J.R. Ferron", "S.R. Haskey", "D.A. Humphreys", "R. Johnson", "R. Nazikian", "C. Pawley" ], "title": "Feedback control of stored energy and rotation with variable beam energy and perveance on DIII-D", "venue": "Nuclear Fusion,", "year": 2019 }, { "authors": [ "M.D. Boyer", "S. Kaye", "K. Erickson" ], "title": "Real-time capable modeling of neutral beam injection on NSTX-U using neural networks", "venue": "Nuclear Fusion,", "year": 2019 }, { "authors": [ "K.E. Brenan", "S.L. Campbell", "L.R. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. Classics in Applied Mathematics. Society for Industrial", "Applied Mathematics", "January" ], "title": "ISBN 978-0-89871-353-4", "venue": "doi: 10.1137/1.9781611971224. URL https://epubs.siam.org/doi/book/10.1137/1.9781611971224.", "year": 1995 }, { "authors": [ "Steven L. Brunton", "Joshua L. Proctor", "J. Nathan Kutz" ], "title": "Discovering governing equations from data: Sparse identification of nonlinear dynamical systems", "venue": "doi: 10.1073/pnas", "year": 2015 }, { "authors": [ "Eduardo F Camacho", "Carlos Bordons Alba" ], "title": "Model predictive control", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Ian Char", "Youngseog Chung", "Willie Neiswanger", "Kirthevasan Kandasamy", "Andrew Oakleigh Nelson", "Mark Boyer", "Egemen Kolemen", "Jeff Schneider" ], "title": "Offline Contextual Bayesian Optimization", "venue": "pp. 4627–4638,", "year": 2019 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "URL http://arxiv.org/abs/ 1806.07366", "year": 2018 }, { "authors": [ "S. Chen", "S.A. Billings", "P.M. Grant" ], "title": "Non-linear system identification using neural networks", "venue": "International Journal of Control,", "year": 1990 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic Recurrent Neural Networks. arXiv:1909.13334 [cs, stat], September 2019", "venue": "URL http://arxiv.org/abs/ 1909.13334", "year": 1909 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. arXiv:1805.12114 [cs, stat], May 2018", "venue": "URL http://arxiv.org/abs/1805.12114", "year": 2018 }, { "authors": [ "Youngseog Chung", "Ian Char", "Willie Neiswanger", "Kirthevasan Kandasamy", "Andrew Oakleigh Nelson", "Mark D. Boyer", "Egemen Kolemen", "Jeff Schneider" ], "title": "Offline Contextual Bayesian Optimization for Nuclear Fusion", "venue": "URL http: //arxiv.org/abs/2001.01793", "year": 2020 }, { "authors": [ "Thomas F. Coleman", "Yuying Li" ], "title": "An Interior Trust Region Approach for Nonlinear Minimization Subject to Bounds", "venue": "SIAM Journal on Optimization,", "year": 1996 }, { "authors": [ "Miles D. 
Cranmer", "Rui Xu", "Peter Battaglia", "Shirley Ho" ], "title": "Learning Symbolic Physics with Graph Networks. arXiv:1909.05862 [astro-ph, physics:physics, stat], November 2019", "venue": "URL http: //arxiv.org/abs/1909.05862", "year": 1909 }, { "authors": [ "Noel Cressie", "Christopher K. Wikle" ], "title": "Statistics for Spatio-Temporal Data", "venue": null, "year": 2015 }, { "authors": [ "Filipe de Avila Belbute-Peres", "Kevin Smith", "Kelsey Allen", "Josh Tenenbaum", "J. Zico Kolter" ], "title": "End-to-end differentiable physics for learning and control", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Emmanuel de Bezenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge", "venue": "URL http://arxiv.org/abs/1711.07970", "year": 2018 }, { "authors": [ "J.R. Dormand", "P.J. Prince" ], "title": "A family of embedded Runge-Kutta formulae", "venue": "Journal of Computational and Applied Mathematics,", "year": 1980 }, { "authors": [ "J.R. Ferron", "M.L. Walker", "L.L. Lao", "H.E. St John", "D.A. Humphreys", "J.A. Leuer" ], "title": "Real time equilibrium reconstruction for tokamak discharge control", "venue": "Nuclear Fusion,", "year": 1998 }, { "authors": [ "Jan Frøyland", "Knut H. Alfsen" ], "title": "Lyapunov-exponent spectra for the Lorenz model", "venue": "Physical Review A,", "year": 1984 }, { "authors": [ "Alex Graves" ], "title": "Generating Sequences With Recurrent Neural Networks", "venue": "URL https: //arxiv.org/abs/1308.0850v5", "year": 2013 }, { "authors": [ "A. Hammerstein" ], "title": "Nichtlineare Integralgleichungen nebst Anwendungen", "venue": "Acta Mathematica,", "year": 1930 }, { "authors": [ "S.R. Haskey", "B.A. Grierson", "L. Stagner", "C. Chrystal", "A. Ashourvan", "A. Bortolon", "M.D. Boyer", "K.H. Burrell", "C. Collins", "R.J. Groebner", "D.H. Kaplan", "N.A. Pablant" ], "title": "Active spectroscopy measurements of the deuterium temperature, rotation, and density from the core to scrape off layer on the DIII-D tokamak (invited)", "venue": "Review of Scientific Instruments,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to Trust Your Model: ModelBased Policy Optimization. arXiv:1906.08253 [cs, stat], June 2019", "venue": "URL http://arxiv", "year": 1906 }, { "authors": [ "Julian Kates-Harbeck", "Alexey Svyatkovskiy", "William Tang" ], "title": "Predicting disruptive instabilities in controlled fusion plasmas through deep learning", "venue": "Nature,", "year": 2019 }, { "authors": [ "Lennart Ljung" ], "title": "Perspectives on System", "venue": "Identification . pp", "year": 2010 }, { "authors": [ "Lennart Ljung", "Rajiv Singh", "Qinghua Zhang", "Peter Lindskog", "Anatoli Iouditski" ], "title": "Developments in The MathWorks System Identification Toolbox", "venue": "IFAC Proceedings Volumes,", "year": 2009 }, { "authors": [ "Edward N. Lorenz" ], "title": "Deterministic Nonperiodic Flow", "venue": "Journal of the Atmospheric Sciences,", "year": 1963 }, { "authors": [ "Gaurav Manek", "J. Zico Kolter" ], "title": "Learning Stable Deep Dynamics Models", "venue": "URL https://arxiv.org/abs/2001.06116v1", "year": 2020 }, { "authors": [ "O. Meneghini", "S.P. Smith", "L.L. Lao", "O. Izacard", "Q. Ren", "J.M. Park", "J. Candy", "Z. Wang", "C.J. 
Luna", "V.A. Izzo", "B.A. Grierson", "P.B. Snyder", "C. Holland", "J. Penna", "G. Lu", "P. Raum", "A. McCubbin", "D.M. Orlov", "E.A. Belli", "N.M. Ferraro", "R. Prater", "T.H. Osborne", "A.D. Turnbull", "G.M. Staebler" ], "title": "Integrated modeling applications for tokamak experiments with OMFIT", "venue": "Nuclear Fusion,", "year": 2015 }, { "authors": [ "Michael C Mozer", "Jonathan Bachrach" ], "title": "Discovering the structure of a reactive environment by exploration", "venue": "Advances in Neural Information Processing Systems", "year": 1990 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S. Fearing", "Sergey Levine" ], "title": "Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning", "venue": "[cs],", "year": 2017 }, { "authors": [ "Anusha Nagabandi", "Kurt Konoglie", "Sergey Levine", "Vikash Kumar" ], "title": "Deep Dynamics Models for Learning Dexterous Manipulation. arXiv:1909.11652 [cs], September 2019", "venue": "URL http: //arxiv.org/abs/1909.11652", "year": 1909 }, { "authors": [ "Oliver Nelles" ], "title": "Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Lev Semenovich Pontryagin" ], "title": "Mathematical theory of optimal processes", "venue": null, "year": 2018 }, { "authors": [ "Gavin D. Portwood", "Peetak P. Mitra", "Mateus Dias Ribeiro", "Tan Minh Nguyen", "Balasubramanya T. Nadiga", "Juan A. Saenz", "Michael Chertkov", "Animesh Garg", "Anima Anandkumar", "Andreas Dengel", "Richard Baraniuk", "David P. Schmidt" ], "title": "Turbulence forecasting via Neural ODE", "venue": "[physics],", "year": 2019 }, { "authors": [ "Dimitris C. Psichogios", "Lyle H. Ungar" ], "title": "A hybrid neural network-first principles approach to process modeling", "venue": "AIChE Journal,", "year": 1992 }, { "authors": [ "Alireza Rahrooh", "Scott Shepard" ], "title": "Identification of nonlinear systems using NARMAX model", "venue": "Nonlinear Analysis: Theory, Methods & Applications,", "year": 2009 }, { "authors": [ "M. Raissi", "P. Perdikaris", "G.E. Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "R. Rico-Martinez", "J.S. Anderson", "Yannis Kevrekidis" ], "title": "Continuous-time nonlinear signal processing: A neural network based approach for gray box identification", "venue": null, "year": 1994 }, { "authors": [ "Stephane Ross", "J. Andrew Bagnell" ], "title": "Agnostic System Identification for Model-Based Reinforcement Learning", "venue": "URL http://arxiv.org/abs/ 1203.1007", "year": 2012 }, { "authors": [ "Samuel H. Rudy", "J. Nathan Kutz", "Steven L. Brunton" ], "title": "Deep learning of dynamics and signalnoise decomposition with time-stepping constraints", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Wilson John Rugh" ], "title": "Nonlinear system theory", "venue": null, "year": 1981 }, { "authors": [ "C. 
Runge" ], "title": "Ueber die numerische Auflösung von Differentialgleichungen", "venue": "Mathematische Annalen,", "year": 1895 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Victor Bapst", "Kyle Cranmer", "Peter Battaglia" ], "title": "Hamiltonian Graph Networks with ODE Integrators", "venue": "[physics],", "year": 2019 }, { "authors": [ "Johan Schoukens", "Lennart Ljung" ], "title": "Nonlinear System Identification: A User-Oriented Roadmap. arXiv:1902.00683 [cs], February 2019", "venue": "URL http://arxiv.org/abs/1902.00683", "year": 1902 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel", "Timothy Lillicrap", "David Silver" ], "title": "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model", "venue": "URL http://arxiv.org/abs/ 1911.08265", "year": 2019 }, { "authors": [ "B. Sohlberg", "E.W. Jacobsen" ], "title": "GREY BOX MODELLING – BRANCHES AND EXPERIENCES", "venue": "IFAC Proceedings Volumes,", "year": 2008 }, { "authors": [ "Michael L. Thompson", "Mark A. Kramer" ], "title": "Modeling chemical processes using prior knowledge and neural networks", "venue": "AIChE Journal,", "year": 1994 }, { "authors": [ "A. Vahidi", "A. Stefanopoulou", "H. Peng" ], "title": "Recursive least squares with forgetting for online estimation of vehicle mass and road grade: theory and experiments", "venue": "Vehicle System Dynamics,", "year": 2005 }, { "authors": [ "B. Ph. van Milligen", "V. Tribaldos", "J.A. Jiménez" ], "title": "Neural Network Differential Equation and Plasma Equilibrium Solver", "venue": "Physical Review Letters,", "year": 1995 }, { "authors": [ "Tingwu Wang", "Jimmy Ba" ], "title": "Exploring Model-based Planning with Policy Networks. arXiv:1906.08649 [cs, stat], June 2019", "venue": "URL http://arxiv.org/abs/1906.08649", "year": 1906 } ]
[ { "heading": "1 INTRODUCTION", "text": "The use of function approximators for dynamical system modeling has become increasingly common. This has proven quite effective when a substantial amount of real data is available relative to the complexity of the model being learned (Chua et al., 2018; Janner et al., 2019; Chen et al., 1990). These learned models are used for downstream applications such as model-based reinforcement learning (Nagabandi et al., 2017; Ross & Bagnell, 2012) or model-predictive control (MPC) (Wang & Ba, 2019).\nModel-based control techniques are exciting as we may be able to solve new classes of problems with improved controllers. Problems like dextrous robotic manipulation (Nagabandi et al., 2019), game-playing (Schrittwieser et al., 2019), and nuclear fusion are increasingly being approached using model-based reinforcement learning techniques. However, learning a dynamics model using, for example, a deep neural network can require large amounts of data. This is especially problematic when trying to optimize real physical systems, where data collection can be expensive. As an alternative to data-hungry machine learning methods, there is also a long history of fitting models to a system using techniques from system identification, some of which include prior knowledge about the system drawn from human understanding (Nelles, 2013; Ljung et al., 2009; Sohlberg & Jacobsen, 2008). These models, especially in the gray-box setting, are typically data-efficient and often contain interpretable model parameters. However, they are not well suited for the situation where the given prior knowledge is approximate or incomplete in nature. They also do not generally adapt to the situation where trajectories are drawn from a variety of parameter settings at test time. This is an especially crucial point as many systems of interest exhibit path-dependent dynamics, which we aim to recover on the fly.\nIn total, system identification methods are sample efficient but inflexible given changing parameter settings and incomplete or approximate knowledge. Conversely, deep learning methods are more flexible at the cost of many more samples. In this paper, we aim to solve both of these problems by biasing the model class towards our physical model of dynamics. Physical models of dynamics are often given in the form of systems of ordinary differential equations (ODEs), which are ubiquitious and may have free parameters that specialize them to a given physical system. We develop a model that uses neural networks to predict the free parameters of an ODE system from the previous\ntimesteps as well as residual terms added to each component of the system. To train this model, we integrate over the ODE and backpropagate gradients from the prediction error. This particular combination of prior knowledge and deep learning components is effective in quickly learning the dynamics and allows us to adjust system behavior in response to a wide variety of dynamic parameter settings. Even when the dynamical system is partially understood and only a subset of the ODEs are known, we find that our method still enjoys these benefits. We apply our algorithm to learning models in three synthetic settings: a generic model of ballistics, the Lorenz system (Lorenz, 1963), and a generalized cartpole problem, which we use for control as well. 
We also learn a high-level model of plasma dynamics for a fusion tokamak from real data.\nThe contributions of this paper are\n• We introduce Neural Dynamical Systems (NDS), a new class of model for learning dynamics that can incorporate prior knowledge about the system.\n• We show that these models naturally handle the issue of partial or approximate prior knowledge, irregularly spaced data, and system dynamics that change across instantiations, which generalizes the typical system identification setting. We also show that these advantages extend to control settings.\n• We demonstrate this model's effectiveness on a real dynamics problem relevant to nuclear fusion and on synthetic problems where we can compare against a ground truth model." }, { "heading": "2 RELATED WORK", "text": "System Identification and Deep Learning with Structure There is a long tradition of forecasting physical dynamics with either machine learning or techniques based on domain knowledge of the dynamics, especially in the field of system identification, where Ljung (2010), Schoukens & Ljung (2019) and Cressie & Wikle (2015) are good summaries. Often, this space is discussed as a spectrum from a purely prior-knowledge-based system (white-box) to a purely data-driven system (black-box) with several shades of gray in between.\nWhite-box models use prior knowledge to precisely give the relationship between quantities of interest over time, and there is extensive literature on solving them (Brenan et al., 1995). ‘Shades of gray’ may distinguish between levels of prior knowledge or how equations cover subsets of the state space (Ljung, 2010). Other prior work focuses on online parameter estimation (Vahidi et al., 2005), but this relies on an ongoing trajectory through the system and is difficult to use in our setting.\nIn nonlinear black-box settings, there are a variety of techniques used to solve system identification models. Volterra series, a generalization of Taylor series which respects dependency on the past, have been used for system identification (Rugh, 1981). Block models such as the Hammerstein (1930) and Wiener (Billings, 1980) models and their combination can also identify systems. Feedforward and recurrent neural networks have been widely used to model dynamical systems (Chua et al., 2018; Nagabandi et al., 2017; Hochreiter & Schmidhuber, 1997), with additional constraints on stability (Manek & Kolter, 2020), the Hamiltonian (Chen et al., 2019), and many others. Nonlinear autoregressive moving average models with exogenous variables (NARMAX) have also been used widely to model dynamical systems, and this class is a superset of nearly everything else discussed (Brunton et al., 2015; Rahrooh & Shepard, 2009). Broadly, none of these algorithms are well suited to a setting where the dynamic parameters of the system change across rollouts.\nThere have also been several approaches for including physical structure in deep models. Raissi et al. (2019) use automatic partial derivative computation to force a neural network to fit a given ODE or PDE solution. de Avila Belbute-Peres et al. (2018) use a linear complementarity problem to differentiate through 2D physics simulations; however, their method does not generalize to more dimensions or to problems other than mechanics. Cranmer et al. (2019) use graph networks to discover physical laws. Chen et al. (2019), Sanchez-Gonzalez et al. (2019) and Cranmer et al. 
(2020) force the network to respect Hamiltonian and Lagrangian constraints but without specific problem data on the system. Psichogios & Ungar (1992) predict physical parameters for a given ODE model and Rico-Martinez et al. (1994) predict residuals. Thompson & Kramer (1994) similarly build a hybrid parameter prediction function into a dynamical model. These last three works are especially similar to ours, though they use tiny networks, are problem-specific in their setup, and don't take advantage of backpropagation through a numerical ODE solver.\nNeural Ordinary Differential Equations As most numerical ODE solvers are algorithms involving differentiable operations, it has always been possible in principle to backpropagate through the steps of these solvers, dating back to at least Runge (1895). However, since each step of the solver involves calling the derivative function, naive backpropagation incurs an O(n) memory cost, where n is the number of derivative calls made by the solver. Historically, Pontryagin (2018) and, recently, Chen et al. (2018) demonstrated that by computing gradients through the adjoint sensitivity method, the memory complexity of backpropagating through a family of ODE solvers can be reduced to O(1) for a fixed network, as opposed to the naive O(n). However, this work only used generic neural networks as the derivative function and did not consider dynamics modeling. They also provide a PyTorch package which we build on in our work.\nThere has been some work using neural ordinary differential equations to solve physical problems. Portwood et al. (2019) used a fully connected neural ODE with an RNN encoder and decoder to model Navier-Stokes problems. Rudy et al. (2019) used a neural network integrated with a Runge-Kutta method for noise reduction and irregularly sampled data. There has also been work learning the structure of dynamical systems, first with a convolutional warping scheme inspired by advection-diffusion PDEs (de Bezenac et al., 2018), then with a Neural ODE which was forced to respect boundary conditions and a partial observation mechanism (Ayed et al., 2019).\nMachine Learning for Nuclear Fusion As far back as 1995, van Milligen et al. (1995) showed that by approximating the differential operator with a (single-layer, in their case) neural network, one could fit simple cases of the Grad-Shafranov equation for magnetohydrodynamic equilibria. Recent work has shown that plasma dynamics are amenable to neural network prediction. In particular, Kates-Harbeck et al. (2019) used a convolutional and LSTM-based architecture to predict possible plasma disruptions (when a plasma instability grows large and causes a loss of plasma containment and pressure).\nThere has also been work in the field of plasma control: a neural network model of the neutral beam injection for the DIII-D tokamak has been deployed in order to diagnose the effect of controls on shots conducted at the reactor (Boyer et al., 2019b). Additionally, Boyer et al. (2019a) used classic control techniques and a simpler model of the dynamics to develop a controller that allows characteristics of the tokamak plasma to be held at desired levels. Others have used contextual Bayesian optimization to choose single-state controls which direct the plasma to desirable states (Char et al., 2019; Chung et al., 2020)." }, { "heading": "3 PROBLEM SETTING", "text": "Typically, a dynamical system ẋ = fφ(x, u, t) with some parameters φ is the conventional model for system identification problems. 
Here, the state is x ∈ X , the control is u ∈ U , and time is t ∈ R. The objective is to predict future states given past states, past and future controls, and prior knowledge of the form of f . We denote by $x(\phi, t, \mathbf{u}, x_0) = x_0 + \int_0^t f_\phi(x, u, t)\,dt$ the state obtained by integrating our dynamical system f to time t.\nWe consider in this work a more general setting and address the problem of prediction and control over a class of dynamical systems, which we define as the set {ẋ = fφ(x, u, t) | φ ∈ Φ}, where Φ is the space of parameters for the dynamical system (e.g. spring constant or terminal velocity). We can generate a trajectory from a class by sampling a φ ∼ P(Φ) for some distribution P and choosing initial conditions and controls. In real data, we can view nature as choosing, but not disclosing, φ. For a particular example j, we sample φ ∼ P(Φ) and x0 ∼ P(X0) and are given controls u indexed as u(t) and input data $\{x(\phi, t_i, \mathbf{u}, x_0)\}_{i=0}^{T}$ during training. At test time, we give a shorter, prefix time series $\{x(\phi, t_i, \mathbf{u}, x_0)\}_{i=0}^{T'}$ but assume access to future controls. Then the prediction objective for a class of systems for N examples at timesteps $\{t_i\}_{i=T'+1}^{T}$ is\n$$\hat{x} = \arg\min_{\hat{x}} \; \mathbb{E}_{\phi \sim P(\Phi),\, x_0 \sim P(X_0)} \left[ \sum_{i=T'+1}^{T} \left\| x(\phi, t_i, \mathbf{u}, x_0) - \hat{x}_{t_i} \right\|_2^2 \right]. \tag{1}$$\nThis objective differs from the traditional one in that, implicitly, φ must be identified for each trajectory from the problem data in order to predict the data generated by fφ.\nSimilarly, the control problem is\n$$u = \arg\min_{u} \; \mathbb{E}_{\phi \sim P(\Phi),\, x_0 \sim P(X_0)} \left[ \int_0^t c(u(t), x(t))\,dt \right], \quad \text{s.t.} \quad x(t) = x_0 + \int_0^t f_\phi(x, u, t)\,dt \tag{2}$$\nfor some cost functional c. We will primarily explore the prediction problem in this setting, but as secondary considerations, we explore robustness to noise, the ability to handle irregularly spaced input data, and the ability to recover the parameters φ which generated the original trajectories. We will also consider the control problem in a simple setting." }, { "heading": "4 METHODS", "text": "We build up the description of our proposed method by first describing the two methods that inspire it: gray-box system identification through optimization (Ljung et al., 2009), and using a Neural ODE (Chen et al., 2018) to predict future states in a dynamical system.\nTo apply gray-box optimization (Ljung et al., 2009) to a dynamical system ẋ = fφ(x, u, t) for problem data $\{x_{t_i}\}_{i=0}^{T'}$, we would use nonlinear least squares (Coleman & Li, 1996) to find\n$$\hat{\phi} = \arg\min_{\hat{\phi}} \sum_i \left\| \int_0^{t_i} f_\phi(x, u(t), t)\,dt - \int_0^{t_i} f_{\hat{\phi}}(x, u(t), t)\,dt \right\|. \tag{3}$$\nThis makes good use of the prior knowledge component of our system (a code sketch of this baseline is given below), but it is prone to compounding errors through integration and does not leverage data that may have come from alternate system parameters.\nA data-driven approach would be to minimize the same objective with a fully connected neural ODE (Chen et al., 2018) hθ in place of f . However, we find that this procedure requires large amounts of training data and doesn't leverage any prior knowledge we might have, though it is flexible to classes of dynamical systems.\nWe define a Neural Dynamical System by combining the advantages of both these methods, first in the setting where we know the full and correct ODEs, and then show how to generalize it to situations where only some ODEs are known or they are approximate. 
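To make the gray-box baseline of Equation 3 concrete, the sketch below fits φ̂ by nonlinear least squares on a single trajectory, instantiated here for the Lorenz system defined later in Equation 5. It is a minimal illustration under our own assumptions, not the MATLAB toolbox used in the experiments, and all names are illustrative.

```python
# A minimal sketch of the gray-box baseline of Eq. (3): fit the ODE parameters
# phi by trust-region reflective nonlinear least squares on the integrated rollout.
from scipy.integrate import odeint
from scipy.optimize import least_squares

def lorenz(state, t, rho, sigma, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def fit_gray_box(ts, traj, phi0=(28.0, 10.0, 8.0 / 3.0)):
    """ts: (T,) observation times; traj: (T, 3) observed states; phi0: initial guess."""
    def residuals(phi):
        pred = odeint(lorenz, traj[0], ts, args=tuple(phi))
        return (pred - traj).ravel()
    # 'trf' is the trust-region reflective method of Coleman & Li (1996).
    return least_squares(residuals, phi0, method="trf").x
```

The NDS defined next replaces this per-trajectory optimization with a single network hθ whose parameter estimates are amortized across many trajectories.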
Specifically, a Neural Dynamical System (NDS) is a class of dynamical systems where a neural network predicts some part of fφ(x, u, t), usually the parameters φ or a term which is added to f .\nNDS with Full System Dynamics Consider a class of dynamical systems as defined in Section 3 where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $\phi \in \mathbb{R}^{d_p}$, $d_h, d_c \in \mathbb{N}$, and let θ, ϑ, τ be trainable neural network weights. Let $h_\theta(x_{t_{1:T'}}, u_{t_{1:T}})$ be a neural net mapping the state history and control sequence to the $d_p$ parameters of the system $\hat{\phi}$ and an embedding $b_h \in \mathbb{R}^{d_h}$. Also let $c_\vartheta(x_t, u_t)$ be a similar network taking a single state and control that outputs an embedding $b_c \in \mathbb{R}^{d_c}$. Finally, let $d_\tau(b_h, b_c)$ be a network which takes the output embeddings of the previous two networks and outputs residual terms $\hat{r}$. Intuitively, we would like to use the observed history to estimate our system parameters, and some combination of the observed history and the current observation to estimate residuals, which influences the design of our model, the neural dynamical system (a visualization of which is shown in Figure 1), written\n$$\dot{x} = \underbrace{g_{\hat{\phi}}(x_t, u_t, t)}_{\text{Prior knowledge}} + \hat{r}, \qquad \hat{\phi}, b_h = \underbrace{h_\theta(x_{t_{1:T'}}, u_{t_{1:T}})}_{\text{History encoder}}, \qquad b_c = \underbrace{c_\vartheta(x_t, u_t)}_{\text{Context encoder}}, \qquad \hat{r} = \underbrace{d_\tau(b_h, b_c)}_{\text{Residual prediction}} \tag{4}$$\nwhere g are domain-specific ODEs which are the input 'domain knowledge' about the system being modeled. Note that if the prior knowledge g is identically zero, this method reduces to the Neural ODE predictions we discussed at the beginning of this section. We also study an ablated model, NDS0, which lacks the residual component r̂ (and hence the context encoder cϑ and residual network dτ ). We note here that the context encoder is intended to potentially correct for model misspecification and noise; in the noiseless case with a perfect model, it may not be necessary. We explore this throughout Section 5.\nExample 1: Lorenz system. To illustrate the full construction, we operate on the example of the Lorenz system: a chaotic dynamical system originally defined to model atmospheric processes (Lorenz, 1963). The system has a 3-dimensional state (which we'll denote by x, y, z), 3 parameters, ρ, σ, and β, and no control input. The system is given by\n$$\dot{x} = \sigma(y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z. \tag{5}$$\nFor a given instantiation of the Lorenz system, we have values of φ = [β, σ, ρ] that are constant across the trajectory. So, we can instantiate hθ to output φ̂ = [β̂, σ̂, ρ̂]. We use the DOPRI5 method (Dormand & Prince, 1980) to integrate the full neural dynamical system in Equation 4, with g given by the system in Equation 5, using the adjoint method of Chen et al. (2018). We use the state $x_{T'}$ as the initial condition for this integration. This gives a sequence $\{\hat{x}_t\}_{t=T'}^{T}$, which we evaluate and supervise with a loss of the form\n$$\mathcal{L}_{\theta,\vartheta,\tau}(\{\hat{x}_{t_i}\}_{i=T'+1}^{T}, \{x_{t_i}\}_{i=T'+1}^{T}) = \sum_{i=T'+1}^{T} \|x_{t_i} - \hat{x}_{t_i}\|_2^2. \tag{6}$$\nBecause of the way we generate our input data, this is equivalent to Equation 1. We assume in our setting with full dynamics that the true dynamics lie in the function class established in Equation 4. By the method in Chen et al. (2018), we can backpropagate gradients through this loss into the parameters of our NDS, and algorithms in the SGD family will then converge to a local minimum of our loss function. (A compressed code sketch of this construction is given below.)\nNDS with Partial System Dynamics Suppose we only had prior knowledge about some of the components of our system and none about others. We can easily accommodate this incomplete information by simply 'zeroing out' the unknown components of the function. 
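As referenced above, the following compressed sketch shows a full-dynamics Lorenz NDS built on the torchdiffeq package of Chen et al. (2018). It is illustrative rather than the released code: the context encoder cϑ is folded away for brevity (the residual head reads only bh), and the layer sizes are assumptions.

```python
# A compressed, illustrative full-dynamics Lorenz NDS (Eqs. 4-6). The history
# encoder h_theta emits (phi_hat, b_h); the prior-knowledge ODE g uses phi_hat;
# a residual head adds corrections. Context encoder omitted for brevity.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # DOPRI5 by default; the paper uses the adjoint method

class LorenzNDS(nn.Module):
    def __init__(self, hist_len=32, hidden=64, d_h=16):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(hist_len * 3, hidden), nn.Softplus(),
                               nn.Linear(hidden, 3 + d_h))   # -> (phi_hat, b_h)
        self.d = nn.Sequential(nn.Linear(d_h, hidden), nn.Softplus(),
                               nn.Linear(hidden, 3))          # -> residual r_hat

    def rhs(self, t, s):
        x, y, z = s[..., 0], s[..., 1], s[..., 2]
        beta, sigma, rho = self.phi[..., 0], self.phi[..., 1], self.phi[..., 2]
        g = torch.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], dim=-1)
        return g + self.d(self.b_h)                           # Eq. (4)

    def forward(self, history, t_future):
        """history: (B, hist_len, 3) past states; t_future: (K,) times to predict."""
        out = self.h(history.flatten(1))                      # history encoder
        self.phi, self.b_h = out[:, :3], out[:, 3:]
        return odeint(self.rhs, history[:, -1], t_future)     # integrate from x_{T'}
```

Supervising the returned rollout with the squared error of Equation 6 and backpropagating through the solver trains all the network weights jointly.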
For partial dynamics, zeroing out the unknown components gives\n$$\left[g_\phi(x, u, t)\right]^{(i)} = \begin{cases} g^{(i)}_\phi(x, u, t) & \text{if } g^{(i)}_\phi \text{ is known}, \\ 0 & \text{otherwise}, \end{cases} \tag{7}$$\nsubstituted into Equation 4. In this setup, the residual term essentially makes the unknown dimensions unstructured Neural ODEs, which can still model time series well (Portwood et al., 2019).\nNDS with Approximate System Dynamics For Neural Dynamical Systems to be useful, they must handle situations where the known model is approximate. This is transparently handled by our formulation: the parameters of the approximate model φ̂ are predicted by $h_\theta(x_{1:T'}, u_{1:T'})$ and the residuals r̂ are predicted by $d_\tau(b_h, b_c)$. This is the same as in the case where we have the correct dynamics, but we remove the assumption of a perfect model.\nExample 2: Nuclear Fusion System. In this paper, we apply this technique to plasma dynamics in a tokamak. In a tokamak, two quantities of interest are the stored energy of the plasma, which we denote E, and its rotational frequency, ω. The neutral beams and microwave heating allow us to add power (P ) and torque (T ) to the plasma. The plasma also dissipates energy and rotational momentum via transport across the boundary of the plasma, radiative cooling, and other mechanisms. While the detailed evolution of these quantities is described by turbulent transport equations, for the purposes of control and design studies, physicists often use reduced, volume-averaged models. The simple linear model (up to variable parameters) used for control development in Boyer et al. (2019a) is used in this work:\n$$\dot{E} = P - \frac{E}{\tau_e}, \qquad \dot{\omega} = \frac{T}{n_i m_i R_0} - \frac{\omega}{\tau_m} \tag{8}$$\nHere, $n_i$ is the ion density, $m_i$ is the ion mass, and $R_0$ is the tokamak major radius (values are in A.4). We use the constant known values for these. $\tau_e$ and $\tau_m$ are the confinement times of the plasma energy and angular momentum, which we treat as variable parameters (because they are!). These are predicted by the neural network in our model. We again use the model in Equation 4 to give us a neural dynamical system which can learn the real dynamics starting from this approximation in Section 5.2." }, { "heading": "5 EXPERIMENTS", "text": "In the following experiments, we aim to show that our methods improve predictions of physical systems by including prior dynamical knowledge. These improvements hold even as we vary the configurations between structured and fluid settings. We show that our models learn from less data and are more accurate, that they handle irregularly spaced data well, and that they learn the appropriate parameters of the prior knowledge systems even though they only ever see trajectories.\nWe use L2 error as our evaluation measure for predictive accuracy, as given by Equation 6. We also evaluate our model's ability to predict the system parameters by computing the L2 error, i.e. $\sum_{i=1}^{n} \|\hat{\phi}_i - \phi_i\|_2^2$. In the settings where we are adding either noise or, as will be defined, 'jitter', we use the percent change in L2 error of the trajectories relative to a noise- or jitter-free baseline for the same experiment. We believe this is the appropriate metric as it abstracts away the original differences in accuracy between the methods and focuses on the effects of the noise or jitter.\nFor synthetic examples, we consider the Lorenz system in Equation 5 and the Ballistic system (A.1.1). We learn over trajectories $\{(x_{t_i}, u_{t_i}, t_i)\}_{i=1}^{T}$ where the $x_{t_i}$ are generated by numerically integrating $f_\phi(x, u, t)$ using scipy's odeint function (Virtanen et al., 2019), with x0 and φ uniformly sampled from X and Φ, and $u_{t_i}$ given. 
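A minimal sketch of this data-generation procedure for the Lorenz class is given below, using the sampling ranges of Appendix A.3; the function names and the trajectory length are illustrative.

```python
# A minimal sketch of synthetic data generation: sample phi ~ P(Phi) and
# x0 ~ P(X0) for each trajectory, then integrate at a fixed 0.5 s timestep
# with scipy's odeint. Ranges follow Appendix A.3.
import numpy as np
from scipy.integrate import odeint

def lorenz(state, t, rho, sigma, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def sample_trajectory(n_steps=48, dt=0.5, rng=None):
    rng = rng or np.random.default_rng()
    phi = (rng.uniform(15, 35),   # rho
           rng.uniform(9, 12),    # sigma
           rng.uniform(1, 3))     # beta
    x0 = rng.uniform(0, 5, size=3)
    ts = np.arange(n_steps) * dt
    return ts, odeint(lorenz, x0, ts, args=phi), phi
```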
Note that φ remains fixed throughout a single trajectory. Details on the ranges of initial conditions and parameters sampled are in the appendix. We evaluate the synthetic experiments on a test set of 20,000 trajectories, generated in the same way as the training data and fixed for a particular random seed. We use a timestep of 0.5 seconds for the synthetic trajectories. On the Ballistic system this allows us to see trajectories that do not reach their peak and those that start to fall. Since the Lyapunov exponent of the Lorenz system is less than 3, in 16 predicted timesteps we get both predictable and unpredictable data (Frøyland & Alfsen, 1984). We believe it is important to look at the progress of the system across this threshold to understand whether the NDS model is robust to chaotic dynamics: since the Lorenz system used for structure is itself chaotic, we want to make sure that the system does not blow up over longer timescales.\nWe compare our models with other choices along the spectrum of structured to flexible models from both machine learning and system identification. The models we evaluate are the Full NDS, a Partial NDS, an NDS0, a fully connected neural network (FC), a fully connected neural ODE (FC NODE), an LSTM, MATLAB's Gray Box Optimization (GBO), and a sparse regression algorithm (SR) due to Brunton et al. (2015). Details on each algorithm are given in Appendix A.2.\nWe can view the Partial NDS and NODE as ablations of the Full NDS model which remove some and all of the prior knowledge, respectively. Each model takes 32 timesteps of state and control information as input and is trained on predictions for the following 16 timesteps. The ODE-based models are integrated from the initial condition given by the last observed state. All neural networks are trained with a learning rate of 3 × 10−3, which was seen to work well across models. We generated a training set of 100,000 trajectories, a test set of 20,000 trajectories, and a validation set of 10,000 trajectories. Training was halted if validation error did not improve for 3 consecutive epochs." }, { "heading": "5.1 SYNTHETIC EXPERIMENTS", "text": "We first present results on a pair of synthetic physical systems where the data is generated in a noiseless and regularly spaced setting. We then add noise (in A.9) and irregular spacing to our data to highlight the performance of these methods as conditions become more challenging.\nSample Complexity and Overall Accuracy In order to test sample complexity in learning or fitting, we generated data for a full training dataset of 100,000 trajectories. We then fit our models on different fractions of the training dataset: 1, 0.25, 0.05, and 0.01. We repeated this process with 5 different random seeds and plotted the L2 error of each model over the dataset fraction seen by the model in Figure 2. The error regions are the standard error over the various seeds.\nAs seen in Figure 4, the learning of Neural Dynamical Systems looks very different from that of the comparison models. We also see that with small amounts of data, the NDS models greatly outperform the Neural ODE, but with the full dataset, their performances get closer. This makes sense as the Neural ODE is likely able to infer the structure of the system with large amounts of data. 
Also, the Fully Connected Neural ODE outperforms the other baselines, which we posit is because it implicitly represents this system as a continuous-time dynamical process that should change in a continuous fashion. From a sample-complexity perspective, it makes sense that the better initialization of NDS should matter most when data is limited. A table of the full results of all experiments can be seen in A.11.\nWe notice that NDS0 slightly outperforms NDS, with higher variance, on these systems. Since it has a perfect model of the system, the residual components aren't necessary for the model to perform well; however, there is then no way the network can 'correct' for a bad parameter estimate.\nCuriously, we see on the ballistic system that the partial NDS outperforms the full NDS in the small-data setting, but they converge to similar performance with slightly more data. A potential explanation is that errors propagate through the dynamical model when the parameters are wrong, while the partial systems naturally dampen errors since, for example, ż only depends on the other components through a neural network. Concretely, this might look like a full NDS predicting the wrong Rayleigh number ρ, which would give errors in y that then propagate to x and z. Conversely, this wouldn't happen as easily in a partial NDS because neural networks intermediate between the components of the system.\nNoise, Irregular Sampling, and Parameter Identification We include further experiments on irregular sampling in Figure 6, where we see that the NDS models are the only ones which do not suffer from varied spacing between timesteps. The Neural ODE model does modestly worse, while the feedforward and recurrent machine learning models struggle. We hypothesize that the initialization of the NDS models helps them generalize to this setting, as the prior knowledge models are necessarily invariant to the timestep.\nWe also include experiments on the performance of our methods under uniform noise in Figure 7. NDS models may be less robust to noise than the machine learning and system identification competitors, since they experience a greater relative change in performance (though they still exhibit small absolute changes in performance), but they still outperform the baselines on an absolute basis. We hypothesize that here the relatively inflexible prior knowledge causes errors to compound in a way that is only somewhat mitigated by the residual terms.\nAdditionally, we tested the subset of models which identify model parameters to see how close they are to the true values in Figure A.6. Here, we see that the NDS models do well but slightly worse than NDS0. We also see that the NDS models benefit from the generalization of the neural network parameter estimation over the GBO, which performs a new optimization for each trajectory." }, { "heading": "5.2 FUSION EXPERIMENTS", "text": "We explored the concept of approximate system dynamics in a simplified fusion system. We predict the state of the tokamak, as summarized by its stored energy and rotational frequency, given the time series of control inputs in the form of injected power and torque. As mentioned in Section 4, we have a simplified physical model given by Equation 8 that approximately gives the dynamics of these quantities and how they relate to one another through time. 
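For reference, a minimal sketch of the right-hand side of this reduced model (Equation 8), with the constants of Appendix A.4, is shown below; in the Fusion NDS the confinement times are predicted by the network rather than passed as arguments.

```python
# A minimal sketch of the reduced fusion model of Eq. (8). Constants follow
# Appendix A.4; tau_e and tau_m are plain arguments here, whereas the Fusion
# NDS predicts them from the observed history.
N_I = 5e19          # ion density (deuterium ions per m^3)
M_I = 3.3436e-27    # deuterium ion mass (kg)
R_0 = 1.67          # tokamak major radius (m)

def fusion_rhs(state, t, P, T, tau_e, tau_m):
    """state = (E, omega); P is injected power, T is injected torque."""
    E, omega = state
    return [P - E / tau_e, T / (N_I * M_I * R_0) - omega / tau_m]
```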
Though there is a lot of remaining work to apply this model in a real experiment, approaches merging theoretical models with data to make useful predictions can be embedded into useful controller designs and advance the state of fusion research.\nOur full dataset consisted of 17,686 trajectories, which we randomly partitioned into 1,000 as a test set and 16,686 as a training set.1 The data are measured from the DIII-D tokamak via magnetic reconstruction (Ferron et al., 1998) and charge-exchange recombination spectroscopy (Haskey et al., 2018). Similar to our synthetic experiments, we cut each trajectory into overlapping 48-timestep sections and train on 32 timesteps to predict 16 timesteps. We compare with the same models as in the previous section, but using our Fusion Neural Dynamical System as described in Equation 4, with g given by Equation 8. As we discussed above, the dynamics in this equation are approximate. To illustrate this, we have included the accuracy of the naive dynamics with no learning on our data, with fixed confinement times τe = τm = 0.1 s, as the Nominal Fusion Model in Figure 3. We use a larger fully connected network with 6 layers of 512 hidden nodes each to attempt to capture the added complexity of the problem.\nSample Complexity and Overall Accuracy When comparing our NDS models, the machine learning baselines, the system ID baselines, and a nominal model from Boyer et al. (2019b), we see that the Fusion NDS model performs best by a large margin. Although the fully connected neural ODE performs competitively, it fails to reach the same performance. We speculate that the dynamical model helps with generalization, whereas the fully connected network may overfit the training data and fail to reach good performance on the test set. Here the NDS0 is unable to perform well compared to the NDS, as the approximate dynamics mean that the model error is somewhat catastrophic for predictions. We see, however, that the NDS0 outperforms the Nominal Physics Model, as it is able to estimate the parameters for each example rather than fixing the parameter values for the whole dataset.\nWe see these results as highly encouraging and will continue exploring uses of NDS in fusion applications.\n1Data is loaded and partially processed within the OMFIT framework (Meneghini et al., 2015). We used the "SIGNAL PROCESSING" module, which has recently been developed for this task and is publicly available on the "profile prediction data processing" branch of the OMFIT source code. Details of the preprocessing are in the Appendix." }, { "heading": "5.3 CONTROL EXPERIMENT", "text": "We also explored the use of these models for control purposes using model-predictive control (Camacho & Alba, 2013). For this purpose, we modified the Cartpole problem from Brockman et al. (2016) so that the masses of the cart and pole, as well as the pole length, vary across rollouts, as specified in Appendix A.1.2. Typically, a 'solved' cartpole environment would imply a consistent performance of 200 from a control algorithm. However, there are three factors that make this problem more difficult. First, to allow each algorithm to identify the system before making control decisions, we begin each rollout with 8 random actions. The task never fails during these random actions, but it would certainly fail soon after if they continued. Second, the randomly sampled parameters per rollout make the actual control problem more difficult, as the environment responds less consistently to control. 
For example, MPC using the typical Cartpole environment as a model results in rewards of approximately 37. Third, all training data for these algorithms uses random actions with no exploration, which has been seen to degrade the performance of most model-based RL or control algorithms (Mozer & Bachrach, 1990).\nWe then trained dynamics models for each of our comparison algorithms on datasets of trajectories collected in this 'EvilCartpole' environment with random actions. At that point, we rolled out trajectories on our EvilCartpole environment using MPC with random-shooting control sequences, with 1,000 samples and a horizon of 10 timesteps (a minimal sketch of this planner is given below, after the cartpole system description). The uncertainties are standard errors over 5 separately trained models.\nAs shown in Table 1, the NDS algorithms outperform all baselines on the cartpole task for both the modeling and control objectives. We see that all algorithms degrade in performance as the amount of data is limited. We see in Table 6 that with larger amounts of data the Fully Connected and Neural ODE models perform as well as the NDS models. We hypothesize that this is because the cartpole dynamics are ultimately not that complicated, and with sufficient data unstructured machine learning algorithms can learn the appropriate dynamics to reach a modestly performing controller as well as NDS can." }, { "heading": "6 CONCLUSION", "text": "In conclusion, we give a framework that merges theoretical dynamical system models with deep learning by backpropagating through a numerical ODE solver. This framework succeeds even when there is only a partial or approximate model of the system. We show an empirical reduction in sample complexity and an increase in accuracy on two synthetic systems and on a real nuclear fusion dataset. In the future, we wish to expand upon our work to make more sophisticated models in the nuclear fusion setting as we move toward practical use. We also hope to explore applications of this framework in other areas that have ODE-based models of systems." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL DYNAMICAL SYSTEMS USED", "text": "" }, { "heading": "A.1.1 BALLISTIC SYSTEM", "text": "We also predict trajectories for ballistics: an object is shot out of a cannon in the presence of air resistance. It has a mass and a drag coefficient and follows a nearly parabolic trajectory. This system has a two-dimensional state space (altitude y and horizontal range x) and two parameters (mass and drag coefficient), which we reduce down to one: the terminal velocity vt. It is a second-order system of differential equations which we reduce to first order in the standard way.\nThe system is given by\n$$\dot{x} = \dot{x}, \qquad \ddot{x} = -\frac{g\,\dot{x}}{v_t}, \qquad \dot{y} = \dot{y}, \qquad \ddot{y} = -g\left(1 + \frac{\dot{y}}{v_t}\right) \tag{9}$$\n(here, g is the constant of gravitational acceleration)." }, { "heading": "A.1.2 CARTPOLE SYSTEM", "text": "In Section 5.3 we discuss experiments on a modified Cartpole system with randomly sampled parameters. Here we give a full delineation of that system as defined in Brockman et al. (2016), with our modifications.\nThis system has three parameters, one control, and four state variables. The parameters are l, the length of the pole, mc, the mass of the cart, and mp, the mass of the pole. The control is F , the horizontal force applied to the cart. In the current setup, the control u = F can be one of ±10. 
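As referenced in Section 5.3 above, a minimal sketch of the random-shooting MPC planner follows. It is not the authors' code: the state ordering (x, ẋ, θ, θ̇) and the reward test are taken from this appendix, and the interface of `model` is an assumption.

```python
# A minimal sketch of random-shooting MPC: sample action sequences over the
# horizon, roll each through the learned dynamics model, and execute the first
# action of the best sequence before replanning.
import numpy as np

def random_shooting_mpc(model, state, n_samples=1000, horizon=10, actions=(-10.0, 10.0)):
    """model(state, action) -> next_state; state assumed (x, x_dot, theta, theta_dot)."""
    rng = np.random.default_rng()
    best_return, best_action = -1.0, actions[0]
    for _ in range(n_samples):
        seq = rng.choice(actions, size=horizon)
        s, ret = state, 0.0
        for a in seq:
            s = model(s, a)
            if abs(s[2]) <= np.pi / 15 and abs(s[0]) <= 2.4:  # reward rule of A.1.2
                ret += 1.0
            else:
                break
        if ret > best_return:
            best_return, best_action = ret, seq[0]
    return best_action
```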
The state variables are the lateral position x, the angle of the pole from vertical θ, and their first derivatives, ẋ and θ̇.\nThe system is given by the following equations\n$$\ddot{\theta} = \frac{g \sin\theta + \cos\theta \left( \frac{-F - m_p l \dot{\theta}^2 \sin\theta}{m_c + m_p} \right)}{l \left( \frac{4}{3} - \frac{m_p \cos^2\theta}{m_c + m_p} \right)}, \qquad \ddot{x} = \frac{F + m_p l (\dot{\theta}^2 \sin\theta - \ddot{\theta} \cos\theta)}{m_c + m_p} \tag{10}$$\nwith θ̇ and ẋ evolving trivially as the derivatives of θ and x.\nWe give a reward of +1 for every timestep that |θ| ≤ π/15 and |x| ≤ 2.4. Initial conditions are uniformly sampled from [−0.05, 0.05]4 at test time; at training time, x ∼ U([−2, 2]), ẋ ∼ U([−1, 1]), θ ∼ U([−0.2, 0.2]) (which is slightly wider than the π/15 threshold used at test), and θ̇ ∼ U([−0.1, 0.1]). The parameters are also uniformly sampled at train and test with l ∼ U([0.6, 1.2]), mc ∼ U([0.5, 2]), and mp ∼ U([0.03, 0.2]). For the partial NDS for Cartpole, we remove the equations corresponding to ẋ and ẍ." }, { "heading": "A.2 COMPARISON METHODS", "text": "In our paper, we compared the following methods in our experiments:\n• Full NDS: A Neural Dynamical System with the full system dynamics for the problem being analyzed. The full construction of this model is given by Equation 4. For the functions hθ, cϑ, and dτ , we use fully connected networks with 2 layers, Softplus activations, 64 hidden nodes in each layer, and batch normalization.\n• Partial NDS: A Neural Dynamical System with partial system dynamics for the problem being analyzed. These follow Equation 7 as applied to Equation 4. For the Ballistic system, we only provide equations for ẋ and ẍ, excluding the information about vertical motion from our network. For the Lorenz system, we only provide equations for ẋ and ẏ, excluding information about motion in the z direction. For the Cartpole system, we only provide information about θ̇ and θ̈. These equations were chosen somewhat arbitrarily to illustrate the effectiveness of the partial NDS. We use neural networks similar to those of the Full NDS.\n• NDS0: A Full NDS with the residual terms removed. This serves as an ablation which shows the utility of the residual terms.\n• Fully Connected (FC): A fully connected neural network with 4 hidden layers of 128 nodes, with ReLU activations and batch normalization.\n• Fully Connected Neural ODE (FC NODE): A larger version of the Neural ODE as given in Chen et al. (2018); we use 3 hidden layers with 128 nodes, batch norm, and Softplus activations for ẋ. This can be interpreted as a version of our NDS with no prior knowledge, i.e. g(x) = 0.\n• LSTM: A stacked LSTM with 8 layers, as in Graves (2013). The data is fed in sequentially and we regress the outputs of the LSTM against the true values of the trajectory.\n• Gray Box Optimization (GBO): We use MATLAB's gray-box system identification toolbox (Ljung et al., 2009) along with the prior-knowledge ODEs to fit the parameters φ̂, as an alternative to using neural networks. This algorithm uses trust-region reflective nonlinear least squares with finite differencing (Coleman & Li, 1996) to find the parameter values which minimize the error of the model rollouts over the observed data.\n• Sparse Identification of Nonlinear Systems (SR): We use the method from Brunton et al. (2015) to identify the dynamical systems of interest. This method uses sparse symbolic regression to learn a linear mapping from basis functions of the state xt and control ut to the derivatives ẋt computed by finite differences. 
Our synthetic systems are in the span of the polynomial basis that we used.\nWe note that ReLU activations were chosen for all feedforward and recurrent architectures, while in the Neural-ODE-based architectures we follow the recommendations of Chen et al. (2018) and use Softplus. The sizes and depths of the baselines were chosen after a moderate hyperparameter search." }, { "heading": "A.3 HYPERPARAMETERS", "text": "We trained using Adam with a learning rate of 3 × 10−3 and, when applicable, an ODE relative and absolute tolerance of 10−3. This wide tolerance was chosen to limit training time: with tighter tolerances, experiments took a long time to run, and we were more concerned with sample complexity than with the tightness of the integration. Hyperparameters were predominantly chosen by trial and error.\nOver the course of developing and then evaluating the models, we ran some form of this code 347 times. The typical setup was either a 1080Ti GPU with 6 CPU cores or 7 CPU cores alone, for roughly a day per experiment. We found that most of these trainings were faster on the GPU (roughly a 1.5× speedup), but we weren't religious about using the GPU as we had many CPU cores and many experiments to run in parallel.\nLorenz Initial Conditions and Parameters For our Lorenz system, we sampled ρ ∼ U([15, 35]), σ ∼ U([9, 12]), β ∼ U([1, 3]), x0 ∼ U([0, 5]), y0 ∼ U([0, 5]), and z0 ∼ U([0, 5]).\nBallistic Initial Conditions and Parameters For our Ballistic system, we sampled masses m ∼ U([1, 100]), drag coefficients cd ∼ U([0.4, 3]), x0 ∼ U([−100, 100]), and y0 ∼ U([0, 200]). We then use $v_t = mg/c_d$ to recover the terminal velocity used in our model." }, { "heading": "A.4 FUSION MODEL PARAMETERS", "text": "$n_i$ is the ion density (which we approximate as a constant value of $5 \times 10^{19}$ deuterium ions per m³), $m_i$ is the ion mass (which we know since our dataset contains deuterium shots and the mass of a deuterium ion is $3.3436 \times 10^{-27}$ kg), and $R_0$ is the tokamak major radius of 1.67 m." }, { "heading": "A.5 FUSION DATA PREPROCESSING", "text": "Data is loaded and partially processed within the OMFIT framework (Meneghini et al., 2015). A new module, "SIGNAL PROCESSING", has been developed for this task and is publicly available on the "profile prediction data processing" branch of the OMFIT source code. The rest of the processing is done on Princeton's Traverse computing cluster, and is available in the GitHub source code for this project (https://github.com/jabbate7/plasma-profile-predictor).\nDIII-D shots from 2010 through the 2019 campaign are collected from the MDS+ database. Shots with a pulse length less than 2 s, a normalized beta less than 1, or a non-standard topology are excluded from the start. A variety of non-standard data is also excluded, including the following situations:\n1. during and after a dudtrip trigger (as determined by the pointname "dustripped")\n2. during and after ECH activation (pointname "pech" greater than .01), since ECH is not currently included as an actuator\n3. whenever density feedback is off (as determined by the pointname "dsifbonoff")\n4. during and after non-normal operation of the internal coil, with an algorithm described by Carlos Paz-Soldan\nAll signals are then put on the same 50 ms time base by averaging all signal values available between the current time step and 50 ms prior. If no data is available in the window, the most recent value is carried forward. Only time steps during the "flattop" of the plasma current are included. 
The flattop period is determined by DIII-D's "t ip flat" and "ip flat duration" PTdata pointnames.\nAll profile data is from Zipfit (Meneghini et al., 2015). The profiles are linearly interpolated onto 33 equally spaced points in normalized toroidal flux coordinates, denoted by ρ, where ρ = 0 is the magnetic axis and ρ = 1 is the last closed flux surface." }, { "heading": "A.6 PARAMETER LEARNING WITHOUT EXPLICIT SUPERVISION", "text": "For the experiments in Figure 2, we stored the parameter estimates φ̂ for the NDS and gray-box models and compared them to the true values to see how they perform at identification rather than prediction. None of these models were ever supervised with the true parameters. We see in Figure A.6 that the NDS is better able to estimate the parameter values than the gray-box method for both systems tested. We believe this is because our method is able to leverage many trajectories to infer the parameters, whereas the gray-box method uses only a single trajectory." }, { "heading": "A.7 COMPUTATION OF NOISE", "text": "In our previous experiments, we did not add noise to the trajectories generated by the synthetic systems. We generate the noise added to the trajectories by first sampling a set of 100 trajectories and computing, for the ith component of the state space, the RMS value ci across our 100 trajectory samples. Then we sample noise $\mathcal{N}(0, rc_i)$ from a normal distribution where the variance is ci scaled by our 'relative noise' term r (a short code sketch is given below). We vary r to control the amount of noise added to a system in a way that generalizes across problems and across the components of a problem's state space." }, { "heading": "A.8 ROBUSTNESS TO IRREGULAR TIMESTEPS", "text": "We also explored how these models respond to data sampled at intervals that are not regularly spaced, in the fashion discussed in A.10. In Figure 6, we see that the ODE-based models are able to handle the irregularly spaced data much better than the others. As with noise, we focus on relative performance, but even so, the full NDS does substantially better under large jitter settings. As we have discussed, this makes sense because these models natively handle time. We conjecture that the Full NDS neural networks may learn something slightly more general from this 'domain randomization', given that they are correctly specified models that receive all the information about the differing timesteps. The symbolic regression method fails under jitter, presumably because it relies heavily on finite differencing." }, { "heading": "A.9 ROBUSTNESS TO ADDED NOISE", "text": "We evaluate our experiments on additive noise scaled relative to the system, as discussed in A.7. As the NDS does substantially better than the other models on the synthetic data, we look at the effect of noise relative to the original performance to focus on the properties of the new model. When a large amount of noise is added to the system, NDS's performance degrades faster than that of the other models, though it still outperforms them on an absolute basis. We think the full and partial NDS might be unstable here due to errors propagating through the prior knowledge model. The other models all otherwise perform fairly similarly under added noise." }, { "heading": "A.10 COMPUTATION OF JITTER", "text": "Though, as we will discuss, our fusion data in Section 5.2 has been postprocessed to be regularly spaced, in practice, data on tokamaks and in many other systems come in irregularly and are sometimes missing. 
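Looping back for a moment to the noise procedure of A.7 referenced just above, a minimal sketch is given here (the jitter mechanism of this section is described in the text that follows); the array shapes and names are illustrative.

```python
# A minimal sketch of the relative-noise injection of A.7: compute per-component
# RMS values c_i over 100 clean trajectories, then add zero-mean Gaussian noise
# with variance r * c_i, where r is the relative noise level.
import numpy as np

def add_relative_noise(trajs, r, rng=None):
    """trajs: (N, T, D) clean trajectories; r: relative noise level."""
    rng = rng or np.random.default_rng()
    c = np.sqrt((trajs[:100] ** 2).mean(axis=(0, 1)))        # per-component RMS
    return trajs + rng.normal(0.0, np.sqrt(r * c), trajs.shape)
```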
As both the fully connected neural ODE and NDS models are integrated in continuous time, they can handle arbitrarily spaced data. For the LSTM and Fully Connected models, we concatenated the times of the datapoints to the associated data and fed this into the model. For each batch of training and test data and some value of ‘jitter’, j, we create a new time series $\{t_i + j_i\}_{i=1}^{T}$, where $j_i \sim U([-j, j])$. Since our timestep for the synthetic experiments is 0.5s, we try values of j of 0.1s and 0.25s. We then generate the batch of trajectories by integrating our systems to the new timesteps." }, { "heading": "A.11 TABLES OF RESULTS", "text": "" }, { "heading": "A.12 ADDITIONAL CONTROL RESULTS", "text": "" } ]
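A minimal NumPy sketch of the noise (A.7) and jitter (A.10) generation described above follows; the array shapes and function names are our own assumptions, under the convention that a batch of trajectories is stored as an (n_traj, n_steps, n_dims) array.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_scales(trajs):
    """Per-component RMS values c_i over a set of sampled trajectories (A.7).
    trajs: array of shape (n_traj, n_steps, n_dims)."""
    return np.sqrt(np.mean(trajs ** 2, axis=(0, 1)))  # shape (n_dims,)

def add_relative_noise(traj, c, r):
    """Add zero-mean Gaussian noise with variance r * c_i to component i."""
    return traj + rng.normal(0.0, np.sqrt(r * c), size=traj.shape)

def jitter_time_grid(t, j):
    """Perturb a regular time grid: t_i + j_i with j_i ~ U([-j, j]) (A.10)."""
    return t + rng.uniform(-j, j, size=t.shape)
```

Jittered batches are then produced by re-integrating the systems at the perturbed timesteps, as described in A.10.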
2020
null
SP:b7e2096e6070edf0d080bcf5113e469563f98dc2
[ "The paper introduces a new type of soft threshold operator in conjunction with appropriate weight regularization that can be used in the context of neural network pruning to obtain sparse, performant networks from pre-trained, dense networks. The main idea is to replace the Heaviside step function that occurs in \"hard threshold\" pruning, which is non-differentiable, by a sigmoid function that can be differentiated and thus enables the efficient training/optimization of relevant pruning parameters. Pruning is hereby performed on a per-layer basis by training a regularized per-layer threshold. " ]
This paper presents a novel differentiable method for unstructured weight pruning of deep neural networks. Our learned-threshold pruning (LTP) method learns per-layer thresholds via gradient descent, unlike conventional methods where they are set as input. Making thresholds trainable also makes LTP computationally efficient, hence scalable to deeper networks. For example, it takes 30 epochs for LTP to prune ResNet50 on ImageNet by a factor of 9.1. This is in contrast to other methods that search for per-layer thresholds via a computationally intensive iterative pruning and fine-tuning process. Additionally, with a novel differentiable L0 regularization, LTP is able to operate effectively on architectures with batch-normalization. This is important since L1 and L2 penalties lose their regularizing effect in networks with batch-normalization. Finally, LTP generates a trail of progressively sparser networks from which the desired pruned network can be picked based on sparsity and performance requirements. These features allow LTP to achieve competitive compression rates on ImageNet networks such as AlexNet (26.4× compression with 79.1% Top-5 accuracy) and ResNet50 (9.1× compression with 92.0% Top-5 accuracy). We also show that LTP effectively prunes modern compact architectures, such as EfficientNet, MobileNetV2 and MixNet.
[]
[ { "authors": [ "Sajid Anwar", "Kyuyeon Hwang", "Wonyong Sung" ], "title": "Structured pruning of deep convolutional neural networks", "venue": "ACM Journal on Emerging Technologies in Computing Systems (JETC),", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Bin Dai", "Chen Zhu", "David Wipf" ], "title": "Compressing neural networks using the variational information bottleneck", "venue": "arXiv preprint arXiv:1802.10399,", "year": 2018 }, { "authors": [ "Emily L. Denton", "Wojciech Zaremba", "Joan Bruna", "Yann LeCun", "Rob Fergus" ], "title": "Exploiting linear structure within convolutional networks for efficient evaluation", "venue": "In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Julian Faraone", "Nicholas J. Fraser", "Michaela Blott", "Philip H.W. Leong" ], "title": "SYQ: learning symmetric quantization for efficient deep neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William J. 
Dally" ], "title": "Learning both weights and connections for efficient neural networks", "venue": "CoRR, abs/1506.02626,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In IEEE International Conference on Computer Vision, ICCV 2017,", "year": 2017 }, { "authors": [ "Yihui He", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Li-Jia Li", "Song Han" ], "title": "AMC: automl for model compression and acceleration on mobile devices", "venue": "In Computer Vision - ECCV 2018 - 15th European Conference,", "year": 2018 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Andrey Ignatov", "Radu Timofte", "William Chou", "Ke Wang", "Max Wu", "Tim Hartley", "Luc Van Gool" ], "title": "Ai benchmark: Running deep neural networks on android smartphones", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Max Jaderberg", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "In British Machine Vision Conference,", "year": 2014 }, { "authors": [ "Yong-Deok Kim", "Eunhyeok Park", "Sungjoo Yoo", "Taelim Choi", "Lu Yang", "Dongjun Shin" ], "title": "Compression of deep convolutional neural networks for fast and low power mobile applications", "venue": "arXiv preprint arXiv:1511.06530,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Commun. 
ACM,", "year": 2017 }, { "authors": [ "Aditya Kusupati", "Vivek Ramanujan", "Raghav Somani", "Mitchell Wortsman", "Prateek Jain", "Sham Kakade", "Ali Farhadi" ], "title": "Soft threshold weight reparameterization for learnable sparsity, 2020", "venue": null, "year": 2020 }, { "authors": [ "Andrey Kuzmin", "Markus Nagel", "Saurabh Pitre", "Sandeep Pendyam", "Tijmen Blankevoort", "Max Welling" ], "title": "Taxonomy and evaluation of structured compression of convolutional neural networks", "venue": null, "year": 1912 }, { "authors": [ "Vadim Lebedev", "Yaroslav Ganin", "Maksim Rakhuba", "Ivan Oseledets", "Victor Lempitsky" ], "title": "Speeding-up convolutional neural networks using fine-tuned cp-decomposition", "venue": "arXiv preprint arXiv:1412.6553,", "year": 2014 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "In Advances in Neural Information Processing Systems 2, [NIPS Conference,", "year": 1989 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Darryl D. Lin", "Sachin S. Talathi", "V. Sreekanth Annapureddy" ], "title": "Fixed point quantization of deep convolutional networks", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Christos Louizos", "Karen Ullrich", "Max Welling" ], "title": "Bayesian compression for deep learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Franco Manessi", "Alessandro Rozza", "Simone Bianco", "Paolo Napoletano", "Raimondo Schettini" ], "title": "Automated pruning for deep neural network compression", "venue": "In 24th International Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Huizi Mao", "Song Han", "Jeff Pool", "Wenshuo Li", "Xingyu Liu", "Yu Wang", "William J. Dally" ], "title": "Exploring the granularity of sparsity in convolutional neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops", "year": 2017 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry P. Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jose Javier Gonzalez Ortiz", "Davis W. Blalock", "John V. Guttag" ], "title": "Standardizing evaluation of neural network pruning", "venue": "In Workshop on AI Systems at SOSP,", "year": 2019 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network", "venue": null, "year": 2020 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Suraj Srinivas", "Akshayvarun Subramanya", "R. 
Venkatesh Babu" ], "title": "Training sparse neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops", "year": 2017 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Mixconv: Mixed depthwise convolutional kernels", "venue": "CoRR, abs/1907.09595,", "year": 2019 }, { "authors": [ "Karen Ullrich", "Edward Meeds", "Max Welling" ], "title": "Soft weight-sharing for neural network compression", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Twan van Laarhoven" ], "title": "L2 regularization versus batch and weight", "venue": "normalization. CoRR,", "year": 2017 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Shaokai Ye", "Xiaoyu Feng", "Tianyun Zhang", "Xiaolong Ma", "Sheng Lin", "Zhengang Li", "Kaidi Xu", "Wujie Wen", "Sijia Liu", "Jian Tang", "Makan Fardad", "Xue Lin", "Yongpan Liu", "Yanzhi Wang" ], "title": "Progressive DNN compression: A key to achieve ultra-high weight pruning and quantization rates using ADMM", "venue": null, "year": 1903 }, { "authors": [ "Tianyun Zhang", "Shaokai Ye", "Kaiqi Zhang", "Jian Tang", "Wujie Wen", "Makan Fardad", "Yanzhi Wang" ], "title": "A systematic DNN weight pruning framework using alternating direction method of multipliers", "venue": "In Computer Vision - ECCV 2018 - 15th European Conference,", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Jianhua Zou", "Kaiming He", "Jian Sun" ], "title": "Accelerating very deep convolutional networks for classification and detection", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "arXiv preprint arxiv:1702.03044,", "year": 2017 }, { "authors": [ "Hattie Zhou", "Janice Lan", "Rosanne Liu", "Jason Yosinski" ], "title": "Deconstructing lottery tickets: Zeros, signs, and the supermask", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have provided state-of-the-art solutions for several challenging tasks in many domains such as computer vision, natural language understanding, and speech processing. With the increasing demand for deploying DNNs on resource-constrained edge devices, it has become even more critical to reduce the memory footprint of neural networks and also to achieve power-efficient inference on these devices. Many methods in model compression Hassibi et al. (1993); LeCun et al. (1989); Han et al. (2015b); Zhang et al. (2018), model quantization Jacob et al. (2018); Lin et al. (2016); Zhou et al. (2017); Faraone et al. (2018) and neural architecture search Sandler et al. (2018); Tan & Le (2019a); Cai et al. (2018); Wu et al. (2019) have been introduced with these goals in mind.\nNeural network compression mainly falls into two categories: structured and unstructured pruning. Structured pruning methods, e.g., He et al. (2017); Li et al. (2017); Zhang et al. (2016); He et al. (2018), change the network’s architecture by removing input channels from convolutional layers or by applying tensor decomposition to the layer weight matrices whereas unstructured pruning methods such as Han et al. (2015b); Frankle & Carbin (2019); Zhang et al. (2018) rely on removing individual weights from the neural network. Although unstructured pruning methods achieve much higher weight sparsity ratio than structured pruning, unstructured is thought to be less hardware friendly because the irregular sparsity is often difficult to exploit for efficient computation Anwar et al. (2017). However, recent advances in AI accelerator design Ignatov et al. (2018) have targeted support for highly efficient sparse matrix multiply-and-accumulate operations. Because of this, it is getting increasingly important to develop state-of-the-art algorithms for unstructured pruning.\nMost unstructured weight pruning methods are based on the assumption that smaller weights do not contribute as much to the model’s performance. These pruning methods iteratively prune the weights that are smaller than a certain threshold and retrain the network to regain the performance lost during pruning. A key challenge in unstructured pruning is to find an optimal setting for these pruning thresholds. Merely setting the same threshold for all layers may not be appropriate because the distribution and ranges of the weights in each layer can be very different. Also, different layers may have varying sensitivities to pruning, depending on their position in the network (initial layers versus final layers) or their type (depth-wise separable versus standard convolutional layers). The\nbest setting of thresholds should consider these layer-wise characteristics. Many methods Zhang et al. (2018); Ye et al. (2019); Manessi et al. (2018) propose a way to search these layer-wise thresholds but become quite computationally expensive for networks with a large number of layers, such as ResNet50 or EfficientNet.\nIn this paper, we propose Learned Threshold Pruning (LTP) to address these challenges. Our proposed method uses separate pruning thresholds for every layer. We make the layer-wise thresholds trainable, allowing the training procedure to find optimal thresholds alongside the layer weights during finetuning. An added benefit of making these thresholds trainable is that it makes LTP fast, and the method converges quickly compared to other iterative methods such as Zhang et al. (2018); Ye et al. (2019). 
LTP also achieves high compression on newer networks Tan & Le (2019a); Sandler et al. (2018); Tan & Le (2019b) with squeeze-excite Hu et al. (2018) and depth-wise convolutional layers Chollet (2017).\nOur key contributions in this work are the following:\n• We propose a gradient-based algorithm for unstructured pruning that introduces a learnable threshold parameter for every layer. This threshold is trained jointly with the layer weights. We use soft-pruning and soft L0 regularization to make this process end-to-end trainable.\n• We show that making layer-wise thresholds trainable makes LTP computationally very efficient compared to other methods that search for per-layer thresholds via an iterative pruning and finetuning process; e.g., LTP pruned ResNet50 to 9.11x in just 18 epochs with 12 additional epochs of fine-tuning, and MixNet-S to 2x in 17 epochs without the need for further finetuning.\n• We demonstrate state-of-the-art compression ratios on newer architectures, i.e., 1.33×, 3× and 2× for MobileNetV2, EfficientNet-B0 and MixNet-S, respectively, which are already optimized for efficient inference, with less than 1% drop in Top-1 accuracy.\n• The proposed method provides a trace of checkpoints with varying pruning ratios and accuracies. Because of this, the user can choose any desired checkpoint based on the sparsity and performance requirements of the target application." }, { "heading": "2 RELATED WORK", "text": "Several methods have been proposed for both structured and unstructured pruning of deep networks. Methods like He et al. (2017); Li et al. (2017) use layer-wise statistics and data to remove input channels from convolutional layers. Other methods apply tensor decompositions to neural network layers: Denton et al. (2014); Jaderberg et al. (2014); Zhang et al. (2016) apply SVD to decompose weight matrices, and Kim et al. (2015); Lebedev et al. (2014) apply Tucker and CP decompositions to compress. An overview of these methods can be found in Kuzmin et al. (2019). These methods are all applied after training a network and need fine-tuning afterwards. Other structured methods change the shape of a neural network while training. Methods like Bayesian Compression Louizos et al. (2017), VIBnets Dai et al. (2018) and L1/L0-regularization Srinivas et al. (2017); Louizos et al. (2018) add trainable gates to each layer to prune while training.\nIn this paper we consider unstructured pruning, i.e., removing individual weights from a network. This type of pruning was already in use in 1989 in the optimal brain damage LeCun et al. (1989) and optimal brain surgeon Hassibi et al. (1993) papers, which removed individual weights in neural networks by use of Hessian information. More recently, Han et al. (2015a) used the method from Han et al. (2015b) as part of their full model compression pipeline, removing weights with small magnitudes and fine-tuning afterwards. This type of method is frequently used for pruning, and has recently been picked up for finding DNN subnetworks that work just as well as their mother network in Frankle & Carbin (2019); Zhou et al. (2019). Another recent application of Han et al. (2015b) is by Renda et al. (2020), where weight and learning-rate rewinding schemes are used to achieve competitive pruning performance. These methods, however, are very computationally expensive, requiring many hundreds of epochs of re-training. Finally, papers such as Molchanov et al. (2017); Ullrich et al.
(2017) apply a variational Bayesian framework to network pruning.\nOther methods that are similar to our work are Zhang et al. (2018) and Ye et al. (2019). These papers apply the alternating direction method of multipliers (ADMM) to pruning, which slowly coaxes a network into pruning weights with an L2-regularization-like term. One problem with these methods is that they are time-intensive; another is that they need manual tweaking of compression rates for each layer. In our method, we get rid of these restrictions and achieve comparable compression results, at a fraction of the computational burden and without any need for setting per-layer pruning ratios manually. Kusupati et al. (2020) and Manessi et al. (2018) learn per-layer thresholds automatically using a soft thresholding operator or a close variant of it. However, they rely on L1 and/or L2 regularization, which, as shown in section 3.2, is ineffective when used in networks with batch-normalization Ioffe & Szegedy (2015). He et al. (2018) use reinforcement learning to set layer-wise prune ratios for structured pruning, whereas we learn the pruning thresholds in the fine-tuning process." }, { "heading": "3 METHOD", "text": "LTP comprises two key ideas, soft-pruning and soft L0 regularization, detailed in sections 3.1 and 3.2, respectively. The full LTP algorithm is then presented in section 3.3." }, { "heading": "3.1 SOFT PRUNING", "text": "The main challenge in learning per-layer thresholds during training is that the pruning operation is not differentiable. More precisely, consider an N-layer DNN where the weights for the l-th convolutional or fully-connected layer are denoted by $\{w_{kl}\}$, and let k index the weights within the layer. In magnitude-based pruning Han et al. (2015b) the relation between layer l’s uncompressed weights and pruned weights is given by:\n$v_{kl} = w_{kl} \times \mathrm{step}(w_{kl}^2 - \tau_l)$, (1)\nwhere $\tau_l$ denotes the layer’s pruning threshold and step(.) denotes the Heaviside step function. We name this scheme hard-pruning. Since the step function is not differentiable, (1) cannot be used to learn thresholds through back-propagation. To get around this problem, during training LTP replaces (1) with soft-pruning\n$v_{kl} \triangleq w_{kl} \cdot \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right)$, (2)\nwhere sigm(.) denotes the sigmoid function and T is a temperature hyper-parameter. As a result of (2) being differentiable, back-propagation can now be applied to learn both the weights and thresholds simultaneously.\nDefining soft-pruning as in (2) has another advantage. Note that if $w_{kl}^2$ is much smaller than $\tau_l$ (i.e., $\tau_l - w_{kl}^2 \gg T$), $w_{kl}$’s soft-pruned version is almost zero and it is pruned away, whereas if it is much larger (i.e., $w_{kl}^2 - \tau_l \gg T$), $w_{kl} \approx v_{kl}$. Weights falling within the transitional region of the sigmoid function (i.e., $|w_{kl}^2 - \tau_l| \sim T$), however, may end up being pruned or kept depending on their contribution to optimizing the loss function. If they are important, the weights are pushed above the threshold through minimization of the classification loss. Otherwise, they are pulled below the threshold through regularization.
This means that although LTP utilizes pruning thresholds similar to previous methods, it is not entirely a magnitude-based pruning method, as it allows the network to keep important weights that were initially small and to remove some of the unimportant weights that were initially large, c.f., Figure 1 (left).\nContinuing with equation (2), it follows that\n$\frac{\partial v_{kl}}{\partial \tau_l} = -\frac{1}{2} \cdot \sigma_T(w_{kl})$ and $\frac{\partial v_{kl}}{\partial w_{kl}} = \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right) + w_{kl} \cdot \sigma_T(w_{kl})$, (3)\nwith\n$\sigma_T(w_{kl}) \triangleq \frac{2 w_{kl}}{T} \cdot \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right) \times \left(1 - \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right)\right)$. (4)\nThe $\sigma_T(.)$ function also appears in subsequent equations and merits some discussion. First note that $\sigma_T(w_{kl})$ as given by (4) is the derivative of $\mathrm{sigm}((w_{kl}^2 - \tau_l)/T)$ with respect to $w_{kl}$. Since the latter approaches the step function (located at $w_{kl}^2 = \tau_l$) in the limit as $T \to 0$, it follows that the former, i.e., $\sigma_T(w_{kl})$, would approach a Dirac delta function, meaning that its value approaches zero everywhere except over the transitional region, where it is inversely proportional to the region’s width, i.e.,\n$\sigma_T(w_{kl}) \sim \frac{1}{T}$, for $|w_{kl}^2 - \tau_l| \sim T$. (5)" }, { "heading": "3.2 SOFT L0 REGULARIZATION", "text": "In the absence of weight regularization, the per-layer thresholds decrease to zero if initialized otherwise. This is because larger thresholds correspond to pruning more weights away, and unless these weights are completely spurious, their removal causes the classification loss, i.e., L, to increase. Loosely speaking,\n$\frac{\partial L}{\partial \tau_l} > 0$, unless $\tau_l$ is small.\nAmong the different weight regularization methods, L0-norm regularization, which targets minimization of the number of non-zero weights, i.e.,\n$L_{0,l} \triangleq \sum_k |w_{kl}|_0$,\nbefits pruning applications the most. This is because it directly quantifies the memory size or FLOPs needed during inference. However, many works use L1 or L2 regularization instead, due to the L0-norm’s lack of differentiability. Notably, Han et al. (2015b) utilizes L1 and L2 regularization to push redundant weights below the pruning thresholds.\nL1 or L2 regularization methods may work well for pruning older architectures such as AlexNet and VGG. However, they fail to properly regularize weights in networks that utilize batch-normalization layers van Laarhoven (2017), Hoffer et al. (2018). This includes virtually all modern architectures such as ResNet, EfficientNet, MobileNet, and MixNet. This is because all weights in a layer preceding a batch-normalization layer can be re-scaled by an arbitrary factor, without any change in batch-norm outputs. This uniform re-scaling prevents L1 or L2 penalties from having their regularizing effect. To fix this issue, van Laarhoven (2017) suggests normalizing the L2-norm of a layer’s weight tensor after each update. This, however, is not desirable when learning pruning thresholds, as the magnitude of individual weights constantly changes as a result of the normalization. Hoffer et al. (2018), on the other hand, suggests using L1 or L∞ batch-normalization instead of the standard scheme. This, again, is not desirable as it does not address current architectures.
Consequently, in this work, we focus on L0 regularization, which does work well with batch-normalization.\nAs was the case with hard-pruning in (1), the challenge in using L0 regularization for learning per-layer pruning thresholds is that it is not differentiable, i.e.,\n$L_{0,l} = \sum_k \mathrm{step}(w_{kl}^2 - \tau_l)$.\nThis motivates our soft L0 norm definition for layer l, i.e.,\n$L_{0,l} \triangleq \sum_k \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right)$, (6)\nwhich is differentiable, and therefore can be used with back-propagation, i.e.,\n$\frac{\partial L_{0,l}}{\partial w_{kl}} = \sigma_T(w_{kl})$ and $\frac{\partial L_{0,l}}{\partial \tau_l} = -\frac{1}{2} \sum_k \frac{\sigma_T(w_{kl})}{w_{kl}}$, (7)\nwhere $\sigma_T(w_{kl})$ is given by (4). Inspecting (7) reveals an important aspect of $L_{0,l}$, namely that only weights falling within the sigmoid transitional region, i.e., $|w_{kl}^2 - \tau_l| \sim T$, may contribute to any change in $L_{0,l}$. This is because other weights are either very small and completely pruned away, or very large and unaffected by pruning. The consequence is that if a significant fraction of these weights, e.g., as a result of a back-propagation update, are moved out of the transitional region, $L_{0,l}$ becomes constant and the pruning process stalls. The condition for preventing the premature termination of pruning, when using $L_{0,l}$, can then be expressed as\n$\eta \cdot \left|\frac{\partial L_T}{\partial w_{kl}}\right| \ll T$, for $|w_{kl}^2 - \tau_l| \sim T$, (8)\nwhere $L_T$ denotes the overall objective function comprising both classification and soft L0 regularization losses, i.e.,\n$L_T = L + \lambda \sum_l L_{0,l}$. (9)\nNote that the left-hand side of Eq. (8) is the displacement of $w_{kl}$ as a result of a weight update. So (8) states that the weight displacement should not be comparable to the transition region’s width ($\sim T$)." }, { "heading": "3.3 LEARNED THRESHOLD PRUNING", "text": "LTP is a magnitude-based pruning method that learns per-layer thresholds while training or finetuning. Specifically, LTP adopts a framework of updating all network weights, but only using their soft-pruned versions in the forward pass. Gradients for thresholds and weights can be computed using\n$\frac{\partial L}{\partial \tau_l} = \sum_k \frac{\partial L}{\partial v_{kl}} \cdot \frac{\partial v_{kl}}{\partial \tau_l}$ and $\frac{\partial L}{\partial w_{kl}} = \frac{\partial L}{\partial v_{kl}} \cdot \frac{\partial v_{kl}}{\partial w_{kl}}$, (10)\nwhere $v_{kl}$ is the soft-pruned version of the weight $w_{kl}$ as defined by (2). LTP uses (9), (10), (3) and (7) to update the per-layer thresholds:\n$\Delta\tau_l = -\eta_{\tau_l}\left(\frac{\partial L}{\partial \tau_l} + \lambda \frac{\partial L_{0,l}}{\partial \tau_l}\right)$. (11)\nUpdating weights needs more care; in particular, minimization of $L_T$ with respect to $w_{kl}$ is subject to the constraint given by (8). Interestingly, $\partial L_T / \partial w_{kl}$ as given by (9), (10), (3) and (7), i.e.,\n$\frac{\partial L_T}{\partial w_{kl}} = \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right) \cdot \frac{\partial L}{\partial v_{kl}} + \left(w_{kl} \cdot \frac{\partial L}{\partial v_{kl}} + \lambda\right) \cdot \sigma_T(w_{kl})$, (12)\nincludes $\sigma_T(w_{kl})$, which as a result of (5) could violate (8) for $T \ll 1$ (a requirement for setting T, c.f., (15) and Table 1). There are two simple solutions to enforce (8). The first approach is to compute $\partial L_T / \partial w_{kl}$ as given by (12), but clamp it based on (8). The second, and arguably simpler, approach is to use\n$\frac{\partial L_T}{\partial w_{kl}} \approx \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right) \cdot \frac{\partial L}{\partial v_{kl}}$. (13)\nTo appreciate the logic behind this approximation, note that for the vast majority of weights that are outside the sigmoid transitional region, equations (13) and (12) give almost identical values. On the other hand, although the values given by (13) and (12), after clamping, do differ for weights within the transitional region, these weights remain there for a very small fraction of the training time (as $\tau_l$ moves past them). This means that they would acquire their correct values through back-propagation once they are out of the transitional region.
Also note that (13) is equivalent to only using the classification loss L (and not $L_{0,l}$) for updating weights, i.e.,\n$\Delta w_{kl} \approx -\eta \frac{\partial L}{\partial w_{kl}}$ and $\frac{\partial v_{kl}}{\partial w_{kl}} \approx \mathrm{sigm}\left(\frac{w_{kl}^2 - \tau_l}{T}\right)$, (14)\ninstead of (3), i.e., treating the sigmoid function in (3) as constant. These adjustments are necessary for preventing the premature termination of LTP. For example, Figure 1 (right) depicts the scatter plot of the pruned model’s $w_{kl}^2$ (y-axis) vs. those of the original one (x-axis) for layer3.2.conv2 of ResNet20 on Cifar100 when (3) is used instead of (14). Note how the formation of a gap around the threshold (the red line) causes the pruning process to terminate prematurely with a small threshold. Finally, after training is finished, LTP uses the learned thresholds to hard-prune the network, which can further be finetuned, without regularization, for improved performance." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 CHOICE OF HYPER-PARAMETERS", "text": "LTP has three main hyper-parameters, T, $\eta_{\tau_l}$, and λ. Table 1 provides the hyper-parameter values to reproduce the results reported in this paper. LTP uses soft-pruning during training to learn per-layer thresholds, but hard-pruning to finally remove redundant weights. Selecting a small enough T ensures that the performances of the soft-pruned and hard-pruned networks are close. Too small a T, on the other hand, is undesirable, as it makes the transitional region of the sigmoid function too narrow. This could possibly terminate pruning prematurely. To set the per-layer $T_l$ in this paper, the following equation was used:\n$T_l = T_0 \times \sigma^2_{|w_{kl}|}$, (15)\nwhere $\sigma^2_{|w_{kl}|}$ denotes the variance of the magnitudes of layer l’s weights. While one could consider starting with a larger $T_0$ and annealing it during training, a fixed value of $T_0$ = 1e-3 provided us with good results for all experiments reported in this paper. One important consideration when choosing $\eta_{\tau_l}/\eta$ is given by equation (10), namely, that the gradient with respect to the pruning threshold gets contributions from the gradients of all weights in the layer. This means that $\partial L/\partial \tau_l$ can potentially be much larger than a typical $\partial L/\partial v_{kl}$, especially if the values of $\partial L/\partial v_{kl}$ are correlated. Therefore, to prevent changes in $\tau_l$ that are orders of magnitude larger than changes in $v_{kl}$, $\eta_{\tau_l}/\eta$ should be small. While Table 1 summarizes the values used for $\eta_{\tau_l}/\eta$ for producing the results reported in this paper, any value between 1e-5 and 1e-7 would work fine. Finally, λ is the primary hyper-parameter determining the sparsity levels achieved by LTP. Our experiments show that to get the best results, λ must be large enough such that the desired sparsity is reached sooner rather than later (this is likely due to some networks’ tendency to overfit if trained for too long); however, too aggressive a λ may be disadvantageous, as the first pruned model may have poor performance without any subsequent recovery." }, { "heading": "4.2 ABLATION STUDY", "text": "Figure 2 provides an ablation study of LTP with respect to various regularization methods for ResNet20 on Cifar100. As the figure shows, in the absence of regularization, LTP only achieves a modest sparsity of 94% after 100 epochs of pruning (upper-left). L2 regularization achieves good sparsity, but it provides poor performance (upper-right). This, as explained in section 3.2, is due to the in-unison and unbounded fall of the layer weights, which deprives the L2 loss of its regularizing effect (lower-right). Finally, the keep ratio plot indicates that L0 regularization provides LTP with a natural exponential pruning schedule, shown in Zhu & Gupta (2017) to be very effective."
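As a concrete illustration of Sections 3.1-3.3, the following PyTorch sketch implements soft pruning (Eq. 2), the soft L0 count (Eq. 6), the detached-sigmoid weight gradient of Eqs. (13)-(14), the per-layer temperature of Eq. (15), and the final hard-pruning step. This is our own unofficial rendering, not the authors' code; the class and variable names are assumptions.

```python
import torch
import torch.nn as nn

class LTPLayer(nn.Module):
    """One layer's weights with a learned pruning threshold tau (sketch)."""

    def __init__(self, weight, T0=1e-3):
        super().__init__()
        self.weight = nn.Parameter(weight.clone())
        self.tau = nn.Parameter(torch.zeros(()))       # learned threshold
        self.T = T0 * weight.abs().var().item()        # Eq. (15)

    def soft_pruned(self):
        # w is detached inside the gate: the sigmoid is treated as a
        # constant w.r.t. w (Eqs. 13-14), while tau still receives its
        # full gradient through the gate (Eqs. 3 and 10).
        gate = torch.sigmoid((self.weight.detach() ** 2 - self.tau) / self.T)
        return self.weight * gate                      # Eq. (2)

    def soft_l0(self):
        # Differentiable weight count, Eq. (6); with w detached it only
        # contributes a gradient to tau, consistent with Eq. (13).
        return torch.sigmoid(
            (self.weight.detach() ** 2 - self.tau) / self.T).sum()

    @torch.no_grad()
    def hard_prune_(self):
        # After training, apply the learned threshold exactly (Eq. 1).
        self.weight.mul_((self.weight ** 2 > self.tau).float())
```

In training, the classification loss is computed on the soft-pruned weights and $\lambda \sum_l L_{0,l}$ is added; giving the thresholds their own, much smaller learning rate mirrors the $\eta_{\tau_l}/\eta \ll 1$ recommendation of Section 4.1.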
}, { "heading": "4.3 IMAGENET PRUNING RESULTS", "text": "In this section we perform an evaluation of LTP on the ImageNet ILSVRC-2015 dataset (Russakovsky et al. (2015)). LTP is used to prune a wide variety of networks comprising AlexNet (Krizhevsky et al. (2017)), ResNet50 (He et al. (2016)), MobileNet-V2 (Sandler et al. (2018)), EfficientNet-B0 (Tan & Le (2019a)) and MixNet-S (Tan & Le (2019b)).\nTable 2 gives LTP’s ResNet50 pruning results on ImageNet, where it matches the performance achieved by Ye et al. (2019), i.e. 9.11× compression with a Top-5 accuracy of 92.0%. LTP also\nmatches Kusupati et al. (2020) performance, considering STR’s higher baseline. We note that the iterative weight rewinding method introduced by Renda et al. (2020) provides 1.6% higher top-1 accuracy at a compression rate of 9.31×, however it requires 900 epochs of re-training compared to LTP’s 30. In fact as Table 3 shows, LTP is very computationally efficient, pruning most networks on ImageNet in less than 100 epochs. This is in sharp contrast to, e.g., Renda et al. (2020), Ye et al. (2019), Han et al. (2015a), He et al. (2018), etc., which typically require a few hundred epochs of training. Finally, Figure 3 provides LTP’s top-1 trace (without finetuning) and error-bars across 10 independent runs on ResNet50. As the figure shows, LTP enjoys a low variability in terms of top-1 accuracy of pruned models across different runs.\nTable 4 provides LTP’s AlexNet performance results on ImageNet, where it achieves a compression rate of 26.4× without any drop in Top-5 accuracy. It is noteworthy that TorchVision’s AlexNet implementation is slightly different from CaffeNet’s. While both implementations have 56M weights in their fully-connected layers, TorchVision model has only 2.5M weights in its convolutional layers compared to 3.75M of CaffeNet’s. As a result of being slimmer, the TorchVision uncompressed model achieves 1.1% lower Top-1 accuracy, and we conjecture can be compressed less.\nTo the best of our knowledge, it is the first time that (unstructured) pruning results for MobileNetV2, EfficientNet-B0 and MixNet-S are reported, c.f., Table 5. This is partially because LTP, in contrast to\nFigure 3: ResNet50 top-1 trace (without finetuning) and error-bars over 10 runs.\nmany other methods such as, e.g., Han et al. (2015b), Zhang et al. (2018) and Ye et al. (2019), does not require preset per-layer compression rates, which is non-trivial given these networks’ large number of layers (50 ∼ 100), parallel branches and novel architectural building blocks such as squeezeand-excite. This, along with LTP’s computational efficiency and batch-normalization compatibility, enables it to be applied to such diverse architectures out-of-the-box. In the absence of pruning results in the literature, Global-Pruning, as described and implemented in Ortiz et al. (2019), was used to produce baselines. In particular, we see that LTP’s 3× compressed MobileNetV2 provides a 9% Top-1 advantage over one compressed by Global-Pruning. Finally, note that LTP can be used to compress MobileNetV2, EfficientNet-B0, and MixNet-S, which are architecturally designed to be efficient, by 1.33×, 3× and 2×, respectively, with less than 1% drop in Top-1 accuracy." }, { "heading": "5 CONCLUSION", "text": "In this work, we introduced Learned Threshold Pruning (LTP), a novel gradient-based algorithm for unstructured pruning of deep networks. 
We proposed a framework with soft L0 regularization and soft-pruning mechanisms to learn the pruning thresholds for each layer in an end-to-end manner. With an extensive set of experiments, we showed that LTP is an out-of-the-box method that achieves remarkable compression rates on traditional (AlexNet, ResNet50) as well as modern (MobileNetV2, EfficientNet, MixNet) architectures. Our experiments also established that LTP gives high compression rates even in the presence of batch-normalization layers. LTP achieves 26.4x compression on AlexNet and 9.1x compression on ResNet50 with less than 1% drop in top-5 accuracy on the ImageNet dataset. We are also the first to report compression results on efficient architectures composed of depth-wise separable convolutions and squeeze-and-excite blocks; e.g., LTP achieves 1.33x, 3x and 2x compression on MobileNetV2, EfficientNet-B0 and MixNet-S, respectively, with less than 1% drop in top-1 accuracy on ImageNet. Additionally, LTP demonstrates fast convergence characteristics; e.g., it prunes ResNet50 in 18 epochs (plus 12 epochs for finetuning) to a compression factor of 9.1." } ]
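To connect the pieces, here is a schematic training loop under the same assumptions as the LTPLayer sketch above (the model's forward pass is assumed to use soft_pruned() in place of the raw weights); it is an unofficial illustration of how the joint weight/threshold updates and the final hard-pruning step fit together.

```python
import torch

def prune_with_ltp(model, ltp_layers, loader, loss_fn, lam,
                   epochs=30, lr=1e-2, lr_tau=1e-7):
    """Schematic LTP fine-tuning loop; hyper-parameter values are illustrative."""
    opt = torch.optim.SGD([
        {"params": [l.weight for l in ltp_layers], "lr": lr},
        {"params": [l.tau for l in ltp_layers], "lr": lr_tau},  # eta_tau << eta
    ])
    trail = []                               # progressively sparser checkpoints
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            # L_T = L + lambda * sum_l L_{0,l}; backprop yields Eq. (11) for tau
            loss = loss_fn(model(x), y) + lam * sum(
                l.soft_l0() for l in ltp_layers)
            loss.backward()
            opt.step()
        trail.append({k: v.detach().clone()
                      for k, v in model.state_dict().items()})
    for l in ltp_layers:                     # finish with exact hard pruning
        l.hard_prune_()
    return trail
```

The returned trail corresponds to the trace of checkpoints mentioned above, from which a network can be picked based on sparsity and accuracy requirements.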
2020
null
SP:6cc0e3b4b6385061150d8e36bcbc022069b475ba
[ "This paper studies the problem of visual imitation learning: given a video of an expert demonstration, take actions to reproduce that same behavior. The proposed method learns a distance metric on videos and uses that distance metric as a reward function for RL. Experiments show that this method does recover reasonable behaviors across a range of simulated robotic tasks. Compared with prior methods, the main contribution of this work is that the distance metric is parametrized and trained as a siamese network.", "This paper presents visual imitation with reinforcement learning (VIRL), an algorithm for learning to imitate expert trajectories based solely on visual observations, and without access to the expert’s actions. The algorithm is similar in form to GAIL and its extensions, learning a reward function which captures the similarity between an observed behavior and the expert's demonstrations, while simultaneously using reinforcement learning to find a policy maximizing this reward, such that the learned policy will replicate the demonstrated behavior as well as possible. A key feature of this method is that the learned reward function is defined by a learned distance metric, which evaluates the similarity between the agent's current trajectory, and the nearest demonstrated expert trajectory." ]
It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agent’s behaviours. Given a desired motion, such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agent’s motion. We experiment with an RNN-based comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have also found that the inclusion of multi-task data and an additional image encoding loss helps enforce temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task – the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y. Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the Twenty-first International Conference on Machine Learning,", "year": 2004 }, { "authors": [ "Brenna D. Argall", "Sonia Chernova", "Manuela Veloso", "Brett Browning" ], "title": "A survey of robot learning from demonstration", "venue": "Robotics and Autonomous Systems,", "year": 2009 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face verification", "venue": "In Computer Vision and Pattern Recognition,", "year": 2005 }, { "authors": [ "D. Dwibedi", "J. Tompson", "C. Lynch", "P. Sermanet" ], "title": "Learning Actionable Representations from Visual Observations", "venue": "ArXiv e-prints,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Tianhe Yu", "Tianhao Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "One-shot visual imitation learning via meta-learning", "venue": "CoRR, abs/1709.04905,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Spatial pyramid pooling in deep convolutional networks for visual recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "International Conference onLearning Representations (ICLR),", "year": 2014 }, { "authors": [ "Yunzhu Li", "Jiaming Song", "Stefano Ermon" ], "title": "Infogail: Interpretable imitation learning from visual demonstrations", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "CoRR, abs/1509.02971,", "year": 2015 }, { "authors": [ "Yuxuan Liu", "Abhishek Gupta", "Pieter Abbeel", "Sergey Levine" ], "title": "Imitation from observation: Learning to imitate behaviors from raw video via context translation", "venue": "CoRR, abs/1707.03374,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Josh Merel", "Yuval Tassa", "Dhruva TB", "Sriram Srinivasan", "Jay Lemmon", "Ziyu Wang", "Greg Wayne", "Nicolas Heess" ], "title": "Learning human behaviors from motion capture by adversarial imitation", "venue": "CoRR, abs/1707.02201,", "year": 2017 }, { "authors": [ "Jonas Mueller", "Aditya Thyagarajan" ], "title": "Siamese recurrent architectures for learning sentence similarity", "venue": "In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Ashvin Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined", "venue": "goals. CoRR,", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Guanghao Luo", "Pulkit Agrawal", "Dian Chen", "Yide Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A. Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "CoRR, abs/1804.08606,", "year": 2018 }, { "authors": [ "Xue Bin Peng", "Michiel van de Panne" ], "title": "Learning locomotion skills using deeprl: Does the choice of action space matter", "venue": "In Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation,", "year": 2017 }, { "authors": [ "Xue Bin Peng", "Glen Berseth", "Kangkang Yin", "Michiel Van De Panne" ], "title": "Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Xue Bin Peng", "Pieter Abbeel", "Sergey Levine", "Michiel van de Panne" ], "title": "Deepmimic: Exampleguided deep reinforcement learning of physics-based character skills", "venue": "ACM Trans. Graph.,", "year": 2013 }, { "authors": [ "Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", "Sergey Levine" ], "title": "Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow", "venue": "arXiv preprint arXiv:1810.00821,", "year": 2018 }, { "authors": [ "Stephane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. 
Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "ArXiv e-prints,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": "CoRR, abs/1502.05477,", "year": 2015 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Jasmine Hsu", "Sergey Levine" ], "title": "Time-contrastive networks: Selfsupervised learning from multi-view observation", "venue": "CoRR, abs/1704.06888,", "year": 2017 }, { "authors": [ "Y. Tassa", "Y. Doron", "A. Muldal", "T. Erez", "Y. Li", "D. de Las Casas", "D. Budden", "A. Abdolmaleki", "J. Merel", "A. Lefrancq", "T. Lillicrap", "M. Riedmiller" ], "title": "DeepMind Control Suite", "venue": null, "year": 2018 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Behavioral Cloning from Observation", "venue": "URL http://arxiv.org/abs/1805.01954", "year": 2018 }, { "authors": [ "Hado Van Hasselt" ], "title": "Reinforcement learning in continuous state and action spaces", "venue": "In Reinforcement Learning,", "year": 2012 }, { "authors": [ "Ziyu Wang", "Josh S Merel", "Scott E Reed", "Nando de Freitas", "Gregory Wayne", "Nicolas Heess" ], "title": "Robust imitation of diverse behaviors", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Tianhe Yu", "Chelsea Finn", "Annie Xie", "Sudeep Dasari", "Tianhao Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "One-shot imitation from observing humans via domain-adaptive meta-learning", "venue": "CoRR, abs/1802.01557,", "year": 2018 }, { "authors": [ "Zhuotun Zhu", "Xinggang Wang", "Song Bai", "Cong Yao", "Xiang Bai" ], "title": "Deep learning representation using autoencoder for 3d shape", "venue": "retrieval. Neurocomputing,", "year": 2016 }, { "authors": [ "Fuzhen Zhuang", "Xiaohu Cheng", "Ping Luo", "Sinno Jialin Pan", "Qing He" ], "title": "Supervised representation learning: Transfer learning with deep autoencoders", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Brian D. Ziebart", "Andrew Maas", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imitation learning and Reinforcement Learning (RL) often intersect when the goal is to imitate with incomplete information, for example, when imitating from motion capture data (mocap) or video. In this case, the agent needs to search for actions that will result in observations similar to the expert. However, formulating a metric that will provide a reasonable distance between the agent and the expert is difficult. Robots and people plan using types of internal and abstract pose representations that can have reasonable distances; however, typically when animals observe others performing tasks, only visual information is available. Using distances in pose-space is ill-suited for imitation as changing some features can result in drastically different visual appearance. In order to understand how to perform tasks from visual observation a mapping/transformation is used which allows for the minimization of distance in appearance. Even with a method to transform observations to a similar pose space, each person has different capabilities. Because of this, people are motivated to learn transformations in space and time where they can reproduce the behaviour to the best of their own ability. How can we learn a representation similar to this latent space?\nAn essential detail of imitating demonstrations is their sequential and causal nature. There is both an ordering and speed in which a demonstration is performed. Most methods require the agent to learn to imitate the temporal and spatial structure at the same time creating a potentially narrow solution space. When the agent becomes desynchronized with the demonstration, the agent will receive a low reward. Consider the case when a robot has learned to stand when its goal is to walk. Standing is spatially close to the demonstration and actions that help the robot stand, as opposed to falling, should be encouraged. How can such latent goals be encouraged?\nIf we consider a phase-based reward function r = R(s, a, φ) where φ indexes the time in the demonstration and s and a is the agent state and action. As the demonstration timing φ, often controlled by the environment, and agent diverge, the agent receives less reward, even if it is visiting states that exist elsewhere in the demonstration. The issue of determining if an agent is displaying outof-phase behaviour can understood as trying to find the φ that would result in the highest reward\nφ′ = maxφR(s, a, φ) and the distance φ′ − φ is an indicator of how far away in time or out-ofphase the agent is. This phase-independent form can be seen as a form of reward shaping. However, this naive description ignores the ordered property of demonstrations. What is needed is a metric that gives reward for behaviour that is in the proper order, independent of phase. This ordering motivates the creation of a recurrent distance metric that is designed to understand the context between two motions. For example, does this motion look like a walk, not, does this motion look precisely like that walk.\nOur proposed Visual Imitation with Reinforcement Learning (VIRL) method uses Recurrent Siamese Networks (RSNs) and has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Advisarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies to find an optimal policy given a demonstration is fundamentally IRL. 
Meanwhile, using positive examples from the expert and negative examples from the policy is similar to the way GAIL trains a discriminator to recognize in-distribution examples. In this work, we build upon these techniques by constructing a method that can learn policies using noisy visual data without action information. Considering the problem’s data sparsity, we include data from other tasks to learn a more robust distance function in the space of visual sequences. We also construct a cost function that takes into account the demonstration ordering as well as pose, using a recurrent Siamese network. Our contribution consists of proposing and exploring these forms of recurrent Siamese networks as a way to address a critical problem in defining reward structure for imitation learning from video for deep RL agents, and of accomplishing this on simulated humanoid robots in the challenging single-shot learning setting." }, { "heading": "2 RELATED WORK", "text": "Learning From Demonstration Searching for good distance functions is an active research area (Abbeel & Ng, 2004; Argall et al., 2009). Given some vector of features, the goal is to find an optimal transformation of these features such that, in this transformed space, there exists a strong contextual meaning. Previous work has explored the area of state-based distance functions, but most rely on pose-based metrics (Ho & Ermon, 2016; Merel et al., 2017) that come from an expert. While there is other work using distance functions, including for example Sermanet et al. (2017); Finn et al. (2017); Liu et al. (2017); Dwibedi et al. (2018), few use image-based inputs and none consider the importance of learning a distance function in time as well as space. In this work, we train recurrent Siamese networks (Chopra et al., 2005) to learn distances between videos.\nPartially Observable Imitation Without Actions For Learning from Demonstration (LfD) problems, the goal is to replicate the behaviour of an expert πE. Unlike the typical setting for humans learning to imitate, LfD often assumes the availability of expert action and observation data. Instead, in this work, we focus on the case where only noisy, actionless observations of the expert are available. Recent work uses Behavioural Cloning (BC) to learn an inverse dynamics model to estimate the actions used via maximum-likelihood estimation (Torabi et al., 2018). Still, BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and the student (Ross et al., 2011). Work in (Merel et al., 2017) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration. In that work, the discriminator’s state input is a customized version of the expert’s state and does not take into account the demonstration’s sequential nature. The work in (Wang et al., 2017) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation. This model uses a Recurrent Neural Network (RNN) to encode a demonstration, but uses expert state and action observations. In our work, the agent is limited to only a partial visual observation as a demonstration. Additional works learn implicit models of distance (Yu et al., 2018; Pathak et al., 2018; Finn et al., 2017; Sermanet et al., 2017); none of these explicitly learns a sequential model that considers the demonstration timing. An additional version of GAIL, infoGAIL (Li et al., 2017), included pixel-based inputs.
Goals can be specified using the latent space from a Variational Auto Encoder (VAE) (Nair et al., 2018). Our work extends this VAE loss using sequence data to train a more temporally consistent latent representation. Recent work (Peng et al., 2018b) has a 2D control example of learning from video data. We show results on more complex 3D tasks and additionally model distance in time. In contrast, here we train a recurrent Siamese model that can be used to enable curriculum learning and allow for computing distances even when the agent and demonstration are out of sync." }, { "heading": "3 PRELIMINARIES", "text": "In this section, we outline the general RL framework and specific formulations for RL that we rely upon when developing our method in Section 4.\nReinforcement Learning Using the RL framework formulated with a Markov Decision Process (MDP): at every time step t, the world (including the agent) exists in a state $s_t \in S$, wherein the agent is able to perform actions $a_t \in A$, sampled from a policy $\pi(a_t|s_t)$, which results in a new state $s_{t+1} \in S$ and reward $r_t$ according to the transition probability function $T(r_t, s_{t+1}|s_t, a_t)$. The policy is optimized to maximize the expected future discounted reward\n$J(\pi) = \mathbb{E}_{r_0, \ldots, r_T}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$, (1)\nwhere T is the max time horizon, and γ is the discount factor, indicating the planning horizon length. Inverse reinforcement learning refers to the problem of extracting a reward function from observed optimal behavior Ng et al. (2000). In contrast, in our approach we learn a distance that works across a collection of behaviours. Further, we do not assume the example data to be optimal. See Appendix 7.2 for further discussion of the connections of our work to inverse reinforcement learning.\nGAIL VIRL is similar to the GAIL framework (Ho & Ermon, 2016), which uses a Generative Adversarial Network (GAN) (Goodfellow et al., 2014), where the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment, the policy and the current state visitation probability induced by the policy, $p_\pi(s)$:\n$\min_{\theta_\pi} \max_{\theta_\phi} \; \mathbb{E}_{\pi_E}[\log(D(s, a|\theta_\phi))] + \mathbb{E}_{\pi_{\theta_\pi}}[\log(1 - D(s, a|\theta_\phi))]$ (2)\nIn this framework, the discriminator provides rewards for the RL policy to optimize, as the probability of a state generated by the policy being in the expert distribution, $r_t = D(s_t, a_t|\theta_\phi)$. While this framework has been shown to work in practice, this dual optimization is often unstable. In the next section we will outline our method for learning a more stable distance-based reward over sequences of images." }, { "heading": "4 CONCEPTUAL DISTANCE-BASED REINFORCEMENT LEARNING", "text": "Our approach is aimed at facilitating imitation learning within an underlying RL formulation over partially observed observations o. Unlike the situation in GAIL, we do not rely on having access to state, s, and action, a, information – our idea is to minimize a function that determines the distance between two sequences of observations: one from the desired example behavior, $o^e$, and another from the current agent behavior, $o^a$. We can then define the reward used within an underlying RL framework in terms of a distance function D, such that\n$r_{\hat{t}}(o^e, o^a) = -D(o^e, o^a, \hat{t}) = \sum_{t=0}^{\hat{t}} -d(o^e_t, o^a_t)$, (3)\nwhere in our setting here $D(o^e, o^a, \hat{t})$ models a distance between video clips from time t = 0 to $\hat{t}$.\nA simple formulation of the approach above can be overly restrictive on sequence timing.
While these distances can serve as RL rewards, they often provide insufficient signal for the policy to learn a good imitative behaviour, especially when the agent only has partial observations of the expert. We can see an example of this in Figure 1a, where starting at t5 the agent (in red) begins to exhibit behaviour that is similar to the expert (in blue), yet the spatial distance indicates that this state is further away from the desired behaviour than at t4.
To encourage the agent to match any part of the expert behaviour, we propose decomposing the distance into two distances, by adding a type of temporal distance shown in green. To compute a time-independent distance we can find the state in the expert sequence that is closest to the agent's current state, argmin_{t̂∈T} d(o^e_{t̂}, o^a_t), and use it in the following distance measure
d^T(o^e, o^a, t̂, t) = . . . + d(o^e_{t̂−1}, o^a_{t−1}) + d(o^e_{t̂}, o^a_t) + d(o^e_{t̂+1}, o^a_{t+1}) + . . . (4)
Using only a single time-aligned state may lead to the agent fixating on matching a single state in the expert demonstration. To avoid this, the neighbouring states given the sequence timing readjustment are used in the distance computation. This framework allows the agent to be rewarded for exhibiting behaviour that matches any part of the expert's demonstration. The better it learns to match parts of the expert demonstration, the more reward it is given. The previous spatial distance will then help the agent learn to sync up its timing with the demonstration. Next we describe how we learn both of these distances.
Distance Metric Learning Many methods can be used to learn a distance function in state-space. Here we use a Siamese network f(o^e, o^a) with a triplet loss over time and task data (Chopra et al., 2005). The triplet loss is used to minimize the distance between two examples that are positive, i.e., very similar or from the same class, and maximize the distance between pairs of examples that are known to be unrelated. For more details see the supplementary document.
Sequence Imitation The distance metric is formulated in a recurrent style where the distance is computed from the current state and conditioned on all previous states, d(o_t|o_{t−1}, . . . , o_0). The loss function is a combination of the distance loss in Eq. 9 and the VAE-based representation learning objectives from Eq. 7 and Eq. 8, detailed in the supplementary material. This combination of sequence-based losses assists in compressing the representation while ensuring intermediate representations are informative. The loss function used to train the distance model on a positive pair of sequences is:
L_VIRL(o_i, o_p, ·) = λ_0 L_SN(o_i, o_p, ·) + λ_1 [(1/T) ∑_{t=0}^{T} L_SN(o_{i,t}, o_{p,t}, ·)] + λ_2 [(1/T) ∑_{t=0}^{T} (L_VAE(o_{i,t}) + L_VAE(o_{p,t}))] + λ_3 [L_AE(o_i) + L_AE(o_p)],
where λ = {0.7, 0.1, 0.1, 0.1}. With a negative pair, the second sequence used in the VAE and AE losses would be the negative sequence.
The Siamese loss function remains the same as in Eq. 9, but the overall learning process evolves to use RNN-based deep networks. A diagram of the full model is shown in Figure 2. This model uses a time-distributed Long Short-Term Memory (LSTM). A single convolutional network conv_a is first used to transform the images of the demonstration o^a to an encoding vector e^a_t. After the sequence of images is distributed through conv_a, there is an encoded sequence ⟨e^a_0, . . . , e^a_t⟩; this sequence is fed into the RNN lstm_a until a final encoding h^a_t is produced. This same process is performed with a copy of the RNN lstm_a, producing h^b_t for the agent observation o^b.
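For concreteness, a condensed PyTorch sketch of this conv+LSTM Siamese encoder and the resulting per-step distance is given below; the layer sizes and the two-layer conv stack are illustrative assumptions, not the exact architecture used.

import torch
import torch.nn as nn

class RecurrentSiameseEncoder(nn.Module):
    # Shared conv encoder followed by an LSTM, applied to both the
    # agent and demonstration videos (sizes are illustrative).
    def __init__(self, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, frames):               # frames: (B, T, 1, 64, 64)
        b, t = frames.shape[:2]
        e = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        e = e.reshape(b, t, -1)               # per-frame embeddings <e_0, ..., e_t>
        h, _ = self.lstm(e)                   # recurrent encodings h_t
        return e, h

def per_step_distance(enc, agent_frames, demo_frames):
    # ||h^a_t - h^b_t|| + ||e^a_t - e^b_t||; the negated (or exp-normalized)
    # distance is what serves as the RL reward.
    e_a, h_a = enc(agent_frames)
    e_b, h_b = enc(demo_frames)
    return (h_a - h_b).norm(dim=-1) + (e_a - e_b).norm(dim=-1)

# Toy usage on random greyscale clips of 8 frames.
enc = RecurrentSiameseEncoder()
d = per_step_distance(enc, torch.randn(2, 8, 1, 64, 64), torch.randn(2, 8, 1, 64, 64))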
The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016), using the sequence outputs of images from the agent and from the demonstration. The reward at each timestep is computed as r_t = ||h^a_t − h^b_t|| + ||e^a_t − e^b_t|| = ||lstm_a(conv_a(s^a_t)) − lstm_a(conv_a(s^b_t))|| + ||conv_a(s^a_t) − conv_a(s^b_t)||. At the beginning of each episode, the RNN's internal state is reset. The policy and value function have 2 hidden layers with 512 and 256 units, respectively. The use of additional VAE-based image and Auto Encoder (AE)-based sequence decoding losses improves the latent space conditioning and representation.
Algorithm 1 Learning Algorithm
Initialize model parameters θ_π and θ_d
Create experience memory D ← {}
while not done do
  for i ∈ {0, . . . , N} do
    τ_i ← {}
    {s_t, o^e_t, o^a_t} ← env.reset()
    for t ∈ {0, . . . , T} do
      a_t ← π(·|s_t, θ_π)
      {s_{t+1}, o^e_{t+1}, o^a_{t+1}} ← env.step(a_t)
      r_t ← −d(o^e_{t+1}, o^a_{t+1}|θ_d)
      τ_{i,t} ← {s_t, o^e_t, o^a_t, a_t, r_t}
      {s_t, o^e_t, o^a_t} ← {s_{t+1}, o^e_{t+1}, o^a_{t+1}}
    end for
  end for
  D ← D ∪ {τ_0, . . . , τ_N}
  Update d(·) parameters θ_d using D
  Update policy θ_π using {τ_0, . . . , τ_N}
end while
Unsupervised Data Labelling To construct positive and negative pairs for training, we make use of time information in a similar fashion to (Sermanet et al., 2017), where observations at similar times in the same sequence are often correlated and observations at different times will likely have little similarity. We compute pairs by altering one sequence and comparing this modified version to its original. Positive pairs are created by adding noise to the sequence or altering a few frames of the sequences. Negative pairs are created by shuffling one sequence or reversing it. More details are available in the supplementary material. Imitation data for 24 other tasks are also used to help condition the distance metric learning process. These include motion clips for running, backflips, frontflips, dancing, punching, kicking and jumping, along with the desired motion. For details on how positive and negative pairs are created from this data, see the supplementary document.
Importantly, the RL environment generates two different state representations for the agent. The first state s_{t+1} is the internal robot pose. The second state o_{t+1} is the agent's rendered view, shown in Figure 2. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We attempted to use the visual features as the state input for the policy as well; this resulted in poor policy quality. Details of the algorithm used to train the distance metric and policy are outlined in the supplementary document and Algorithm 1." }, { "heading": "5 ANALYSIS AND RESULTS", "text": "The simulation environment used in the experiments is similar to the DeepMind Control Suite (Tassa et al., 2018). In this simulated robotics environment, the agent is learning to imitate a given reference motion. The agent's goal is to learn a policy to actuate Proportional Derivative (PD) controllers at 30 fps to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot's pose that is used to evaluate the policy quality. The demonstration M the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to animate a second robot in the simulation. Frames from the simulation are captured and used as video input to train the distance metric. The images captured from the simulation are converted to greyscale at 64 × 64 pixels.
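A minimal sketch of the unsupervised pair construction described above is given below; the noise scale follows the variance reported in the appendix, while the even split between reversal and shuffling for negatives is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def make_positive_pair(seq, noise_std=0.02):
    # A positive pair: the sequence and a lightly perturbed copy of it.
    return seq, seq + rng.normal(0.0, noise_std, seq.shape)

def make_negative_pair(seq):
    # A negative pair: the sequence and a temporally scrambled copy
    # (reversal or shuffling destroys the demonstration ordering).
    if rng.random() < 0.5:
        return seq, seq[::-1].copy()
    return seq, seq[rng.permutation(len(seq))]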
We train the policy on pose data, i.e., link distances and velocities relative to the robot's Centre of Mass (COM). This simulation environment is new and has been created to take motion capture data and produce multi-view video data that can be used for training RL agents or generating data for computer vision tasks. The environment includes challenging and dynamic tasks for humanoid robots. Some example tasks are imitating running, jumping, and walking, shown in Figure 3, and humanoid2d, detailed in the supplementary material.
3D Humanoid Robot Imitation In these simulated robotics environments the agent is learning to imitate a given reference motion of a walk, run, jump or zombie motion. A single motion demonstration is provided by the simulation environment as a cyclic motion. During learning, we include additional data from all other tasks (for the walking task this would be: walking-dynamic-speed, running, jogging, frontflips, backflips, dancing, jumping, punching and kicking) that are only used to train the distance metric. We also include data from a modified version of the tasks, walking-dynamic-speed, that has a randomly generated speed modifier ω ∈ [0.5, 2.0] which warps the demonstration timing. This additional data is used to provide the distance metric with a richer understanding of distances in space and time. The method is capable of learning policies that produce similar behaviour to the expert across a diverse set of tasks. We show example trajectories from the learned policies in Figure 3 and in the supplemental video. It takes 5–7 days to train each policy in these results on a 16-core machine with an Nvidia GTX1080 GPU.
Algorithm Analysis and Comparison To evaluate the learning capabilities and improvements of VIRL, we compare against two other methods that learn a distance function in state space: GAIL, and using a VAE to train an encoding and compute distances between those encodings, similar to (Nair et al., 2018), using the same method as the Siamese network in Figure 4a. We find that the VAE alone does not appear to capture the critical distances between states, possibly due to the decoding transformation complexity. Similarly, the GAIL baseline produces very jerky motion or stands still, both of which are contained in the imitation distribution. Our method, which considers the temporal structure of the data, learns faster and produces higher-value policies.
Additionally, we create a multi-modal version of VIRL. Here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with the default manual reward function provided by the environment, are shown in Figure 4b. The multi-modal version appears to perform about equal to the vision-only model. In Figure 4b we also compare our method to a non-sequence-based model that is equivalent to a Time Contrastive Network (TCN). On average, VIRL achieves higher-value policies. We find that using the RNN-based distance metric makes the learning process more gradual. We show this learning effect in Figure 4b, where the original manually created reward with flat feedback leads to slow initial learning.
In Figure 4c we compare the importance of the spatial ||e^a_t − e^b_t||_2 and temporal ||h^a_t − h^b_t||_2 representations learned by VIRL. Using the recurrent representation (temporal lstm) alone allows learning to progress quickly but can have difficulty informing the policy of how to best match the desired example.
On the other hand, using only the encoding between single frames (spatial conv) slows learning due to limited reward for out-of-phase behaviour. We achieved the best results by combining the representations from these two models. The assistance of spatial rewards is also seen in Figure 4b, where the manual reward learns the slowest.
Ablation We conduct ablation studies in Figure 5a to compare the effects of data augmentation methods, network models and the use of additional data from other tasks. For the more complex humanoid3d control problems, the data augmentation methods, including Early Episode Sequence Priority (EESP), increase average policy quality marginally. The use of multitask data (Figure 8c) and the additional representational losses (Figure 8a) greatly improve the method's ability to learn. More ablation results are available in the supplementary material.
Sequence Encoding Using the learned sequence encoder, a collection of motions from different classes are processed to create a t-SNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 5c we plot motions both generated from the learned policy π and the expert trajectories πE. Overlaps in specific areas of the space for similar classes across learned π and expert πE data indicate a well-formed distance metric that does not separate expert and agent examples. There is also a separation between motion classes in the data, and the cyclic nature of the walking cycle is visible.
In this section, we have described the process followed to create and analyze VIRL. Due to a combination of data augmentation techniques, VIRL can imitate given only a single demonstration. We have shown some of the first results that learn imitative policies from video data using a recurrent network. Interestingly, the method displays new learning efficiencies that are important to its success by separating the imitation problem into spatial and temporal aspects. For best results, we found that the inclusion of additional regularizing losses on the recurrent Siamese network, along with some multi-task supervision, was key to producing results." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "In this work, we have created a new method for learning imitative policies from a single demonstration. The method uses a Siamese recurrent network to learn a distance function in both space and time. This distance function, trained on noisy, partially observed video data, is used as a reward function for training an RL policy. Using data from other motion styles and regularization terms, VIRL produces policies that demonstrate similar behaviour to the demonstration.
Learning a distance metric is enigmatic: the distance metric can compute inaccurate distances in areas of the state space it has not yet seen. This inaccuracy could imply that when the agent explores and finds truly new and promising trajectories, the distance metric computes incorrect distances. We attempt to mitigate this effect by including training data from different tasks. We believe VIRL will benefit from a more extensive collection of multi-task data and increased variation of each task. Additionally, if the distance metric confidence were available, this information could be used to reduce variance and overconfidence during policy optimization.
It is probable that learning a reward function while training adds additional variance to the policy gradient.
This variance may indicate that the bias of off-policy methods could be preferred over the added variance of the on-policy methods used here. We also find it important to have a small learning rate for the distance metric. The low learning rate reduces the reward variance between data collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL that can learn a better value function model given a changing RNN-based reward function. Training the distance metric could benefit from additional regularization, such as constraining the KL-divergence between updates to reduce variance. Learning a sequence-based policy as well, given that the rewards are no longer dependent on a single state observation, is another area for future research.
We compare our method to GAIL, but we found that GAIL has limited temporal consistency, which led to learning jerky and overactive policies. The use of a recurrent discriminator for GAIL may mitigate some of these issues and is left for future work. It is challenging to produce results better than the carefully manually crafted reward functions used by the RL simulation environments that include motion phase information in the observations (Peng et al., 2018a; 2017). However, we have shown that our method, which can compute distances in space and time, has faster initial learning. Potentially, a combination of starting with our method and following with a manually crafted reward function could lead to faster learning of high-quality policies. Still, as environments become increasingly realistic and grow in complexity, we will need more robust methods to describe the desired behaviour we want from the agent.
Training the distance metric is a complicated balancing game. One might expect that the distance metric should be trained early and fast so that it quickly understands the difference between a good and a bad demonstration. However, learning too quickly confuses the agent: rewards change rapidly, which can cause the agent to diverge toward an unrecoverable region of policy space. Slower is better; even though the distance metric may not be globally accurate, it may be locally or relatively reasonable, which is enough to learn a good policy. As learning continues, these two optimizations can converge together." }, { "heading": "7 APPENDIX", "text": "This section includes additional details related to VIRL." }, { "heading": "7.1 IMITATION LEARNING", "text": "Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. BC is a fundamental method for imitation learning. Given an expert policy πE, possibly represented as a collection of trajectories τ = ⟨(s_0, a_0), . . . , (s_T, a_T)⟩, a new policy π can be learned to match this trajectory using supervised learning:
max_θ E_{πE} [∑_{t=0}^{T} log π(a_t|s_t, θ_π)] (5)
While this simple method can work well, it often suffers from distribution mismatch issues, leading to compounding errors as the learned policy deviates from the expert's behaviour." }, { "heading": "7.2 INVERSE REINFORCEMENT LEARNING", "text": "Similar to BC, Inverse Reinforcement Learning (IRL) also learns to replicate some desired behaviour. However, IRL makes use of the RL environment without a defined reward function. Here we describe maximal entropy IRL (Ziebart et al., 2008). Given an expert trajectory
τ = ⟨(s_0, a_0), . . . , (s_T, a_T)⟩, a policy π can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy π:
max_{c∈C} min_π (E_π[c(s, a)] − H(π)) − E_{πE}[c(s, a)] (6)
where c is some learned cost function and H(π) is a causal entropy term. πE is the expert policy that is represented by a collection of trajectories. IRL searches for a cost function c that is low for the expert πE and high for other policies. Then, a policy can be optimized by maximizing the reward function r_t = −c(s_t, a_t)." }, { "heading": "7.3 AUTO-ENCODER FRAMEWORK", "text": "Variational Auto-encoders Previous work shows that VAEs can learn a lower-dimensional structured representation of a distribution (Kingma & Welling, 2014). A VAE consists of two parts: an encoder q_φ and a decoder p_ψ. The encoder maps states to a latent encoding z and, in turn, the decoder transforms z back to states. The model parameters φ and ψ are trained jointly to maximize
L_VAE(φ, ψ, s) = −β D_KL(q_φ(z|s) || p(z)) + E_{q_φ(z|s)}[log p_ψ(s|z)], (7)
where D_KL is the Kullback-Leibler divergence, p(z) is some prior and β is a hyper-parameter to balance the two terms. The encoder q_φ takes the form of a diagonal Gaussian distribution q_φ = N(μ_φ(s), σ^2(s)). In the case of images, the decoder p_ψ parameterizes a Bernoulli distribution over pixel values. This simple parameterization is akin to training the decoder with a cross-entropy loss over normalized pixel values.
Sequence Auto-encoding The goal of sequence-to-sequence translation is to learn the conditional probability p(y_0, . . . , y_{T′} | x_0, . . . , x_T), where x = (x_0, . . . , x_T) and y = (y_0, . . . , y_{T′}) are sequences. Here we want to explicitly learn a latent variable z_RNN that compresses the information in x_0, . . . , x_T. An RNN can model this conditional probability by computing an encoding v of the sequence x that can, in turn, be used to condition the decoding of the sequence y (Rumelhart et al., 1985):
p(y) = ∏_{t=0}^{T′} p(y_t | {y_0, . . . , y_{t−1}}, v) (8)
This method has been used for learning compressed representations for transfer learning (Zhu et al., 2016) and 3D shape retrieval (Zhuang et al., 2015)." }, { "heading": "7.4 DATA", "text": "The mocap used in the created environment comes from the CMU mocap database and the SFU mocap database.
Data Augmentation and Training We apply several data augmentation methods to produce additional data for training the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing. The cropping is performed by both initializing the agent to random poses from the demonstration motion and terminating episodes when the agent's head, hands or torso contact the ground. As the agent improves, the average length of each episode increases, and so too will the average length of the cropped window. The motion warping is done by replaying the demonstration motion at different speeds. Two additional methods influence the data distribution. The first method is Reference State Initialization (RSI) (Peng et al., 2018a), where the initial state of the agent and expert is randomly selected from the expert demonstration. With this property, the environment can also be thought of as a form of memory replay. The environment allows the agent to go back to random points in the demonstration, as if replaying a remembered demonstration.
The second is EESP, where the probability that a sequence x is cropped starting at index i is p(i) = (len(x) − i) / ∑_{i′} i′, increasing the likelihood of starting earlier in the episode." }, { "heading": "7.5 TRAINING DETAILS", "text": "The learning simulations are trained using Graphics Processing Units (GPUs). The simulation is not only simulating the interaction physics of the world but also rendering the simulation scene to capture video observations. On average, it takes 3 days to execute a single training simulation. The process of rendering and copying the images from the GPU is one of the most expensive operations in VIRL. We collect 2048 data samples between training rounds. The batch size for Trust Region Policy Optimization (TRPO) is 2048. The KL term is 0.5.
The simulation environment includes several different tasks that are represented by a collection of motion capture clips to imitate. These tasks come from the tasks created in the DeepMimic works (Peng et al., 2018a). We include all humanoid tasks in this dataset.
In Algorithm 1 we include an outline of the algorithm used for the method. The simulation environment produces three types of observations: s_{t+1}, the agent's proprioceptive pose; s^v_{t+1}, the image observation of the agent; and m_{t+1}, the image-based observation of the expert demonstration. The images are 64 × 64." }, { "heading": "7.6 DISTANCE FUNCTION TRAINING", "text": "Our Siamese training loss consists of
L_SN(s_i, s_p, s_n) = y · ||f(s_i) − f(s_p)|| + (1 − y) · max(ρ − ||f(s_i) − f(s_n)||, 0), (9)
where y = 1 indicates a positive pair (s_i, s_p) whose distance should be minimal, and y = 0 indicates a negative pair (s_i, s_n) whose distance should be maximal. The margin ρ is used as an attractor or anchor to pull the negative example output away from s_i and push values towards a 0 to 1 range. f(·) computes the output from the underlying network. The distance between two states is calculated as d(s, s′) = ||f(s) − f(s′)|| and the reward as r(s, s′) = −d(s, s′). The data used to train the Siamese network is a combination of trajectories τ = ⟨s_0, . . . , s_T⟩ generated from simulating the agent in the environment and the expert demonstration. For our recurrent model the same loss is used; however, the states s_p, s_n, s_i are sequences. During RL training we compute a distance given the sequence of states observed so far in the episode. This method allows us to train a distance function in state space where all we need to provide are labels that denote whether two states, or sequences, are similar or not.
In Figure 6b we show the training curve for the recurrent Siamese network. The model learns smoothly, considering that the training data used is continually changing as the RL agent explores. In Figure 6a the learning curve for the Siamese RNN is shown after performing pretraining. We can see the overfitting portion that occurs during RL training. This overfitting can lead to poor reward prediction during the early phase of training.
It can be challenging to train a sequence-based distance function. One particular challenge is training the distance function to be accurate across the space of possible states. We found a good strategy was to focus on data from the beginning of episodes. When the model is not accurate on states it saw earlier in the episode, it may never learn how to reach the good states later in the episode that the distance function understands better. Therefore, when constructing batches to train the RNN on, we give a higher probability of starting earlier in episodes.
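A minimal sketch of this start-index prioritisation, assuming the linear weighting p(i) ∝ len(x) − i described for EESP above:

import numpy as np

def sample_crop_start(seq_len, rng):
    # Sample a crop start index with probability decreasing linearly
    # in i, so earlier parts of the episode are trained on more often.
    weights = seq_len - np.arange(seq_len)        # len(x) - i
    return rng.choice(seq_len, p=weights / weights.sum())

rng = np.random.default_rng(0)
starts = [sample_crop_start(100, rng) for _ in range(5)]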
We also give a higher probability to shorter sequences. As the agent gets better, the average episode length increases, and so too will the randomly selected sequence windows." }, { "heading": "7.7 DISTANCE FUNCTION USE", "text": "We find it helpful to normalize the distance metric outputs using r = exp(r^2 · w_d), where w_d = −5.0 scales the filtering width. Early in training, the distance metric often produces large, noisy values. Also, the RL method regularly updates reward scaling statistics; the initial high-variance data reduces the significance of better distance metric values produced later on by scaling them to small numbers. The improvement from using this normalized reward is shown in Figure 7a." }, { "heading": "8 POSITIVE AND NEGATIVE EXAMPLES", "text": "We use two methods to generate positive and negative examples. The first method is similar to TCN, where we can assume that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. Here we list the methods used to alter sequences for positive pairs:
1. Adding Gaussian noise to each state in the sequence (mean = 0 and variance = 0.02)
2. Out-of-sync versions where the first state is removed from the first sequence and the last state from the second sequence
3. Duplicating the first state in either sequence
4. Duplicating the last state in either sequence
We alter sequences for negative pairs by:
1. Reversing the ordering of the second sequence in the pair.
2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence.
3. Randomly shuffling one sequence.
4. Randomly shuffling both sequences.
The second method we use to create positive and negative examples is by including data for additional classes of motion. These classes denote different task types. For the humanoid3d environment, we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same task are labelled as positive, and pairs from different classes are negative." }, { "heading": "8.1 ADDITIONAL ABLATION ANALYSIS", "text": "" }, { "heading": "8.2 RL ALGORITHM ANALYSIS", "text": "It is not clear which RL algorithm may work best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 9a). Surprisingly, TRPO (Schulman et al., 2015) did not work well in this framework; considering it has a controlled policy gradient step, we expected it to reduce the overall variance. We found that Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) worked rather well. This result could be related to having a changing reward function, in that if the changing rewards are considered off-policy data, it can be easier to learn. This can be seen in Figure 9b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task, which could also be related to added variance." }, { "heading": "8.3 ADDITIONAL IMITATION RESULTS", "text": "Our first experiments evaluate the method's ability to learn a complex cyclic motion for a simulated humanoid robot given a single motion demonstration, similar to (Peng & van de Panne, 2017), but using video instead.
The agent is able to learn a robust walking gait even though it is only given noisy partial observations of a demonstration (Figure 10)." } ]
2019
null
SP:cb3cb0e206f4c3560538906a34265fcc95ca950f
[ "The paper assesses two different approaches to speed up the evaluations of neural network architectures for neural architecture search (NAS). The first one is weight sharing, which trains a supernetwork that contains all possible architecture of the search space. The performance of single architectures can be then approximated by simply using the shared parameters of the supernetwork. The second approach is to use different kind of predictors that are trained on offline evaluated architectures. Several methods following these two approaches from the literature are evaluated on the NASBench201 benchmark based on different rank-based evaluation scores." ]
Neural architecture search (NAS) has recently received extensive attention due to its effectiveness in automatically designing effective neural architectures. A major challenge in NAS is to conduct a fast and accurate evaluation (i.e., performance estimation) of neural architectures. Commonly used fast architecture evaluators include parameter-sharing ones and predictor-based ones. Despite their high evaluation efficiency, the evaluation correlation (especially for the well-performing architectures) is still questionable. In this paper, we conduct an extensive assessment of both the parameter-sharing and predictor-based evaluators on the NAS-Bench-201 search space, and break down how and why different configurations and strategies influence the fitness of the evaluators. Specifically, we develop a set of NAS-oriented criteria to understand the behavior of fast architecture evaluators in different training stages. Based on the findings of our experiments, we provide knowledge and suggestions to guide NAS applications and motivate further research.
[]
[ { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James Millar Ritchie", "Nicholas J Weston" ], "title": "Smash: One-shot model architecture search through hypernetworks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In Thirty-Second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Han Cai", "Chuang Gan", "Tianzhe Wang", "Zhekai Zhang", "Song Han" ], "title": "Once for all: Train one network and specialize it for efficient deployment", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 1294–1303,", "year": 2019 }, { "authors": [ "Yukang Chen", "Tong Yang", "Xiangyu Zhang", "Gaofeng Meng", "Xinyu Xiao", "Jian Sun" ], "title": "Detnas: Backbone search for object detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Ruijun Xu", "Jixiang Li" ], "title": "Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search", "venue": null, "year": 1907 }, { "authors": [ "Boyang Deng", "Junjie Yan", "Dahua Lin" ], "title": "Peephole: Predicting network performance before training", "venue": "arXiv preprint arXiv:1712.03351,", "year": 2017 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Efficient multi-objective neural architecture search via lamarckian evolution", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Golnaz Ghiasi", "Tsung-Yi Lin", "Quoc V Le" ], "title": "Nas-fpn: Learning scalable feature pyramid architecture for object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric P Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Simon Bartels", "Philipp Hennig", "Frank Hutter" ], "title": "Fast bayesian hyperparameter optimization on large datasets", "venue": "Electronic Journal of Statistics,", "year": 2017 }, { "authors": [ "Yanxi Li", "Minjing Dong", "Yunhe Wang", "Chang Xu" ], "title": "Neural architecture search in a proxy validation loss landscape", "venue": "In 
International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Renqian Luo", "Tao Qin", "Enhong Chen" ], "title": "Understanding and improving one-shot neural architecture optimization", "venue": "arXiv preprint arXiv:1909.10815,", "year": 2019 }, { "authors": [ "Niv Nayman", "Asaf Noy", "Tal Ridnik", "Itamar Friedman", "Rong Jin", "Lihi Zelnik" ], "title": "Xnas: Neural architecture search with expert advice", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xuefei Ning", "Yin Zheng", "Tianchen Zhao", "Yu Wang", "Huazhong Yang" ], "title": "A generic graph-based neural architecture encoding scheme for predictor-based nas", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Michael S Ryoo", "AJ Piergiovanni", "Mingxing Tan", "Anelia Angelova" ], "title": "Assemblenet: Searching for multi-stream neural connectivity in video architectures", "venue": null, "year": 2019 }, { "authors": [ "Christian Sciuto", "Kaicheng Yu", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": null, "year": 2019 }, { "authors": [ "Han Shi", "Renjie Pi", "Hang Xu", "Zhenguo Li", "James T Kwok", "Tong Zhang" ], "title": "Multi-objective neural architecture search via predictive network performance optimization", "venue": null, "year": 2019 }, { "authors": [ "Yanan Sun", "Handing Wang", "Bing Xue", "Yaochu Jin", "Gary G Yen", "Mengjie Zhang" ], "title": "Surrogate-assisted evolutionary deep learning using an end-to-end random forest-based performance predictor", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Linnan Wang", "Yiyang Zhao", "Yuu Jinnai", "Rodrigo Fonseca" ], "title": "Alphax: exploring neural architectures with deep neural networks and monte carlo tree search", "venue": "arXiv preprint arXiv:1805.07440,", "year": 2018 }, { "authors": [ "Tianzhe Wang", "K. Wang", "H. Cai", "J. 
Lin", "Zhijian Liu", "Song Han" ], "title": "Apq: Joint search for network architecture, pruning and quantization policy", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "arXiv preprint arXiv:1812.09926,", "year": 2018 }, { "authors": [ "Yixing Xu", "Yunhe Wang", "Kai Han", "Shangling Jui", "Chunjing Xu", "Qi Tian", "Chang Xu" ], "title": "Renas:relativistic evaluation of neural architecture search, 2019", "venue": null, "year": 2019 }, { "authors": [ "Zhaohui Yang", "Yunhe Wang", "Xinghao Chen", "Boxin Shi", "Chao Xu", "Chunjing Xu", "Qi Tian", "Chang Xu" ], "title": "Cars: Continuous evolution for efficient neural architecture", "venue": null, "year": 1909 }, { "authors": [ "Arber Zela", "Julien Siems", "Frank Hutter" ], "title": "Nas-bench-1shot1: Benchmarking and dissecting oneshot neural architecture search", "venue": "arXiv preprint arXiv:2001.10422,", "year": 2020 }, { "authors": [ "Chris Zhang", "Mengye Ren", "Raquel Urtasun" ], "title": "Graph hypernetworks for neural architecture search", "venue": "arXiv preprint arXiv:1810.05749,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Luo" ], "title": "2018) made an attempt to use a parameter-sharing evaluator as the oracle evaluator in Fig. A1. That is to say, they use the noisy signals provided by the parameter sharing evaluator to train", "venue": null, "year": 2018 }, { "authors": [ "Wang" ], "title": "2020), since one predictor forward pass is faster than testing on the whole", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "Neural architecture search (NAS) has recently received extensive attention due to its effectiveness in automatically designing effective neural architectures. A major challenge in NAS is to conduct a fast and accurate evaluation (i.e., performance estimation) of neural architectures. Commonly used fast architecture evaluators include parameter-sharing ones and predictor-based ones. Despite their high evaluation efficiency, the evaluation correlation (especially of the well-performing architectures) is still questionable. In this paper, we conduct an extensive assessment of both the parameter-sharing and predictor-based evaluators on the NASBench-201 search space, and break up how and why different configurations and strategies influence the fitness of the evaluators. Specifically, we develop a set of NAS-oriented criteria to understand the behavior of fast architecture evaluators in different training stages. And based on the findings of our experiments, we give pieces of knowledge and suggestions to guide NAS application and motivate further research." }, { "heading": "1 INTRODUCTION", "text": "Studies have shown that the automatically discovered architectures by NAS can outperform the hand-crafted architectures for various applications, such as classification (Nayman et al., 2019; Zoph & Le, 2017), detection (Ghiasi et al., 2019; Chen et al., 2019b), video understanding (Ryoo et al., 2019), text modeling (Zoph & Le, 2017), etc. Early NAS algorithms (Zoph & Le, 2017) suffer from the extremely heavy computational burden, since the evaluation of neural architectures is slow. Thus, how to estimate the performance of a neural architecture in a fast and accurate way is vital for addressing the computational challenge of NAS.\nA neural architecture evaluator outputs the evaluated score of an architecture that indicates its quality. The straightforward solution is to train an architecture from scratch to convergence and then test it on the validation dataset, which is extremely time-consuming. Instead of exactly evaluating architectures on the target task, researchers usually construct a proxy model with fewer layers or fewer channels (Pham et al., 2018; Real et al., 2019; Wu et al., 2019), and train this model to solve a proxy task of smaller scales (Cai et al., 2018a; Elsken et al., 2018; Klein et al., 2017; Wu et al., 2019), e.g., smaller dataset or subsets of dataset, training or finetuning for fewer epochs.\nTraditional evaluators conduct separate training phases to acquire the weights that are suitable for each architecture. In contrast, one-shot evaluation amortized the training cost of different architectures through parameter sharing or a global hypernetwork, thus significantly reduce the evaluation cost. Pham et al. (2018) constructs an over-parametrized super network (supernet) such that all architectures in the search space are sub-architectures of the supernet. Throughout the search process, the shared parameters in the supernet are updated on the training dataset split, and each architecture is evaluated by directly using the corresponding subset of the weights. Afterwards, the parameter sharing technique is widely used for architecture search in different search spaces (Wu et al., 2019; Cai et al., 2020), or incorporated with different search strategies (Liu et al., 2018b; Nayman et al., 2019; Xie et al., 2018; Yang et al., 2019; Cai et al., 2020). 
Hypernetwork (Brock et al., 2018; Zhang et al., 2018) based evaluation is another type of one-shot evaluation strategy, in which a hypernetwork is trained to generate proper weights for each architecture. Since hypernetwork solutions are not yet generic, this paper concentrates on the evaluation of parameter-sharing evaluators.
Figure 1: An overview of fast neural architecture evaluators (i.e., performance estimators).
Whether or not one-shot strategies can provide highly correlated architecture evaluation results is essential for the efficacy of the NAS process. Many recent studies have focused on assessing the evaluation correlation of one-shot architecture evaluators (Bender et al., 2018; Sciuto et al., 2019; Zela et al., 2020).
Besides one-shot evaluation strategies, predictor-based evaluation strategies (Luo et al., 2018; Liu et al., 2018a; Deng et al., 2017; Sun et al., 2019; Wang et al., 2018; Xu et al., 2019; Ning et al., 2020) use a performance predictor that takes the architecture description as input and outputs a predicted performance score. The performance predictor should be trained using "ground-truth" architecture performances. This paper utilizes the same set of criteria to evaluate and compare different performance predictors.
The fast neural architecture evaluators (i.e., performance estimators) are summarized in Fig. 1, including parameter-sharing, hypernetwork, and predictor-based ones. This paper aims at systematically revealing the status of current architecture evaluation strategies. Specifically, we develop a set of NAS-oriented criteria to understand the behavior of fast architecture evaluators in different training stages. Based on the findings of our experiments, we provide knowledge and suggestions to guide NAS applications and motivate further research.
The knowledge revealed by this paper includes: 1) Layer proxy brings a larger evaluation gap than channel proxy; thus channel proxy can be utilized to reduce the computational cost, while proxy-less search w.r.t. the layer number is worth studying. 2) The convergence rate of different criteria varies during one-shot supernet training, which shows that the good architectures are distinguished from bad architectures in the early stage. 3) As training goes on, the one-shot performances of isomorphic sub-architectures become closer. 4) De-isomorphic sampling or post de-isomorphism handling can help avoid the over-estimation of simple architectures. 5) The parameter-sharing evaluator tends to over-estimate smaller architectures, and is better at comparing smaller models than larger models. 6) One should use ranking losses rather than regression losses to train predictors, since they are more stable. 7) Different predictors under- or over-estimate different architectures, and currently, the best predictor might still have trouble in comparing large architectures. 
8) As expected, architecture predictors can distinguish good architectures better after multiple stages of training, as the training data become more and more concentrated on the good architectures." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ONE-SHOT EVALUATORS", "text": "One-shot evaluation mainly consists of two types of strategies: 1) parameter sharing (Pham et al., 2018; Wu et al., 2019; Liu et al., 2018b; Nayman et al., 2019; Xie et al., 2018; Yang et al., 2019; Cai et al., 2020), and 2) hypernetworks (Brock et al., 2018; Zhang et al., 2018). These two strategies both amortize the training cost of different architectures via the sharing of the network or hypernetwork parameters.
The ranking correlation gaps of existing shared-weights evaluators arise from two factors: 1) the proxy model and task: due to the memory constraint, a proxy supernet (Liu et al., 2018b; Pham et al., 2018) with fewer channels or layers is usually used; 2) parameter sharing. To alleviate the first factor, there are some studies (Cai et al., 2018b; Chen et al., 2019a) that aim at making one-shot evaluation more memory efficient, so that one-shot search can be conducted without using a proxy supernet. As for the second factor, there are a few studies that carried out correlation evaluation for one-shot evaluators. Zhang et al. (2018) conducted a correlation comparison between the GHN hypernetwork evaluator, the shared-weights evaluator, and several small proxy tasks. However, the correlation is evaluated using 100 architectures randomly sampled from a large search space, which is not a convincing and consistent benchmark metric. Luo et al. (2019) did a preliminary investigation into why parameter-sharing evaluation fails to provide correlated evaluations, and proposed to increase the sample probabilities of the large models. Their evaluation is also conducted on dozens of architectures sampled from the search space. Zela et al. (2020) compare the evaluation correlation of different search strategies on NAS-Bench-101. Sciuto et al. (2019) conduct parameter-sharing NAS in a toy RNN search space with only 32 architectures in total, and discover that the parameter-sharing rankings do not correlate with the true rankings of architectures. To improve the evaluation correlation, Chu et al. (2019) proposed a sampling strategy in a layer-wise search space.
In this paper, we analyze the ranking correlation gaps brought by the model proxy (differences in the number of channels and layers) and the parameter sharing technique, as well as the behavior of one-shot evaluators during the training process." }, { "heading": "2.2 PREDICTOR-BASED EVALUATORS", "text": "An architecture performance predictor takes the architecture descriptions as inputs, and outputs the predicted performance scores without training the architectures. Two factors are crucial to the fitness of the predictors: 1) the embedding space; 2) the training technique. On the one hand, to embed neural architectures into a continuous space and get a meaningful embedding space, there are studies that propose different architecture encoders, e.g., sequence-based (Luo et al., 2018; Liu et al., 2018a; Deng et al., 2017; Sun et al., 2019; Wang et al., 2018) and graph-based (Shi et al., 2019; Ning et al., 2020). As for nonparametric predictors, Kandasamy et al. (2018) design a kernel function in the architecture space and exploit a Gaussian process to get the posterior of the architecture performances. Shi et al. 
(2019) combined a graph-based encoder and a nonparametric Gaussian process to construct the performance predictor. On the other hand, from the aspect of training techniques, Luo et al. (2018) employed an encoder-decoder structure and used an auxiliary reconstruction loss term. Xu et al. (2019); Ning et al. (2020) employed learning-to-rank techniques to train the predictors.
Actually, in the overall NAS framework, the predictor-based evaluator plays a different role from the traditional or one-shot evaluators, since the predictor should be trained using "ground-truth" architecture performances. Usually, expensive traditional evaluators that can provide relatively accurate architecture performances are chosen as the "oracle" evaluators to output the "ground-truth" scores (Kandasamy et al., 2018; Liu et al., 2018a; Luo et al., 2018)." }, { "heading": "3 EVALUATION CRITERIA", "text": "In this section, we introduce the evaluation criteria used in this paper. We denote the search space size as M, and the true performances and approximated evaluated scores of architectures {a_i}_{i=1,···,M} as {y_i}_{i=1,···,M} and {s_i}_{i=1,···,M}, respectively. We denote the rankings of the true performance y_i and the evaluated score s_i as r_i ∈ {1, · · · , M} and n_i ∈ {1, · · · , M} (r_i = 1 indicates that a_i is the best architecture in the search space). The correlation criteria adopted in our paper are:
• Linear correlation: the Pearson correlation coefficient corr(y, s)/√(corr(y, y) corr(s, s)).
• Kendall's Tau ranking correlation: the relative difference of concordant pairs and discordant pairs, ∑_{i<j} sgn(y_i − y_j) sgn(s_i − s_j) / C(M, 2).
• Spearman's ranking correlation: the Pearson correlation coefficient between the rank variables, corr(r, n)/√(corr(r, r) corr(n, n)).
Besides these correlation criteria, criteria that emphasize the relative order of architectures with good performances are desired. Denoting A_K = {a_i | n_i < KM} as the set of architectures whose evaluated scores are among the top K proportion of the search space, we use two criteria:
• Precision@K (P@K) ∈ (0, 1] = #{i | r_i < KM ∧ n_i < KM} / (KM): the proportion of true top-K-proportion architectures among the top-K architectures according to the scores.
• BestRanking@K (BR@K) ∈ (0, 1] = min_{a_i∈A_K} r_i / M: the best normalized ranking among the top K proportion of architectures according to the scores.
The two criteria are similar to those used in Ning et al. (2020), except that rankings and architecture numbers are all normalized with respect to the search space size M.
The above criteria are used to compare the fitness of various architecture evaluators with different configurations and in different stages. Besides that, we would also like to interpret their evaluation results. To identify which architectures are under- or over-estimated by various evaluators, and analyze the reasons accordingly, we investigate the relationship between the true-predicted ranking differences {r_i − n_i}_{i=1,···,M} and architecture properties such as the FLOPs: {FLOPs(a_i)}_{i=1,···,M}." }, { "heading": "4 PARAMETER-SHARING EVALUATORS", "text": "In this section, we assess the behavior of one-shot evaluators and evaluate the influence of several sampling strategies and training techniques.
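To make the criteria of Section 3 concrete, the following is a minimal NumPy/SciPy sketch that computes them from true performances y and evaluator scores s; the function name and array-based interface are illustrative assumptions.

import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

def nas_criteria(y, s, k=0.05):
    # y: true performances, s: evaluated scores; k is a proportion.
    m = len(y)
    # rankings: r_i = 1 for the best architecture
    r = np.empty(m, dtype=int); r[np.argsort(-y)] = np.arange(1, m + 1)
    n = np.empty(m, dtype=int); n[np.argsort(-s)] = np.arange(1, m + 1)
    top_k = int(k * m)
    a_k = n <= top_k                                 # top-K set by score
    p_at_k = np.sum(a_k & (r <= top_k)) / top_k      # Precision@K
    br_at_k = r[a_k].min() / m                       # BestRanking@K (normalized)
    return {"pearson": pearsonr(y, s)[0],
            "kendall": kendalltau(y, s)[0],
            "spearman": spearmanr(y, s)[0],
            "P@K": p_at_k, "BR@K": br_at_k}

# Toy usage: a noisy evaluator on 1000 random "architectures".
rng = np.random.default_rng(0)
y = rng.random(1000)
print(nas_criteria(y, y + 0.1 * rng.standard_normal(1000)))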
}, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "During the supernet training process, candidate architectures are randomly sampled, and their corresponding weights are updated in every iteration.\nWe conduct our experiments on CIFAR-10 using a recent NAS benchmarking search space, NASBench-201 (Dong & Yang, 2020). NAS-Bench-201 is a NAS benchmark that provides the performances of all the 15625 architectures in a cell-based search space. Actually, there are architectures with different matrix representations that are isomorphic (i.e., represent the same architecture) in this search space. As reported by their original paper, there are 6466 unique topology structures after de-isomorphism.\nThe hyper-parameters used to train all the parameter-sharing supernets are summarized in Tab. 1. We train parameter sharing evaluators via momentum SGD with momentum 0.9 and weight decay 0.0005. The batch size is set to 512. The learning rate is set to 0.05 initially and decayed by 0.5 each time the supernet accuracy stops to increase for 30 epochs. During training, the dropout rate before the fully-connected classifier is set to 0.1, and the gradient norm is clipped to be less than 5.0." }, { "heading": "4.2 TREND OF DIFFERENT CRITERIA", "text": "We inspect how BR@K, P@K, and the correlation criteria converge during the training process. We train a parameter sharing model with 17 layers and 16 initial channels on the de-isomorphic NASBench-201 search space. Shown in Fig. 2(a), the speed of convergence is highly different. BR@K converges very fast. P@K converges in around 250 epochs and then even gradually decreases. Meanwhile, linear correlation, Kendall’s Tau, and Spearman correlation are still growing till 500 epochs, while the parameter sharing accuracy grows during the whole 1000 epochs. This indicates\nthat the models with different rankings change at different speeds as the training progresses, and the top-ranked models stand out faster. Another evidence is shown in Fig. 2(b) that P@5% converges much faster than P@50%. This means that as the training goes on, the supernet is learning to compare architectures with medium or poor performances instead of good ones. Another fact to note in Fig. 2(b) (also see Tab. 3) is that P@5% shows a slow decreasing trend from 200 epochs on. This might be due to that, while the some best architectures stand out very fast in one-shot training, their one-shot performances will be caught up with by others as the training goes on." }, { "heading": "4.3 SAMPLING STRATEGY", "text": "The NAS-Bench-201 search space includes many isomorphic architectures. We expect that one-shot evaluators could handle isomorphic architectures correctly, which means that their evaluated rewards should be close. We calculate the average variances of test accuracy and ranking in isomorphic groups during the training process, as shown in Tab. 2. As the training progresses, the variance\nwithin the group gradually shrinks, which indicates that more sufficient training makes one-shot evaluator handle isomorphic architectures better.\nWe compare the results of sampling with/without isomorphic architectures during training in Tab. 3. The results show that if de-isomorphism sampling is not used, the supernet performs much worse on good architectures in that BR@1%, BR@5% and P@5% are significantly worse (2.515% V.S. 0.015%, 2.221% V.S. 0.015%, 9.22% V.S. 28.48%). In this case, we find that the top-ranked cell architectures are simple architectures (e.g., a single convolution). 
That is to say, parameter-sharing training without de-isomorphism sampling might over-estimate simple architectures. This might be because the equivalent sampling probability is larger for these simple architectures with many isomorphic counterparts. We also experiment with the post de-isomorphism technique, in which the performances of architectures in each isomorphic group are averaged during testing, while no changes are incorporated in the training process. We find that post de-isomorphism can achieve improvements over "No De-isomorphism". This might owe to the decreased variance of the estimates.
Tab. 4 shows the comparison of using different architecture sample numbers in supernet training. We can see that using multiple architecture MC samples is not beneficial, and MC sample = 1 is a good choice. We also adapt the Fair-NAS (Chu et al., 2019) sampling strategy to the NAS-Bench-201 search space (a special case of MC sample 5), and find it does not bring improvements." }, { "heading": "4.4 PROXY MODEL", "text": "Due to memory and time constraints, it is common to use a shallower or thinner proxy model in the search process. The common practice is to search using small proxy models with fewer channels and layers, and then "model augment" the discovered architecture to large neural networks. From the experimental results shown in Fig. 3(a)(b), we can see that the channel proxy has little influence while the layer proxy reduces the reliability of search results. Thus, for cell-based search spaces, proxy-less search w.r.t. the layer number is worth studying." }, { "heading": "4.5 OVER- AND UNDER-ESTIMATION OF ARCHITECTURES", "text": "For one-shot evaluators, we expect that the training process is fair and balanced for all architectures. However, sub-architectures have different amounts of computation, and they might converge at different speeds. To understand which architectures are under- or over-estimated by the one-shot evaluators, we inspect the Ranking Diff between the ground-truth ranking and the one-shot evaluation ranking of an architecture a_i: r_i − n_i. We divide the architectures into ten groups according to the amount of computation (FLOPs), and show the Kendall's Tau and Average Rank Diff of each group in Fig. 3(c).
Note that a positive Ranking Diff indicates that an architecture is over-estimated; otherwise it is under-estimated. The x-axis is organized such that the architecture group with the fewest FLOPs is leftmost. The Average Rank Diff shows a decreasing trend, which means that the larger the model, the easier it is to be under-estimated. Also, the decreasing intra-group Kendall's Tau indicates that it is harder for the one-shot evaluator to compare larger models (which usually have better performances) than smaller models." }, { "heading": "5 PREDICTOR-BASED EVALUATORS", "text": "In this section, we employ the same criteria (i.e., Kendall's Tau, Precision@K, BestRanking@K) to assess the architecture predictors (i.e., predictor-based evaluators)." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We experiment with four different architecture predictors: an MLP, an LSTM, GATES (Ning et al., 2020), and a random forest regressor (RF). For the MLP, LSTM, and RF, we serialize each architecture matrix using the six elements of its lower triangular portion. We follow (Ning et al., 2020) to construct the MLP, LSTM, and GATES. 
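As an illustration of this serialization, a minimal NumPy sketch is given below; the 4-node cell size and the operation-index encoding are assumptions based on the NAS-Bench-201 description above.

import numpy as np

def serialize_arch(op_matrix):
    # Flatten a NAS-Bench-201-style cell, given as a (4, 4) matrix of
    # operation indices, into the 6-dim sequence of its strictly lower
    # triangular elements (one entry per edge).
    op_matrix = np.asarray(op_matrix)
    rows, cols = np.tril_indices(op_matrix.shape[0], k=-1)
    return op_matrix[rows, cols]              # length 6 for a 4-node cell

# Toy usage: a random cell with 5 candidate operations per edge.
rng = np.random.default_rng(0)
cell = np.tril(rng.integers(0, 5, (4, 4)), k=-1)
print(serialize_arch(cell))                   # 6-dim input for MLP/LSTM/RF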
The RF predictor applies a random forest regressor to the 6-dim sequence.\nFor optimizing MLP, LSTM, and GATES, an Adam optimizer with a learning rate of 1e-3 is used, the batch size is 512, and the training lasts for 200 epochs. Following (Ning et al., 2020), a hinge pairwise ranking loss with margin 0.1 is used for training these predictors. For RF, we use a random forest with 100 CARTs to predict architecture performances." }, { "heading": "5.2 EVALUATION RESULTS", "text": "We train these predictors on training sets of different sizes: 39 (0.25%), 78 (0.5%), 390 (2.5%), 781 (5%). Specifically, for each training set size, we randomly sample three different training sets, and train each predictor on each training set with three different random seeds (20, 2020, 202020). After training, we evaluate each model on the whole NAS-Bench-201 search space using Kendall's Tau, Precision@K, and BestRanking@K. Different from (Ning et al., 2020), the evaluation is carried out on all the architectures, instead of a separate validation split. As shown in Fig. 4, GATES outperforms the other predictors in all settings.\nAs can be seen, training with different seeds on different training sets leads to similar results. In contrast, we found that training predictors with a regression loss is unstable and sensitive to the choice of the training set. For example, the Kendall's Taus of 3 GATES models trained using the regression loss on different training sets of size 78 are 0.7135, 0.7240, and 0.2067, respectively, while with the ranking loss, the results are 0.7597, 0.7750, and 0.7645, respectively. These additional results are listed in the Appendix.\nAs illustrated in Fig. A1 in the Appendix, there are usually multiple stages in a predictor-based NAS search process. In each stage i, the current predictor is used to choose N architectures to be evaluated by the oracle evaluator, and the results are then used to update the predictor. In our experiment, we select the top-N architectures predicted by the current predictor in the whole search space. As shown in Fig. 5, P@0.1% increases substantially after multiple stages of training, which indicates that the predictors become increasingly better at distinguishing the good architectures. This is expected, since the training data become more concentrated on the good architectures. Also, the performance gap between different predictors (LSTM/GATES) shrinks as more training data become available. The Kendall's Tau, BR@K, and performance distribution results over multiple stages are listed in the Appendix." }, { "heading": "6 CONCLUSION", "text": "This paper assesses parameter-sharing evaluators and architecture predictors on the NAS-Bench-201 search space, with a set of carefully developed criteria. We hope the knowledge revealed by this paper can guide future NAS applications and motivate further research." }, { "heading": "A ADDITIONAL RESULTS ABOUT PREDICTOR-BASED EVALUATORS", "text": "A.1 PREDICTOR PERFORMANCE RESULTS: SINGLE- AND MULTIPLE-STAGE\nFigure A1: The overview of predictor-based neural architecture search (NAS).
The underlined descriptions in parentheses denote different methods.\nTable A1: The Kendall's Tau of different predictors on 3 different randomly sampled training datasets of size 78.
Predictor | Ranking loss (dataset 1 / 2 / 3) | Regression loss (dataset 1 / 2 / 3)
MLP | 0.1330±0.074 / 0.1560±0.0078 / 0.2481±0.0069 | 0.0111±0.0000 / 0.0548±0.0276 / 0.0467±0.0130
LSTM | 0.5631±0.0060 / 0.6028±0.0457 / 0.5487±0.0150 | 0.6024±0.0039 / 0.5784±0.0180 / 0.4656±0.0176
GATES (Ning et al., 2020) | 0.7597±0.0079 / 0.7750±0.0106 / 0.7645±0.0054 | 0.2067±0.0000 / 0.7240±0.0074 / 0.7135±0.0055
RF (Sun et al., 2019) | - / - / - | 0.4329±0.0077 / 0.4123±0.0104 / 0.4218±0.0119
Table A2: The performance distribution, BR@K, and Kendall's Tau of 5 training stages. In each stage, N = 78 architectures are chosen, evaluated, and used to train the predictor along with the previous architecture data. Note that in this table, K in BR@K is the absolute architecture number without normalization.
GATES | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Stage 4
Perf. Range | [0.560, 0.938] | [0.921, 0.944] | [0.935, 0.944] | [0.933, 0.944] | [0.933, 0.944]
Perf. Std | 6.43e-2 | 4.59e-3 | 2.16e-3 | 2.18e-3 | 2.30e-3
BR@11/BR@7/BR@1 | 1/2/306 | 1/1/3 | 1/1/2 | 1/1/3 | 1/1/3
Kendall's Tau | 0.769 | 0.759 | 0.752 | 0.742 | 0.725
LSTM | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Stage 4
Perf. Range | [0.560, 0.938] | [0.922, 0.944] | [0.922, 0.944] | [0.932, 0.944] | [0.934, 0.944]
Perf. Std | 6.43e-2 | 4.52e-3 | 2.63e-3 | 2.42e-3 | 1.98e-3
BR@11/BR@7/BR@1 | 99/268/393 | 2/2/9 | 1/1/6 | 1/2/5 | 1/1/3
Kendall's Tau | 0.562 | 0.556 | 0.571 | 0.739 | 0.724
A.2 OVER- AND UNDER-ESTIMATION OF ARCHITECTURES\nFig. A2(d)(e)(f) illustrates the relationship between the FLOPs of architectures and how likely they are to be over-estimated. It seems that MLP and RF are more likely to over-estimate the smaller architectures and under-estimate the larger ones, while LSTM and GATES show no obvious preference with respect to the architectures' FLOPs. Fig. A2(a)(b)(c) shows that GATES gives more accurate rankings on smaller architectures than on larger architectures, which indicates that GATES might still have trouble comparing larger architectures, which usually have good performances.\nFigure A2: (a)(b)(c) Kendall's Tau in different FLOPs groups; the training set size is 39, 78 and 390, respectively. (d)(e)(f) Average rank difference in different FLOPs groups; the training set size is 39, 78 and 390, respectively.\nA.3 ONE-SHOT ORACLE EVALUATOR\nLuo et al. (2018) made an attempt to use a parameter-sharing evaluator as the oracle evaluator in Fig. A1. That is to say, they use the noisy signals provided by the parameter-sharing evaluator to train the predictor. This can significantly accelerate the NAS process, compared with using an expensive traditional evaluator. However, it is found to cause the NAS algorithm to fail to discover good architectures. Also, predictors have been used to accelerate parameter-sharing NAS methods (Li et al., 2020; Wang et al., 2020), since one predictor forward pass is faster than testing on the whole validation queue, even if no separate training phase is needed.
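The same ranking criteria are used throughout these appendices; for reference, the following is a minimal Python sketch of how Kendall's Tau, P@K and BR@K can be computed from an evaluator's predicted scores and the ground-truth accuracies. This is our own illustrative code assuming the standard definitions of these criteria (all function and variable names are ours), not the authors' released implementation.

```python
import numpy as np
from scipy.stats import kendalltau

def evaluate_ranking(pred_scores, true_accs, ks=(0.01, 0.05)):
    """Compute Kendall's Tau, P@K and BR@K for one evaluator.

    pred_scores: scores assigned by the evaluator to all N architectures.
    true_accs:   ground-truth accuracies of the same N architectures.
    ks:          ratios, e.g. 0.05 means the top 5% of the search space.
    """
    pred_scores, true_accs = np.asarray(pred_scores), np.asarray(true_accs)
    n = len(true_accs)
    tau, _ = kendalltau(pred_scores, true_accs)
    results = {"kendall_tau": tau}
    # Rank 0 is the best architecture under each ordering.
    pred_order = np.argsort(-pred_scores)
    true_rank = np.empty(n, dtype=int)
    true_rank[np.argsort(-true_accs)] = np.arange(n)
    for k in ks:
        top_k = max(1, int(k * n))
        pred_top = pred_order[:top_k]
        # P@K: fraction of the predicted top-K that lies in the true top-K.
        results[f"P@{k:.0%}"] = float(np.mean(true_rank[pred_top] < top_k))
        # BR@K: best true ranking among the predicted top-K, normalized by N.
        results[f"BR@{k:.0%}"] = float(true_rank[pred_top].min() + 1) / n
    return results
```

Under these definitions, a perfect evaluator yields P@K = 1 and BR@K = 1/N, which matches the direction of the trends reported in Tab. 3 and the figures above.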
In this section, we explore whether a predictor can recover from the noisy training signals provided by the parameter-sharing evaluator. Since GATES achieves consistently better results than the other predictors, it is used in the following experiments. Specifically, we want to answer two questions:\n1. Can sampling only a subset of architectures during supernet training help achieve a better one-shot Kendall's Tau on these architectures?\n2. Can predictor training help recover from the noisy training signals provided by the one-shot evaluator?\nWe randomly sample 78 architectures from the search space. Two differently trained parameter-sharing evaluators are used to provide the one-shot instruction signal for these 78 architectures: 1) uniformly sampling from the whole search space, 2) uniformly sampling from the 78 architectures. We find that strategy 1 (sampling from the whole search space) gets a higher evaluation Kendall's Tau, no matter whether the evaluation is on the 78 architectures (0.657 vs. 0.628) or the whole search space (0.701 vs. 0.670). Thus the answer to Question 1 is “No”.\nThen, to answer the second question, we utilize the one-shot instruction signal provided by the supernet trained with 15625 architectures to train the predictor1. The Kendall's Tau between the architecture scores given by the resulting predictor and the ground-truth performances is 0.718 on all the 15625 architectures, which is slightly worse than that of the one-shot instruction signals (0.719). More importantly, BR@1% degrades from 2.5% to 12.1%, thus the answer to Question 2 is “No”.\nThus, we conclude that although training a predictor on one-shot signals can bring acceleration, since no extra supernet inference is needed during the search, it is not beneficial in terms of evaluation quality (especially for good architectures). Perhaps incorporating more manual prior knowledge and regularization can increase the denoising effect, which might be worth future research.\n1The average of the scores provided by 3 supernets trained with different seeds is used." }, { "heading": "B ADDITIONAL DISCUSSION AND RESULTS ABOUT ONE-SHOT EVALUATORS", "text": "B.1 DISCUSSION ON DE-ISOMORPHIC SAMPLING\nTo conduct de-isomorphic sampling, we first design a simple encoding method, which canonicalizes computationally-isomorphic architectures to the same string and non-isomorphic architectures to different strings. In our paper, we use the encoding method to find all isomorphic groups in the search space and store them in a table. Then, during the supernet training process, we sample architecture groups from this table uniformly. This method is feasible since the benchmark search space is not large. In practice, one can use lazy table-making and rejection sampling to conduct de-isomorphic sampling, by only accepting new or representative architecture samples. More specifically, one first encodes each sampled architecture into a canonical string. If this canonical string has not appeared before, the string stands for a new isomorphic group, and the architecture is recorded as the representative architecture for this isomorphic group; this architecture sample is also accepted. If this canonical string has been recorded before, we only accept the architecture sample if it is the representative architecture for its canonical string.\nThe simple encoding method is similar to the one in NAS-Bench-201. To be more specific, the encoding method goes as follows.
Denoting the expression of the i-th node as Si, and the operation on the directed edge (j, i) as Aji, the expression Si can be written as\nSi = Concat(Sort({ “(” + Sj + “)” + “%” + Aji }j∈P(i))),\nwhere P(i) denotes the set of predecessor nodes of i, and Sort sorts the strings in dictionary order. We calculate Si (i = 1, · · · , 4) in topological order, and the expression S4 at the final output node is used as the encoding string of the architecture.\nB.2 RESULTS ON DE-ISOMORPHIC SAMPLING\nFigure A3: How the evaluation quality evolves along the supernet training process with different sampling strategies: (a) Kendall's Tau; (b) P@5%, P@50%; (c) BR@1%, BR@5%. The data are also listed in Tab. 3 in the main text.\nFig. A3(a) shows that the ranking correlation keeps increasing throughout the training process. Fig. A3(b) indicates that P@50% increases with more sufficient training, while P@5% slightly shrinks, especially for the supernet trained without de-isomorphism sampling. As analyzed in the main text, this might arise from the fact that simple architectures have more isomorphic counterparts. Fig. A3(c) shows that without de-isomorphic sampling, BR@1% and BR@5% deteriorate in the late stages of the training process." } ]
2020
null
SP:c98c40dda3d811ff76816182962ccbed03693eb4
[ "This paper studies the coverage game where agents allocate their resources to target spaces to maximize their coverage, and the goal of this paper is to (approximately) compute the Nash Equilibrium. The proposed method simulates the game by iteratively updating the best response, and the main contribution is an algorithm to approximate the gradient of the utility function with respect to the resource allocation (over the space). In particular, the paper proposes to decompose the gradient into two parts and estimate each part by discretization. " ]
Resource allocation for coverage of physical spaces is a challenging problem in robotic surveillance, mobile sensor networks and security domains. Recent gradient-based optimization approaches to this problem estimate utilities of actions by using neural networks to learn a differentiable approximation to spatial coverage objectives. In this work, we empirically show that spatial coverage objectives with multiple resources are combinatorially hard to approximate for neural networks and lead to sub-optimal policies. As our major contribution, we propose a tractable framework to approximate a general class of spatial coverage objectives and their gradients using a combination of the Newton-Leibniz theorem, spatial discretization and implicit boundary differentiation. We empirically demonstrate the efficacy of our proposed framework on single- and multi-agent spatial coverage problems.
[]
[ { "authors": [ "Kareem Amin", "Satinder Singh", "Michael P Wellman" ], "title": "Gradient methods for stackelberg security games", "venue": "In UAI,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Soheil Behnezhad", "Mahsa Derakhshan", "Mohammadtaghi Hajiaghayi", "Saeed Seddighin" ], "title": "Spatiotemporal games beyond one dimension", "venue": "In Proceedings of the 2018 ACM Conference on Economics and Computation,", "year": 2018 }, { "authors": [ "Alireza Dirafzoon", "Mohammad Bagher Menhaj", "Ahmad Afshar" ], "title": "Decentralized coverage control for multi-agent systems with nonlinear dynamics", "venue": "IEICE TRANSACTIONS on Information and Systems,", "year": 2011 }, { "authors": [ "Jiarui Gan", "Bo An", "Yevgeniy Vorobeychik", "Brian Gauch" ], "title": "Security games on a plane", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "William Haskell", "Debarun Kar", "Fei Fang", "Milind Tambe", "Sam Cheung", "Elizabeth Denicola" ], "title": "Robust protection of fisheries with compass", "venue": "IAAI,", "year": 2014 }, { "authors": [ "Andrew Howard", "Maja J Matarić", "Gaurav S Sukhatme" ], "title": "Mobile sensor network deployment using potential fields: A distributed, scalable solution to the area coverage problem", "venue": "In Distributed Autonomous Robotic Systems", "year": 2002 }, { "authors": [ "Taoan Huang", "Weiran Shen", "David Zeng", "Tianyu Gu", "Rohit Singh", "Fei Fang" ], "title": "Green security game with community engagement", "venue": "arXiv preprint arXiv:2002.09126,", "year": 2020 }, { "authors": [ "Matthew P. Johnson", "Fei Fang", "Milind Tambe" ], "title": "Patrol strategies to maximize pristine forest area", "venue": "In AAAI,", "year": 2012 }, { "authors": [ "Nitin Kamra", "Umang Gupta", "Fei Fang", "Yan Liu", "Milind Tambe" ], "title": "Policy learning for continuous space security games using neural networks", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Nitin Kamra", "Umang Gupta", "Kai Wang", "Fei Fang", "Yan Liu", "Milind Tambe" ], "title": "Deep fictitious play for games with continuous action spaces", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS),", "year": 2019 }, { "authors": [ "Nitin Kamra", "Umang Gupta", "Kai Wang", "Fei Fang", "Yan Liu", "Milind Tambe" ], "title": "Deepfp for finding nash equilibrium in continuous action spaces. 
In Decision and Game Theory for Security (GameSec)", "venue": null, "year": 2019 }, { "authors": [ "Christopher Kiekintveld", "Manish Jain", "Jason Tsai", "James Pita", "Fernando Ordóñez", "Milind Tambe" ], "title": "Computing optimal randomized resource allocations for massive security games", "venue": "In AAMAS, pp", "year": 2009 }, { "authors": [ "Chan Sze Kong", "New Ai Peng", "Ioannis Rekleitis" ], "title": "Distributed coverage with multi-robot system", "venue": "In Proceedings 2006 IEEE International Conference on Robotics and Automation,", "year": 2006 }, { "authors": [ "Marc Lanctot", "Vinicius Zambaldi", "Audrunas Gruslys", "Angeliki Lazaridou", "Karl Tuyls", "Julien Pérolat", "David Silver", "Thore Graepel" ], "title": "A unified game-theoretic approach to multiagent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Qian Long", "Zihan Zhou", "Abhibav Gupta", "Fei Fang", "Yi Wu", "Xiaolong Wang" ], "title": "Evolutionary population curriculum for scaling multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:2003.10423,", "year": 2020 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ali Nasri Nazif", "Alireza Davoodi", "Philippe Pasquier" ], "title": "Multi-agent area coverage using a single query roadmap: A swarm intelligence approach", "venue": "In Advances in practical multi-agent systems,", "year": 2010 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Huy Xuan Pham", "Hung Manh La", "David Feil-Seifer", "Aria Nefian" ], "title": "Cooperative and distributed reinforcement learning of drones for field coverage", "venue": "arXiv preprint arXiv:1803.07250,", "year": 2018 }, { "authors": [ "Alessandro Renzaglia", "Lefteris Doitsidis", "Agostino Martinelli", "Elias B Kosmatopoulos" ], "title": "Multirobot three-dimensional coverage of unknown areas", "venue": "The International Journal of Robotics Research,", "year": 2012 }, { "authors": [ "Martin Saska", "Jan Chudoba", "Libor Přeučil", "Justin Thomas", "Giuseppe Loianno", "Adam Třešňák", "Vojtěch Vonásek", "Vijay Kumar" ], "title": "Autonomous deployment of swarms of micro-aerial vehicles in cooperative surveillance", "venue": "In 2014 International Conference on Unmanned Aircraft Systems (ICUAS),", "year": 2014 }, { "authors": [ "Milind Tambe" ], "title": "Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned", "venue": null, "year": 2011 }, { "authors": [ "Yufei Wang", "Zheyuan Ryan Shi", "Lantao Yu", "Yi Wu", "Rohit Singh", "Lucas Joppa", "Fei Fang" ], "title": "Deep reinforcement learning for green security games with real-time information", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Rong Yang", "Benjamin Ford", "Milind Tambe", "Andrew Lemieux" ], "title": "Adaptive resource allocation for wildlife protection against illegal poachers", "venue": "In AAMAS,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Allocation of multiple resources for efficient spatial coverage is an important component of many practical single-agent and multi-agent systems, for e.g., robotic surveillance, mobile sensor networks and security game modeling. Surveillance tasks generally involve a single agent assigning resources e.g. drones or sensors, each of which can monitor physical areas, to various points in a target domain such that a loss function associated with coverage of the domain is minimized (Renzaglia et al., 2012). Alternatively, security domains follow a leader-follower game setup between two agents, where a defender defends a set of targets (or a continuous target density in a geographical area) with limited resources to be placed, while an attacker plans an attack after observing the defender’s placement strategy using its own resources (Tambe, 2011).\nTraditional methods used to solve single-agent multi-resource surveillance problems often rely on potential fields (Howard et al., 2002), discretization based approaches (Kong et al., 2006), voronoi tessellations (Dirafzoon et al., 2011) and particle swarm optimization (Nazif et al., 2010; Saska et al., 2014). Similarly, many exact and approximate approaches have been proposed to maximize the defender’s expected utility in two-agent multi-resource security domains against a best responding attacker (Kiekintveld et al., 2009; Amin et al., 2016; Yang et al., 2014; Haskell et al., 2014; Johnson et al., 2012; Huang et al., 2020). Notably, most existing traditional approaches focus on exploiting some specific spatio-temporal or symmetry structure of the domain being examined.\nRelated Work: Since spatial coverage problems feature continuous action spaces, a common technique used across most previous works is to discretize the area to be covered into grid cells and restrict the agents’ actions to discrete sets (Kong et al., 2006; Yang et al., 2014; Haskell et al., 2014; Gan et al., 2017) to find the equilibrium mixed strategies or optimal pure strategies using integer linear programming. However, discretization quickly becomes intractable when the number of each agent’s resources grows large. While some games can be characterized by succinct agent strategies and can be solved efficiently via mathematical programming after discretizing the agents’ actions spaces (Behnezhad et al., 2018), this is not true for most multi-resource games.\nRecent works in spatial coverage domains have focused on incorporating advances from deep learning to solve the coverage problems with more general algorithms. For instance, Pham et al. (2018) focus on the multi-UAV coverage of a field of interest using a model-free multi-agent RL method while StackGrad (Amin et al., 2016), OptGradFP (Kamra et al., 2018), PSRO (Lanctot et al., 2017) are model-free fictitious play based algorithms which can be used to solve games in continuous action spaces. However model-free approaches are sample inefficient and require many interactions with the domain (or with a simulator) to infer expected utilities of agents’ actions. Secondly, they often rely\non the policy gradients to compute the derivative of the agents’ expected utilities w.r.t. 
their mixed strategies, which induces a high variance in the estimate.\nTo alleviate these issues, more recent works take an actor-critic based approach (Lowe et al., 2017), which additionally learns a differentiable approximation to the agents' utilities (Kamra et al., 2019a; Wang et al., 2019) and calculates gradients of strategies w.r.t. the utilities. But this requires learning accurate reward/value functions, which becomes combinatorially hard for multi-resource coverage.\nContributions: To address the above challenge, we present a framework to tractably approximate a general class of spatial coverage objectives and their gradients via spatial discretization, without having to learn neural-network-based reward models. We only discretize the target domain to represent integrals and all set operations over it, but not the action spaces of the agents. Hence we mitigate the intractability caused by discretizing the high-dimensional action spaces of agents with a large number of resources, while also keeping the agents' actions amenable to gradient-based optimization. By combining our framework with existing solution methods, we successfully solve both single-agent and adversarial two-agent multi-resource spatial coverage problems." }, { "heading": "2 MULTI-RESOURCE SPATIAL COVERAGE PROBLEMS", "text": "In this section, we formally introduce notation and definitions for multi-resource allocation problems, along with two example applications which will be used for evaluation.\nMulti-agent multi-resource spatial coverage: Spatial coverage problems comprise a target space $Q \subset \mathbb{R}^d$ (generally $d \in \{2, 3\}$) and a set of agents (or players) $P$, with each agent $p \in P$ having $m_p$ resources. We will use the notation $-p$ to denote all agents except $p$, i.e. $P \setminus \{p\}$. Actions: An action $u_p \in \mathbb{R}^{m_p \times d_p}$ for agent $p$ is the placement of all its resources in an appropriate coordinate system of dimension $d_p$. Let $U_p$ denote the compact, continuous and convex action set of agent $p$. Mixed strategies: We represent a mixed strategy, i.e. the probability density of agent $p$ over its action set $U_p$, as $\sigma_p(u_p) \ge 0$ s.t. $\int_{U_p} \sigma_p(u_p) \, du_p = 1$. We denote agent $p$ sampling an action $u_p \in U_p$ from its mixed strategy density as $u_p \sim \sigma_p$. Joints: Joint actions, action sets and densities for all agents together are represented as $u = \{u_p\}_{p \in P}$, $U = \times_{p \in P} U_p$ and $\sigma = \{\sigma_p\}_{p \in P}$ respectively. Coverage: When placed, each resource covers (often probabilistically) some part of the target space $Q$. Let $\mathrm{cvg}_p : Q \times U \to \mathbb{R}$ be a function denoting the utility for agent $p$ coming from a target point $q \in Q$ due to a joint action $u$ of all agents. We do not assume a specific form for the coverage utility $\mathrm{cvg}_p$ and leave it to be defined flexibly, to allow many different coverage applications to be amenable to our framework. Rewards: Due to the joint action $u$, each player achieves a coverage reward $r_p : U \to \mathbb{R}$ of the form $r_p(u) = \int_Q \mathrm{cvg}_p(q, u) \, \mathrm{imp}_p(q) \, dq$, where $\mathrm{imp}_p(q)$ denotes the importance of the target point $q$ for agent $p$. With a joint mixed strategy $\sigma$, player $p$ achieves expected utility $\mathbb{E}_{u \sim \sigma}[r_p] = \int_U r_p(u) \sigma(u) \, du$. Objectives: In single-agent settings, the agent would directly optimize its expected utility w.r.t. action $u_p$. But in multi-agent settings, the expected utilities of agents depend on other agents' actions and hence cannot be maximized with a deterministic resource allocation, due to potential exploitation by other agents. Instead, agents aim to achieve Nash equilibrium mixed strategies $\sigma = \{\sigma_p\}_{p \in P}$ over their action spaces.
Nash equilibria: A joint mixed strategy $\sigma^* = \{\sigma_p^*\}_{p \in P}$ is said to be a Nash equilibrium if no agent can increase its expected utility by changing its strategy while the other agents stick to their current strategies.\nTwo-player settings: While our proposed framework does not restrict the number of agents or the utility structure of the game, we will focus on single-player settings and zero-sum two-player games in the subsequent examples. An additional concept required by fictitious play in two-player settings is that of a best response. A best response of agent $p$ against strategy $\sigma_{-p}$ is an action which maximizes its expected utility against $\sigma_{-p}$:\n$br_p(\sigma_{-p}) \in \arg\max_{u_p} \{ \mathbb{E}_{u_{-p} \sim \sigma_{-p}}[r_p(u_p, u_{-p})] \}$.\nThe expected utility of any best response of agent $p$ is called the exploitability of agent $-p$:\n$\epsilon_{-p}(\sigma_{-p}) := \max_{u_p} \{ \mathbb{E}_{u_{-p} \sim \sigma_{-p}}[r_p(u_p, u_{-p})] \}$.\nNotably, a Nash equilibrium mixed strategy for each player is also their least exploitable strategy.\nExample 1 (Single-agent Areal Surveillance). A single agent, namely the defender (D), allocates $m$ areal drones, with the $i$-th drone $D_i$ having three-dimensional coordinates $u_{D,i} = (p_{D,i}, h_{D,i}) \in [-1, 1]^2 \times [0, 1]$, to surveil a two-dimensional forest $Q \subset [-1, 1]^2$ of arbitrary shape and with a known but arbitrary tree density $\rho(q)$. Consequently, $u_D \in \mathbb{R}^{m \times 3}$. Each drone has a downward-looking camera with a circular lens and with a half-angle $\theta$, such that at position $(p_{D,i}, h_{D,i})$ the drone $D_i$ sees the set of points $S_{D,i} = \{q \mid \|q - p_{D,i}\|_2 \le h_{D,i} \tan\theta\}$. A visualization of this problem with $m = 2$ drones is shown for a sample forest in Figure 1a. We assume a probabilistic model of coverage, with a point $q$ being covered by drone $D_i$ with probability $P_H(h_{D,i}) = e^{K(h_{opt} - h_{D,i})} \left( \frac{h_{D,i}}{h_{opt}} \right)^{K h_{opt}}$ if $q \in S_{D,i}$ and 0 otherwise. With multiple drones, the probability of a point $q$ being covered can then be written as $\mathrm{cvg}(q, u_D) = 1 - \prod_{i \mid q \in S_{D,i}} \bar{P}_H(h_{D,i})$, where $\bar{P}_H$ stands for $1 - P_H$. Hence, the reward function to be maximized is $r_{D,1p}(u_D) = \int_Q \big( 1 - \prod_{i \mid q \in S_{D,i}} \bar{P}_H(h_{D,i}) \big) \rho(q) \, dq$, with the tree density $\rho(q)$ being the importance of target point $q$ (the subscript $1p$ denotes one agent).\nExample 2 (Two-agent Adversarial Coverage). Two agents, namely the defender D and the attacker A, compete in a zero-sum game. The defender allocates $m$ areal drones with the same coverage model as in Example 1. The attacker controls $n$ lumberjacks, each with ground coordinates $u_{A,j} \in [-1, 1]^2$, to chop trees in the forest $Q$. Consequently, $u_A \in \mathbb{R}^{n \times 2}$. Each lumberjack chops a constant fraction $\kappa$ of trees in a radius $R_L$ around its coordinates $u_{A,j}$. We denote the area covered by the $j$-th lumberjack as $S_{A,j} = \{q \mid \|q - p_{A,j}\|_2 \le R_L\}$. A visualization of this problem with $m = n = 2$ is shown for a sample forest in Figure 1b. A drone can potentially catch a lumberjack if its field of view overlaps with the chopping area. For a given resource allocation $u = (u_D, u_A)$, we define $I_j = \{i \mid \|p_{A,j} - p_{D,i}\|_2 \le R_L + h_{D,i} \tan\theta\}$ as the set of all drones which overlap with the $j$-th lumberjack. The areal overlap $\alpha_{ij} = \int_{S_{D,i} \cap S_{A,j}} dq$ controls the probability of the $j$-th lumberjack being caught by the $i$-th drone: $P_C(h_{D,i}, \alpha_{ij}) = P_H(h_{D,i}) P_A(\alpha_{ij})$, where $P_H$ is the same as in Example 1 and captures the effect of the drone's height on the quality of coverage, while $P_A(\alpha_{ij}) = 1 - \exp\big( -\frac{K_a \alpha_{ij}}{\pi R_L^2} \big)$ captures the effect of areal overlap on the probability of being caught. Hence, the reward achieved by the $j$-th lumberjack can be computed as $r_{A,j}(u_D, u_{A,j}) = \kappa \int_{S_{A,j} \cap Q} \rho(q) \, dq$ with probability $\prod_{i \in I_j} \bar{P}_C(h_{D,i}, \alpha_{ij})$, and $-\kappa \int_{S_{A,j} \cap Q} \rho(q) \, dq$ otherwise, i.e.
the number of trees chopped if the $j$-th lumberjack is not caught by any drone, or an equivalent negative penalty if it is caught. Hence, the total agent rewards are $r_{A,2p}(u_D, u_A) = -r_{D,2p}(u_D, u_A) = \sum_j r_{A,j}(u_D, u_{A,j})$ (the subscript $2p$ denotes two agents).\nNote that in the above examples drones provide the best probabilistic coverage at a height $h_{opt}$. By increasing their height, a larger area can be covered at the cost of a deterioration in coverage probability. Further, the defender can increase the coverage probability for regions with high tree density by placing multiple drones to oversee them, in which case the drones can potentially stay at higher altitudes too. Example 2 further adds interactions due to overlaps between the defender's and attacker's resources1. Hence, these examples form a challenging set of evaluation domains with multiple trade-offs and complex possibilities of coverage, involving combinatorial interactions between the players' resources. For both examples, we use the following constants: $\theta = \pi/6$, $h_{opt} = 0.2$, $K = 4.0$, $R_L = 0.1$, $K_a = 3.0$, $\kappa = 0.1$. However, note that these values only serve as practical representative values. The techniques that we introduce in this paper are not specific to the above probabilistic capture models or specific values of game constants, but rather apply to a broad class of coverage problems where the agents act by placing resources with finite coverage fields and the agents' rewards are of the form $r_p(u) = \int_Q f_p(u, q) \, dq$.\nDealing with zero gradients: In the two-agent game, the attacker's reward depends on the locations of its resources, but the defender's reward solely depends on overlaps with the attacker's resources. In the absence of such overlap, the gradient of $r_{D,2p}$ w.r.t. $u_{D,i}$ becomes 0. Hence, we propose to use the reward from the one-agent game as an intrinsic reward for the defender, similar to how RL algorithms employ intrinsic rewards when extrinsic rewards are sparse (Pathak et al., 2017). Then the reward function for the defender becomes $\tilde{r}_{D,2p}(u_D, u_A) = r_{D,2p}(u_D, u_A) + \mu r_{D,1p}(u_D)$. We use a small $\mu = 0.001$ so as not to cause significant deviation from the zero-sum structure of the game, and yet provide a non-zero gradient to guide the defender's resources in the absence of gradients from $r_{D,2p}$.\n1In reality, lumberjacks might act independently of each other and lack knowledge of each other's plans. By allowing them to be placed via a single attacker and letting them collude, we tackle a more challenging problem and ensure that not all of them get caught by independently going to strongly covered forest regions." }, { "heading": "3 METHODS", "text": "Solution approaches: The key idea of all solution approaches is to obtain a differentiable approximation to the expected utility of the agents and then maximize it w.r.t. the agents' actions (or mixed strategies). For single-agent games, this boils down to performing direct gradient ascent on a differentiable approximation to $r_D(u_D)$, thereby converging to a (locally) optimal value of $u_D$. For two-agent adversarial games, DeepFP (Kamra et al., 2019b), an actor-critic style approach based on fictitious play, can be used. Briefly summarized in Algorithm 1, it obtains a differentiable approximation to the reward functions $\tilde{r}_{D,2p}$ and $r_{A,2p}$, creates an empty memory to store a non-parametric representation of the agents' mixed strategies $\sigma = (\sigma_D, \sigma_A)$ and initializes best responses for both agents randomly [lines 1-3].
Then it alternately updates: (a) the agents' strategies, by storing the current best responses in the memory [line 5], and (b) the best responses, by maximizing each agent $p$'s differentiable reward function against a batch of samples drawn from the other agent's strategy $\sigma_{-p}$ [lines 6-8]. Details of the DeepFP hyperparameters used can be found in section A.6 in the appendix. The key component required in both cases is a differentiable approximation to the reward function, and we propose a tractable framework for this challenging task in the subsequent sub-sections.\nMitigating sub-optimal local best responses: During our preliminary experiments with DeepFP, we observed that updating best responses using purely gradient-based optimization can often get stuck in sub-optimal local optima. While DeepFP maintains stochastic best responses to alleviate this issue, it doesn't eliminate it completely. We briefly describe our solution to this issue here (please see section A.4 in the appendix for a more elaborate discussion of the issue and details of the proposed solution). Motivated by Long et al. (2020), we propose a simple population-based approach wherein we maintain a set of $K$ deterministic best responses $br_p^k(\sigma_{-p})$, for $p \in \{D, A\}$ and $\forall k \in [K]$. During the best response optimization step for agent $p$ [lines 6-8], we optimize the $K$ best responses independently and play the one which exploits agent $-p$ the most. After the optimization step, the top $K/2$ best responses are retained while the bottom half are discarded and freshly initialized with random placements for the next iteration. This allows retention and further refinement of the current best responses over subsequent iterations, while discarding and replacing the ones stuck in sub-optimal local minima.
Algorithm 1: DeepFP
Result: Final strategies $\sigma_D, \sigma_A$ in mem
1 Obtain a differentiable approximation $\hat{r} = (\hat{r}_D, \hat{r}_A)$ to the reward functions $(\tilde{r}_{D,2p}, r_{A,2p})$;
2 Initialize best responses $(br_D, br_A)$ randomly;
3 Create empty memory mem to store $\sigma = (\sigma_D, \sigma_A)$;
4 for game $\in \{1, \dots, \text{max games}\}$ do
    /* Update strategies */
5     Update $\sigma$ by storing the best responses $\{br_D, br_A\}$ in mem;
    /* Update best responses */
6     for agent $p \in \{D, A\}$ do
7         Draw samples $\{u_{-p}^i\}_{i=1:bs}$ from $\sigma_{-p}$ in mem;
8         $br_p := \arg\max_{u_p} \frac{1}{bs} \sum_{i=1}^{bs} \hat{r}_p(u_p, u_{-p}^i)$;" }, { "heading": "3.1 DIFFERENTIABLE APPROXIMATION FOR COVERAGE OBJECTIVES", "text": "First, we propose a method to approximate coverage objectives and their gradients w.r.t. the agents' actions. Consider an objective of the form:\n$r(u) = \int_Q f(u, q) \, dq \quad (1)$\nwhere $u$ denotes the actions of one or more agents having multiple resources to place, and $q$ is any point in the target domain $Q$. We assume that the action $u$ has $m$ components, with $u_i$ representing the location of the $i$-th resource ($i \in [m]$) and $u_{\setminus i}$ representing the locations of all resources other than $i$. Note that the $\mathrm{imp}(q)$ function has been subsumed into $f(u, q)$ in this formulation. We are interested in computing the gradient $\partial r / \partial u_i$. However, this is a hard problem since: (a) $r(u)$ involves integration over arbitrary (non-convex shaped) target domains, which does not admit a closed-form expression in terms of elementary functions and hence cannot be differentiated with autograd libraries
q given a fixed u especially at the coverage boundaries induced by the resources’ coordinates, for e.g., drones have a circular probabilistic coverage area governed by their height and camera half-angle θ, outside which the coverage probability suddenly drops to zero. Theorem 1. Let the objective function be as shown in eq 1: r(u) = ∫ Q f(u, q) dq. Denoting the set of points covered by the i-th resource as Si, the interior of a set with in(·) and the boundary with δ(·), the gradient of r(u) w.r.t. the i-th resource’s location ui is given by:\n∂r(u)\n∂ui = ∫ in(Q∩Si) ∂f(u, q) ∂ui dq + ∫ Q∩δSi ( f(u, q)− f(u\\i, q) )∂qQ∩δSi ∂ui T nqQ∩δSi dq (2)\nProof. While function f can be potentially discontinuous in q across resources’ coverage boundaries, r(u) integrates over q ∈ Q thereby removing the discontinuities. Hence, instead of directly taking the derivative w.r.t. a particular resource’s location ui inside the integral sign, we split the integral into two parts - over the i-th resource’s coverage area Si and outside it:\nr(u) = ∫ Q∩Si f(u, q) dq + ∫ Q\\Si f(u, q) dq (3)\nSplitting the integral at the boundary of the discontinuity allows us to explicitly capture the effect of a small change in ui on this boundary. Denoting the interior of a set with in(·) and the boundary with δ(·), the derivative w.r.t. ui can be expressed using the Newton-Leibniz formula as:\n∂r(u)\n∂ui = ∫ in(Q∩Si) ∂f(u, q) ∂ui dq + ∫ δ(Q∩Si) f(u, q) ∂qδ(Q∩Si) ∂ui T nqδ(Q∩Si) dq\n+ ∫ in(Q\\Si) ∂f(u\\i, q) ∂ui dq + ∫ δ(Q\\Si) f(u\\i, q) ∂qδ(Q\\Si) ∂ui T nqδ(Q\\Si) dq,\n(4)\nwhere ∂qδ(Q∩Si)\n∂ui denotes the boundary velocity for δ(Q ∩ Si) and nqδ(Q∩Si) denotes the unit-vector\nnormal to a point q on the boundary δ(Q ∩ Si) (similarly for δ(Q\\Si)). Since f(u\\i, q) does not depend on ui, we can set ∂f(u\\i,q)\n∂ui = 0. Next observe that the boundaries can be further decomposed\nas: δ(Q ∩ Si) = (δQ ∩ Si) ∪ (Q ∩ δSi) and similarly δ(Q\\Si) = (δQ\\Si) ∪ (Q ∩ δSi). However since ui does not change the boundary of the target domain δQ, we have:\n∂qδQ∩Si ∂ui = 0, ∀q ∈ δQ ∩ Si (5)\n∂qδQ\\Si ∂ui = 0, ∀q ∈ δQ\\Si (6)\nFurther on the boundary of Si, the following unit-vectors normal to the boundary are oppositely aligned:\nnqδ(Q\\Si) = −nqδ(Q∩Si) ∀q ∈ Q ∩ δSi. (7)\nSubstituting the above results, we can simplify the gradient expression in eq 4 to:\n∂r(u)\n∂ui = ∫ in(Q∩Si) ∂f(u, q) ∂ui dq + ∫ Q∩δSi ( f(u, q)− f(u\\i, q) )∂qQ∩δSi ∂ui T nqQ∩δSi dq (8)\nThe first term in eq 2 corresponds to the change in f inside the coverage area of resource i due to a small change in ui, while the second term elegantly factors-in the effects of movement or shape change of the coverage area boundary due to changes in ui (e.g. when a drone moves or elevates in height). While we show the general result here, the term ∂qQ∩δSi∂ui T nqQ∩δSi can be simplified further using implicit differentiation of the boundary of Si, which depends on the particular game under consideration. We show the simplification for our example domains in section A.2 in the appendix." }, { "heading": "3.2 DISCRETIZATION BASED APPROXIMATIONS", "text": "While we now have a general form for r(u) and ∂r∂u , both forms comprise of non closed-form integrals over the target domain Q or its subsets. While evaluating r and ∂r∂u in practice, we adopt a discretization based approach to approximate the integrals. Given a target domain Q ⊂ Rd with d ∈ {2, 3}, we discretize the full Rd space into B1, . . . , Bd bins respectively in each of the d dimensions. 
Approximating spatial maps: All spatial maps, i.e. functions over the target domain $Q$ (e.g. $f(u, q)$), are internally represented as real tensors of dimension $d$ with size $(B_1, \dots, B_d)$. Approximating sets: All geometric shapes (or sets of points), including $S_i$ for all resources (e.g., the circular coverage areas of drones and lumberjacks) and the target domain $Q$ itself (e.g., the irregularly shaped forest), are converted to binary tensors, each of dimension $d+1$ with size $(B_1, \dots, B_d, 3)$. The final dimension of length 3 denotes the interior, boundary and exterior of the geometric shape respectively, i.e. a binary tensor $T$ has $T_{b_1, \dots, b_d, 0} = 1$ if the bin at index $(b_1, \dots, b_d)$ is inside the geometric shape, $T_{b_1, \dots, b_d, 1} = 1$ if the bin is on the boundary of the geometric shape, and $T_{b_1, \dots, b_d, 2} = 1$ if the bin is outside the geometric shape. Approximating operators: Performing the above discretization requires an efficient function for computing the binary tensors associated with the $in(\cdot)$ and $\delta(\cdot)$ operators. This is performed by our efficient divide-and-conquer shape discretizer, which is presented in section A.3 due to space constraints. The other set operations are approximated as follows: (a) set intersections are performed by element-wise binary tensor products, and (b) integrals of spatial maps over geometric sets are approximated by multiplying (i.e. masking) the real tensor corresponding to the spatial map with the binary tensor corresponding to the geometric set, followed by an across-dimension sum over the appropriate set of axes.\nWhile the number of discretized bins growing exponentially with the dimension $d$ of the target domain may come off as a limitation, our method still scales well for most real-world coverage problems, since they reside on two- or three-dimensional target domains. Further, unlike previous methods which discretize the target domain and simultaneously restrict the agents' actions to discrete bins (Yang et al., 2014; Haskell et al., 2014), we do not discretize the actions $u$ of the agents. Hence, we do not run into the intractability induced by discretizing the high-dimensional actions of agents owning multiple resources, and we keep $u$ amenable to gradient-based optimization. Our proposed framework acts as an autograd module for $r(u)$, differentiable w.r.t. the input $u$, and provides both the forward and the backward calls (i.e. evaluation and gradients)." }, { "heading": "4 EXPERIMENTS", "text": "In our experiments on both application domains, we differentiably approximate rewards using the following variants: (a) feedforward neural networks [nn], (b) graph neural networks [gnn], and (c) our differentiable coverage approximation [diff]. For the nn and gnn baselines, we trained neural networks, one per forest and per value of $m$ (and $n$ for two-agent games), to predict the reward of the defender (and of the attacker in the case of the two-agent game). The neural networks take as input the action $u_D$ of the defender (and also $u_A$ for the two-agent game) and output a prediction for the reward $\hat{r}_{D,1p}$ ($\hat{r}_{D,2p}$ and $\hat{r}_{A,2p}$ for the two-agent game). Please see section A.6 in the appendix for network architectures and hyperparameters. We also represent best responses with the following variants: (a) stochastic best response nets [brnet], as originally done by DeepFP, and (b) our deterministic evolutionary population [popK], with $K$ being the population size. We use $d = 2$ dimensional forests and discretize them into $B_1 = B_2 = 200$ bins per dimension, for a total of 40K bins.
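To make the forward call of this discretized framework concrete, here is a minimal NumPy sketch of how the single-agent reward $r_{D,1p}(u_D)$ from Example 1 can be evaluated with masked sums over the 200x200 grid. This is our own illustrative code (the names, the simple interior-only disk masks and the bin layout are our assumptions, not the authors' implementation), and it omits the boundary tensors and implicit boundary differentiation needed for the gradient terms in Theorem 1.

```python
import numpy as np

B1, B2 = 200, 200  # bins per dimension over [-1, 1]^2
xs, ys = np.meshgrid(np.linspace(-1, 1, B1), np.linspace(-1, 1, B2), indexing="ij")

def disk_interior(center, radius):
    """Binary tensor marking bins inside a circular coverage area S_i."""
    return ((xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2).astype(np.float32)

def reward_1p(u_d, forest, rho, h_opt=0.2, K=4.0, theta=np.pi / 6):
    """Masked-sum approximation of r_D,1p(u_D) = int_Q (1 - prod_i PbarH) rho dq.

    u_d:    iterable of (px, py, h) drone coordinates.
    forest: binary tensor marking the target domain Q (the forest interior).
    rho:    real tensor holding the tree-density spatial map.
    """
    not_covered = np.ones((B1, B2), dtype=np.float32)
    for (px, py, h) in u_d:
        p_h = np.exp(K * (h_opt - h)) * (h / h_opt) ** (K * h_opt)  # P_H(h)
        s_i = disk_interior((px, py), h * np.tan(theta))
        not_covered *= 1.0 - p_h * s_i  # PbarH inside S_i, 1 outside
    cell_area = (2.0 / B1) * (2.0 / B2)
    # Integral over Q: mask the coverage map with the forest tensor, then sum.
    return float(np.sum((1.0 - not_covered) * rho * forest) * cell_area)
```

Note that naively autodiffing through such hard binary masks would miss the boundary terms of Theorem 1; this is precisely why the framework computes those terms explicitly via the boundary tensors.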
}, { "heading": "4.1 RESULTS ON AREAL SURVEILLANCE DOMAIN", "text": "We maximized differentiable approximations of r̂D,1p using all three methods: nn, gnn and diff for different values of m ∈ {1, 2, 4} over 5 different forest instances differing in shape and tree density. The maximum true reward rD,1p achieved by the three methods in all cases averaged over all the forest instances is summarized in Table 1. It is clear that diff always achieves the maximum true reward. While the difference difference from nn and gnn is less pronounced for m = 1, as the number of agent resources increases beyond 1, the approximation quality of nn and gnn deteriorates and the difference becomes very significant. This is also reflected in the plots of true reward achieved vs training iterations shown in Figure 2. Since diff is an unbiased approximator of the true reward2, the true reward continues to increase till convergence for diff. For nn and gnn, the true reward increases initially but eventually goes down as the defender action uD begins to overfit the biased and potentially inaccurate approximations made by nn and gnn3. Figure 3 shows the final locations computed for a randomly chosen forest and with m = 2 for all three methods." }, { "heading": "4.2 RESULTS ON ADVERSARIAL COVERAGE GAME", "text": "We implemented different variants of DeepFP with variations of differentiable reward models in {nn, gnn, diff} along with variations of best responses in {brnet, pop4}. We measured the exploitability D(σD) of the defender strategy found by all methods to compare them against each other. To compute the exploitability of the defender strategy found by any variant of DeepFP, we froze the defender strategy σD and directly maximized EuD∼σD [r̂A(uD, uA)] w.r.t. uA with r̂A being approximated by diff. This is a single-agent objective and can be directly maximized with gradient ascent. We perform 30 independent maximization runs to avoid reporting local maxima and report the best of them as the exploitability. Note that nash equilibrium strategies are the least exploitable strategies, hence the lower the value of D(σD) found, the closer σD is to the nash equilibrium strategy.\n2The only bias in diff is the discretization bin sizes, which can be made arbitrarily small in principle. 3Please see section A.1 in the appendix for a detailed analysis of this phenomenon.\nTable 2 shows the exploitability values for different variants of DeepFP. We observe that the exploitability when best responses are approximated by a population-based variant with K = 4 is always lower than that of stochastic best response networks employed by original DeepFP. Further, with few agent resources m = n = 1, the exploitability across diff, nn and gnn is nearly similar but the disparity increases for larger number of agent resources and diff dominates over nn and gnn with less exploitable defender strategies. Notably, the original DeepFP (nn + brnet) is heavily exploitable while our proposed variant (diff + popK) is the least exploitable. In Figure 4, we show a visualization of the points sampled from the defender and attacker’s strategies for m = n = 2 case on the same forest from Figure 3a. The visualization confirms that diff + popK covers the dense core of the forest with the defender’s drones so the attacking lumberjacks attack only the regions surrounding the dense core, while nn + brnet drones often gets stuck and concentrated in a small region thereby allowing lumberjacks to exploit the remaining dense forest. 
Please also see section A.5 in the appendix, which explores the trade-offs in the choice of the population size $K$." }, { "heading": "5 CONCLUSION", "text": "In this work, we show that spatial coverage objectives with multiple resources are combinatorially hard to approximate with neural networks. We propose to directly approximate a large class of multi-agent multi-resource spatial coverage objectives and their gradients tractably, without learning neural-network-based reward models. By augmenting existing approaches with our spatial-discretization-based approximation framework, we show improved performance on both single-agent and adversarial two-agent multi-resource spatial coverage problems." } ]
2020
null
SP:5d4084ca5f3570dfd854aa399f2778e0b649f862
[ "The paper focuses on soft-constrained RL techniques and proposes a meta-gradient approach for the same. It first extends the RCPO (Tessler et al) algorithm using the methodology of DDPG (Lillicarp et al) to propose an off-policy version of RCPO (called RC-D4PG). The main contribution of the work is the proposal of two new meta-gradients based algorithms for the soft-constrained RL problem that are able to find a good trade-off between constraint violation and maximizing returns. The first proposed algorithm - Meta-L - is based on a meta-learning based adaptive update rule for the Lagrange multiplier's learning rate. The second algorithm is based on similar principles but instead focuses on adapting the reward-shaping update in a meta manner. The author's show the empirical evidence of their method's strengths on a bunch of continuous control based simulator tasks. " ]
Deploying Reinforcement Learning (RL) agents to solve real-world applications often requires satisfying complex system constraints. Often the constraint thresholds are incorrectly set due to the complex nature of a system or the inability to verify the thresholds offline (e.g., no simulator or reasonable offline evaluation procedure exists). This results in solutions where a task cannot be solved without violating the constraints. However, in many real-world cases, constraint violations are undesirable yet not catastrophic, motivating the need for soft-constrained RL approaches. We present a soft-constrained RL approach that utilizes meta-gradients to find a good trade-off between expected return and minimizing constraint violations. We demonstrate the effectiveness of this approach by showing that it consistently outperforms the baselines across four different Mujoco domains.
[ { "affiliations": [], "name": "META-GRADIENT D4PG" }, { "affiliations": [], "name": "Dan A. Calian" }, { "affiliations": [], "name": "Daniel J. Mankowitz" }, { "affiliations": [], "name": "Tom Zahavy" }, { "affiliations": [], "name": "Zhongwen Xu" }, { "affiliations": [], "name": "Junhyuk Oh" }, { "affiliations": [], "name": "Nir Levine" } ]
[ { "authors": [ "Abbas Abdolmaleki", "Jost Tobias Springenberg", "Jonas Degrave", "Steven Bohez", "Yuval Tassa", "Dan Belov", "Nicolas Heess", "Martin A. Riedmiller" ], "title": "Relative entropy regularized policy", "venue": "iteration. CoRR,", "year": 2018 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov decision processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Gabriel Barth-Maron", "Matthew W Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva Tb", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": "arXiv preprint arXiv:1804.08617,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Steven Bohez", "Abbas Abdolmaleki", "Michael Neunert", "Jonas Buchli", "Nicolas Heess", "Raia Hadsell" ], "title": "Value constrained model-free continuous control", "venue": null, "year": 1902 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Gabriel Dulac-Arnold", "Daniel J. Mankowitz", "Todd Hester" ], "title": "Challenges of real-world reinforcement learning", "venue": null, "year": 1904 }, { "authors": [ "Gabriel Dulac-Arnold", "Nir Levine", "Daniel J Mankowitz", "Jerry Li", "Cosmin Paduraru", "Sven Gowal", "Todd Hester" ], "title": "An empirical investigation of the challenges of real-world reinforcement learning", "venue": "arXiv preprint arXiv:2003.11881,", "year": 2020 }, { "authors": [ "Gabriel Dulac-Arnold", "Nir Levine", "Daniel J. Mankowitz", "Jerry Li", "Cosmin Paduraru", "Sven Gowal", "Todd Hester" ], "title": "An empirical investigation of the challenges of real-world reinforcement learning, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Yonathan Efroni", "Shie Mannor", "Matteo Pirotta" ], "title": "Exploration-exploitation in constrained mdps", "venue": null, "year": 2020 }, { "authors": [ "Luca Franceschi", "Michele Donini", "Paolo Frasconi", "Massimiliano Pontil" ], "title": "Forward and reverse gradient-based hyperparameter optimization, 2017", "venue": null, "year": 2017 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. 
Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Santiago Paternain", "Luiz Chamon", "Miguel Calvo-Fullana", "Alejandro Ribeiro" ], "title": "Constrained reinforcement learning has zero duality gap", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking safe exploration in deep reinforcement learning", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "Harsh Satija", "Philip Amortila", "Joelle Pineau" ], "title": "Constrained markov decision processes via backward value functions", "venue": "arXiv preprint arXiv:2008.11811,", "year": 2020 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton", "Yutian Chen", "Timothy Lillicrap", "Fan Hui", "Laurent Sifre", "George van den Driessche", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of Go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Ankur Sinha", "Pekka Malo", "Kalyanmoy Deb" ], "title": "A review on bilevel optimization: from classical to evolutionary approaches and applications", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Chen Tessler", "Shahar Givony", "Tom Zahavy", "Daniel J Mankowitz", "Shie Mannor" ], "title": "A deep hierarchical approach to lifelong learning in minecraft", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Chen Tessler", "Daniel J Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "arXiv preprint arXiv:1805.11074,", "year": 2018 }, { "authors": [ "Philip S Thomas", "Bruno Castro da Silva", "Andrew G Barto", "Emma Brunskill" ], "title": "On ensuring that intelligent machines are well-behaved", "venue": "arXiv preprint arXiv:1708.05448,", "year": 2017 }, { "authors": [ "Vivek Veeriah", "Matteo Hessel", "Zhongwen Xu", "Richard Lewis", "Janarthanan Rajendran", "Junhyuk Oh", "Hado van Hasselt", "David Silver", "Satinder Singh" ], "title": "Discovery of useful questions as auxiliary tasks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Zhongwen Xu", "Hado van Hasselt", "David Silver" ], "title": "Meta-Gradient Reinforcement Learning", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Kenny Young", "Baoxiang Wang", "Matthew E. 
Taylor" ], "title": "Metatrace actor-critic: Online step-size tuning by meta-gradient descent for reinforcement learning control, 2018", "venue": null, "year": 2018 }, { "authors": [ "Tom Zahavy", "Zhongwen Xu", "Vivek Veeriah", "Matteo Hessel", "Junhyuk Oh", "Hado van Hasselt", "David Silver", "Satinder Singh" ], "title": "Self-Tuning Deep Reinforcement Learning", "venue": null, "year": 2020 }, { "authors": [ "Ruiyi Zhang", "Tong Yu", "Yilin Shen", "Hongxia Jin", "Changyou Chen", "Lawrence Carin" ], "title": "Reward constrained interactive recommendation with natural language feedback, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zeyu Zheng", "Junhyuk Oh", "Satinder Singh" ], "title": "On learning intrinsic rewards for policy gradient methods", "venue": "NeurIPS,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement Learning (RL) algorithms typically try to maximize an expected return objective (Sutton & Barto, 2018). This approach has led to numerous successes in a variety of domains which include board-games (Silver et al., 2017), computer games (Mnih et al., 2015; Tessler et al., 2017) and robotics (Abdolmaleki et al., 2018). However, formulating real-world problems with only an expected return objective is often sub-optimal when tackling many applied problems ranging from recommendation systems to physical control systems which may include robots, self-driving cars and even aerospace technologies. In many of these domains there are a variety of challenges preventing RL from being utilized as the algorithmic solution framework. Recently, Dulac-Arnold et al. (2019) presented nine challenges that need to be solved to enable RL algorithms to be utilized in real-world products and systems. One of those challenges is handling constraints. All of the above domains may include one or more constraints related to cost, wear-and-tear, or safety, to name a few.\nHard and Soft Constraints: There are two types of constraints that are encountered in constrained optimization problems; namely hard-constraints and soft-constraints (Boyd & Vandenberghe, 2004). Hard constraints are pairs of pre-specified functions and thresholds that require the functions, when evaluated on the solution, to respect the thresholds. As such, these constraints may limit the feasible solution set. Soft constraints are similar to hard constraints in the sense that they are defined by pairs of pre-specified functions and thresholds, however, a soft constraint does not require the solution to hold the constraint; instead, it penalizes the objective function (according to a specified rule) if the solution violates the constraint (Boyd & Vandenberghe, 2004; Thomas et al., 2017).\nMotivating Soft-Constraints: In real-world products and systems, there are many examples of soft-constraints; that is, constraints that can be violated, where the violated behaviour is undesirable but not catastrophic (Thomas et al., 2017; Dulac-Arnold et al., 2020b). One concrete example is that of energy minimization in physical control systems. Here, the system may wish to reduce the amount of energy used by setting a soft-constraint. Violating the constraint is inefficient, but not catastrophic to the system completing the task. In fact, there may be desirable characteristics that can only be attained if there are some constraint violations (e.g., a smoother/faster control policy). Another common setting is where it is unclear how to set a threshold. In many instances, a product\n* indicates equal contribution.\nmanager may desire to increase the level of performance on a particular product metric A, while ensuring that another metric B on the same product does not drop by ‘approximately X%’. The value ‘X’ is often inaccurate and may not be feasible in many cases. In both of these settings, violating the threshold is undesirable, yet does not have catastrophic consequences.\nLagrange Optimization: In the RL paradigm, a number of approaches have been developed to incorporate hard constraints into the overall problem formulation (Altman, 1999; Tessler et al., 2018; Efroni et al., 2020; Achiam et al., 2017; Bohez et al., 2019; Chow et al., 2018; Paternain et al., 2019; Zhang et al., 2020; Efroni et al., 2020). 
One popular approach is to model the problem as a Constrained Markov Decision Process (CMDP) (Altman, 1999). In this case, one method is to solve the following problem formulation: max_π J_R^π s.t. J_C^π ≤ β, where π is a policy, J_R^π is the expected return, J_C^π is the expected cost and β is a constraint violation threshold. This is often solved by performing alternating optimization on the unconstrained Lagrangian relaxation of the original problem (e.g. Tessler et al. (2018)), defined as: min_{λ≥0} max_π J_R^π + λ(β − J_C^π). The updates alternate between learning the policy π and the Lagrange multiplier λ.
In many previous constrained RL works (Achiam et al., 2017; Tessler et al., 2018; Ray et al., 2019; Satija et al., 2020), because the problem is formulated with hard constraints, there are some domains in each case where a feasible solution is not found. This could be due to approximation errors, noise, or the constraints themselves being infeasible. The real-world applications, along with empirical constrained RL research results, further motivate the need to develop a soft-constrained RL optimization approach. Ideally, in this setup, we would like an algorithm that satisfies the constraints while solving the task by maximizing the objective. If the constraints cannot be satisfied, then this algorithm finds a good trade-off (that is, minimizing constraint violations while solving the task by maximizing the objective).
In this paper, we extend the constrained RL Lagrange formulation to perform soft-constrained optimization by formulating the constrained RL objective as a nested optimization problem (Sinha et al., 2017) using meta-gradients. We propose MetaL, which utilizes meta-gradients (Xu et al., 2018; Zahavy et al., 2020) to improve upon the trade-off between reducing constraint violations and improving expected return. We focus on Distributed Distributional Deterministic Policy Gradients (D4PG) (Barth-Maron et al., 2018), a state-of-the-art continuous control RL algorithm, as the underlying algorithmic framework. We show that MetaL can capture an improved trade-off between expected return and constraint violations compared to the baseline approaches. We also introduce a second approach called MeSh that utilizes meta-gradients by adding additional representation power to the reward shaping function. Our main contributions are as follows: (1) We extend D4PG to handle constraints by adapting it to Reward Constrained Policy Optimization (RCPO) (Tessler et al., 2018), yielding Reward Constrained D4PG (RC-D4PG); (2) We present a soft constrained meta-gradient technique: Meta-Gradients for the Lagrange multiplier learning rate (MetaL)1; (3) We derive the meta-gradient update for MetaL (Theorem 1); (4) We perform extensive experiments and investigative studies to showcase the properties of this algorithm. MetaL outperforms the baseline algorithms across domains, safety coefficients and thresholds from the Real World RL suite (Dulac-Arnold et al., 2020b).
1This is also the first time meta-gradients have been applied to an algorithm with an experience replay." }, { "heading": "2 BACKGROUND", "text": "A Constrained Markov Decision Process (CMDP) is an extension of an MDP (Sutton & Barto, 2018) and consists of the tuple ⟨S, A, P, R, C, γ⟩, where S is the state space; A is the action space; P : S × A → S is a function mapping states and actions to a distribution over next states; R : S × A → R is a bounded reward function; C : S × A → R^K is a K-dimensional function representing immediate penalties (or costs) relating to K constraints; and γ ∈ [0, 1) is the discount factor. The solution to a CMDP is a policy π : S → A, which is a mapping from states to a probability distribution over actions.
This policy aims to maximize the expected return J_R^π = E[Σ_{t=0}^∞ γ^t r_t] and satisfy the constraints J_{C_i}^π = E[Σ_{t=0}^∞ γ^t c_{i,t}] ≤ β_i, i = 1 . . . K. For the purpose of the paper, we consider a single constraint, that is, K = 1, but this can easily be extended to multiple constraints.
Meta-Gradients is an approach to optimizing hyperparameters such as the discount factor, learning rates, etc. by performing online cross-validation while simultaneously optimizing for the overall RL optimization objective such as the expected return (Xu et al., 2018; Zahavy et al., 2020). The goal is to optimize both an inner loss and an outer loss. The update of the θ parameters on the inner loss is defined as θ′ = θ + f(τ, θ, η), where θ ∈ R^d corresponds to the parameters of the policy π_θ(a|s) and the value function v_θ(s) (if applicable). The function f : R^k → R^d is the gradient of the policy and/or value function with respect to the parameters θ; it is a function of an n-step trajectory τ = ⟨s_1, a_1, r_2, s_2, . . . , s_n⟩ and meta-parameters η, is weighted by a learning rate α, and is defined as f(τ, θ, η) = α dJ_obj^{π_θ}(θ, τ, η)/dθ, where J_obj^{π_θ}(θ, τ, η) is the objective being optimized with respect to θ. The idea is to then evaluate the performance of this new parameter value θ′ on an outer loss, the meta-gradient objective. We define this objective as J′(τ′, θ′, η̄), where τ′ is a new trajectory, θ′ are the updated parameters and η̄ is a fixed meta-parameter (which needs to be selected/tuned in practice). We then need to take the gradient of the objective J′ with respect to the meta-parameters η to yield the outer loss update η′ = η + α_η ∂J′(τ′, θ′, η̄)/∂η. This gradient is computed as follows: ∂J′(τ′, θ′, η̄)/∂η = (∂J′(τ′, θ′, η̄)/∂θ′)(∂θ′/∂η). The outer loss is essentially the objective we are trying to optimize. This could be a policy gradient loss, a temporal difference loss, a combination of the two, etc. (Xu et al., 2018; Zahavy et al., 2020). Meta-gradients have been previously used to learn intrinsic rewards for policy gradients (Zheng et al., 2018) and auxiliary tasks (Veeriah et al., 2019). Meta-gradients have also been used to adapt optimizer parameters (Young et al., 2018; Franceschi et al., 2017). In our setup, we consider the continuous control setting, provide the first implementation of meta-gradients for an algorithm that uses an experience replay, and focus on adapting meta-parameters that encourage soft constraint satisfaction while maximizing expected return.
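To make the inner/outer structure above concrete, the following is a minimal Python sketch of repeated meta-gradient steps on toy scalar quadratic losses; the losses, values and names are illustrative assumptions only, not the objectives used in this paper.

# Toy stand-ins for the inner (training) and outer (validation) objectives.
def inner_grad(theta):   # dL_train/dtheta for L_train(theta) = (theta - 1)^2
    return 2.0 * (theta - 1.0)

def outer_grad(theta):   # dJ'/dtheta for J'(theta) = (theta - 2)^2
    return 2.0 * (theta - 2.0)

theta, eta, alpha_eta = 0.0, 0.1, 0.01
for _ in range(100):
    g = inner_grad(theta)
    theta_new = theta - eta * g                # inner update: theta' = theta + f(tau, theta, eta)
    # chain rule: dJ'/deta = (dJ'/dtheta') * (dtheta'/deta), with dtheta'/deta = -g
    meta_grad = outer_grad(theta_new) * (-g)
    eta = eta - alpha_eta * meta_grad          # outer update of the meta-parameter
    theta = theta_new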
D4PG is a state-of-the-art continuous control RL algorithm with a deterministic policy (Barth-Maron et al., 2018). It is an incremental improvement to DDPG (Lillicrap et al., 2015). The overall objective of DDPG is to maximize J(θ_a, θ_c) = E[Q_{θ_c}(s, a)|s = s_t, a = π_{θ_a}(s_t)], where π_{θ_a}(s_t) is a deterministic policy with parameters θ_a and Q_{θ_c}(s, a) is an action value function with parameters θ_c. The actor loss is defined as: L_actor = ‖SG(∇_a Q_{θ_c}(s_t, a_t)|_{a_t=π_{θ_a}(s_t)} + a_{θ_a,t}) − a_{θ_a,t}‖², where SG is a stop gradient and a_{θ_a,t} = π_{θ_a}(s_t). The corresponding gradient update is defined as ∇_{θ_a} J(θ_a) = E[∇_a Q_{θ_c}(s, a) ∇_{θ_a} π_{θ_a}(s_t)]. The critic is updated using the standard temporal difference error loss: L_critic = (r(s, a) + γ Q_T(s′, π_T(s′)) − Q_{θ_c}(s, a))², where Q_T, π_T are the target critic and actor networks respectively. In D4PG, the critic is a distributional critic based on the C51 algorithm (Bellemare et al., 2017) and the agent is run in a distributed setup with multiple actors executed in parallel, n-step returns and prioritized experience replay. We will use the non-distributional critic update in our notation for ease of visualization and clarity for the reader2.
2This can easily be extended to include the distributional critic." }, { "heading": "3 REWARD CONSTRAINED D4PG (RC-D4PG)", "text": "This section describes our modifications required to transform D4PG into Reward Constrained D4PG (RC-D4PG) such that it maximizes the expected return and satisfies constraints.
The constrained optimisation objective is defined as: max_{π_θ} J_R^{π_θ} subject to J_C^{π_θ} ≤ β, where J_R^{π_θ} = E[Q(s, a)|s = s_t, a = π_θ(s_t)] and J_C^{π_θ} = E[C(s, a)|s = s_t, a = π_θ(s_t)]; the parameter θ = ⟨θ_a, θ_c⟩ from here on in; C(s, a) is a long-term penalty value function (e.g., the sum of discounted immediate penalties) corresponding to constraint violations. The Lagrangian relaxation objective is defined as J_R^{π_θ} + λ(β − J_C^{π_θ}). As in RCPO, a proxy objective J_R^{π_θ} − λJ_C^{π_θ} is used that converges to the same set of locally optimal solutions as the relaxed objective (Tessler et al., 2018). Note that the constant β does not affect the policy improvement step and is only used for the Lagrange multiplier loss update. To optimize the proxy objective with D4PG, reward shaping of the form r(s, a) − λc(s, a) is required, yielding the reward shaped critic loss defined as: L_critic(θ_c, λ) = (r(s, a) − λc(s, a) + γ Q_T(s′, π_T(s′)) − Q_{θ_c}(s, a))². The actor loss is defined as before. The Lagrange loss is defined as: L_lagrange(λ) = λ(β − J_C^{π_θ}), where λ ≥ 0. Since RC-D4PG is off-policy, it requires storing the per time-step penalties, c, inside the transitions stored in the experience replay buffer (ER). For training the Lagrange multiplier λ, an additional penalty buffer is used to store the per-episode penalties J_C^{π_θ}. The learner then reads from this penalty buffer for updating the Lagrange multiplier. RC-D4PG updates the actor/critic parameters and the Lagrange multipliers using alternating optimization. The full algorithm for this setup can be found in the Appendix, Algorithm 3.
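As a minimal illustration of the two RC-D4PG ingredients above, the shaped critic target and the projected Lagrange multiplier step can be written as follows (a sketch with our own variable names, not the full distributed agent):

def shaped_td_target(r, c, q_target_next, lam, gamma=0.99):
    # Reward-shaped critic target: r(s,a) - lam*c(s,a) + gamma*Q_T(s', pi_T(s'))
    return r - lam * c + gamma * q_target_next

def lagrange_step(lam, episode_penalty, beta, lr):
    # Gradient step on L_lagrange(lam) = lam*(beta - J_C), projected onto lam >= 0
    return max(0.0, lam - lr * (beta - episode_penalty))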
" }, { "heading": "4 META-GRADIENTS FOR THE LAGRANGE LEARNING RATE (METAL)", "text": "In this section, we introduce the MetaL algorithm, which extends RC-D4PG to use meta-gradients for adapting the learning rate of the Lagrange multiplier3. The idea is to update the learning rate such that the outer loss (as defined in the next subsection) is minimized. Our intuition is that a learning rate gradient that takes into account the overall task objective and constraint thresholds will lead to improved overall performance.
3We plan on releasing the source code for MetaL in the near future.
Algorithm 1 MetaL
1: Input: penalty c(·), constraint C(·), threshold β, learning rates α_1, α_{θ_a}, α_{θ_c}, α_η, max. number of episodes M
2: Initialize actor, critic parameters θ_a and θ_c, Lagrange multiplier λ = 0, meta-parameter η = α_λ
3: for 1 . . . M do
4: Inner loss:
5: Sample episode penalty J_C^π from the penalty replay buffer
6: λ′ ← [λ − α_1 exp(α_λ)(β − J_C^π)]_+ ▷ Lagrange multiplier update
7: Sample a batch with tuples ⟨s_t, a_t, r_t, c_t⟩_{t=1}^T from ER and split into training/validation sets
8: Accumulate and apply actor and critic updates over training batch T_train by:
9: ∇θ_c = 0, ∇θ_a = 0
10: for t = 1 . . . T_train do
11: R̂_t = r_t − λ′c_t + γ Q̂(λ′, s_{t+1}, a_{t+1} ∼ π_T(s_{t+1}); θ_c)
12: ∇θ_c += α_{θ_c} ∂(R̂_t − Q̂(λ′, s_t, a_t; θ_c))²/∂θ_c ▷ Critic update
13: ∇θ_a += α_{θ_a} E[∇_a Q(s_t, a_t) ∇_{θ_a} π_{θ_a}(s_t)|_{a_t=π(s_t)}] ▷ Actor update
14: θ′_c ← θ_c − (1/T_train) Σ ∇θ_c
15: θ′_a ← θ_a + (1/T_train) Σ ∇θ_a
16: Outer loss: Compute outer loss and meta-gradient update using validation set T_validate:
17: α′_λ ← α_λ − α_η ∂J′(θ′_c(α_λ), λ′(α_λ))/∂α_λ ▷ Meta-parameter update - Theorem 1" }, { "heading": "18: λ ← λ′, θ_a ← θ′_a, θ_c ← θ′_c, α_λ ← α′_λ", "text": "19: return θ_a, θ_c, λ
Meta-parameters, inner and outer losses: The meta-parameter is defined as η = α_λ. The inner loss is composed of three losses: the actor, critic and Lagrange losses respectively. The actor and critic losses are the same as in RC-D4PG. The Lagrange multiplier loss is defined as: L_lagrange(λ) = exp(α_λ) λ(β − J_C^π), where α_λ is the meta-parameter as defined above. The meta-parameter is wrapped inside an exponential function to magnify the effect of α_λ while also ensuring non-negativity of the effective learning rate. The inner loss updates are (θ′_a, θ′_c, λ′) = (θ_a, θ_c, λ) − (f(τ, θ_a, η), f(τ, θ_c, η), f(τ, λ, η)) = (θ_a, θ_c, λ) − (α_{θ_a} dL_actor(θ_a)/dθ_a, α_{θ_c} dL_critic(θ_c, λ)/dθ_c, α_1 dL_lagrange(λ)/dλ), where α_{θ_a}, α_{θ_c}, α_1 are the fixed actor, critic and Lagrange multiplier learning rates respectively. The outer loss is defined as J′(θ′_c(α_λ), λ′(α_λ)) = L_outer = L_critic(θ′_c(α_λ), λ′(α_λ)). We tried different variants of outer losses and found that this loss empirically yielded the best performance; we discuss this in more detail in the experiments section. This is analogous to formulating MetaL as the following nested optimization problem: min_{α_λ} J′(θ(α_λ), λ(α_λ)), s.t. θ, λ ∈ argmin_{θ, λ≥0} {−J_R^{π_θ} − λ(α_λ)(β − J_C^{π_θ})}. We treat the lower level optimization problem as the Lagrange relaxation objective (inner loss). We then treat the upper level optimization as the meta-gradient objective J′(θ(α_λ), λ(α_λ)) (outer loss). This transforms the optimization problem into soft-constrained optimization, since the meta-parameter α_λ guides the learning of the Lagrange multiplier to minimize the outer loss while attempting to find a good trade-off between minimizing constraint violations and maximizing return (inner loss).
As shown in Algorithm 1, the inner loss gradients are computed for λ (line 6), θ_c (line 12) and θ_a (line 13), corresponding to the Lagrange multiplier, critic and actor parameters respectively. The Lagrange multiplier is updated by sampling episode penalties, which are an empirical estimate of J_C^π, from a separate penalty replay buffer (line 5) to compute the gradient update. The updated multiplier is then utilized in the critic inner update (lines 11 and 12) to ensure that the critic parameters are a function of this new updated Lagrange multiplier. The actor and critic parameters are updated using the training batch, and these updated parameters, along with a validation batch, are used to compute the outer loss (line 17). The meta-parameter α_λ is then updated along the gradient of this outer loss with respect to η = α_λ. We next derive the meta-gradient update for α_λ, and present it in the following theorem (see the Appendix, Section A for the full derivation). Intuition for this meta-gradient update is provided in the experiments section.
Theorem 1. MetaL gradient update: Let β ≥ 0 be a pre-defined constraint violation threshold, let the meta-parameter be η = α_λ, and let J_C^{π_θ} = E[C(s, a)|s = s_t, a = π_θ(s_t)] be the discounted constraint violation function; then the meta-gradient update is:
α′_λ ← α_λ − α_η (2δ · c(s, a) · α_1 exp(α_λ) · (β − J_C^{π_θ}) (2α_{θ_c} (∇_{θ′_c} Q_{θ′_c}(s, a))^T ∇_{θ_c} Q_{θ_c}(s, a) + 1)),
where δ is the TD error, α_{θ_c} is the critic learning rate and α_η is the meta-parameter learning rate." }, { "heading": "5 EXPERIMENTS", "text": "The experiments were performed using domains from the Real-World Reinforcement Learning (RWRL) suite4, namely cartpole:swingup, walker:walk, quadruped:walk and humanoid:walk. We will refer to these domains as cartpole, walker, quadruped and humanoid from here on in.
4https://github.com/google-research/realworldrl_suite
We focus on two types of tasks with constraints: (1) solvable constraint tasks, where the task is solved and the constraints can be satisfied; (2) unsolvable constraint tasks, where the task can be solved but the constraints cannot be satisfied. Unsolvable constraint tasks correspond to tasks where the constraint thresholds are incorrectly set and cannot be satisfied, situations which occur in many real-world problems as motivated in the introduction. The specific constraints we focused on for each domain can be found in the Appendix (Section C). The goal is to showcase the soft-constrained performance of MetaL, with respect to reducing constraint violations and maximizing the return, in both of these scenarios (solvable and unsolvable constraint tasks) with respect to the baselines.
The baseline algorithms we focused on for each experiment are D4PG without any constraints, RC-D4PG (i.e., hard constraint satisfaction) and Reward Shaping D4PG (RS-D4PG) (i.e., soft constraint satisfaction). RS-D4PG uses a fixed λ for the duration of training. We compare these baselines to MetaL. Note that D4PG, RC-D4PG and MetaL have no prior information regarding the Lagrange multiplier. RC-D4PG and MetaL attempt to learn a suitable multiplier value from scratch, i.e. the initial Lagrange multiplier value is set to 0.0. In contrast, RS-D4PG has prior information (i.e. it uses a pre-selected fixed Lagrange multiplier).
Experimental Setup: For each domain, the action and observation dimensions are shown in the Appendix, Table 4. The episode length is 1000 steps, and the base reward function is computed within the dm_control suite (Tassa et al., 2018). The upper bound reward for each task is 1000. Each task was trained for 20000 episodes. Each variant of D4PG uses the same network architecture (see the Appendix, Table 5 for more details).
We use different performance metrics to compare overall performance. We track the average episode return (R), but we also define the penalized return: R_penalized = R − ψ · Ψ_{β,C}, which captures the trade-off between achieving optimal performance and satisfying the constraints. Here, R is the average return for the algorithm upon convergence (computed as an average over the previous 100 episodes); ψ is a fixed constant that determines how much to weight the constraint violation penalty. For the purposes of evaluation, we want to penalize algorithms that consistently violate the constraints and therefore set ψ = 1000. Since the upper bound of rewards for each domain is 1000, we are essentially weighing equally attaining high performance and satisfying constraints. Finally, Ψ_{β,C} = max(0, J_C^π − β) is defined as the overshoot. Here β is the constraint violation threshold and defines the allowable average constraint violations per episode; J_C^π is the average constraint violation value per episode upon convergence for a policy π. The overshoot, Ψ_{β,C}, tracks the average constraint violations that are above the allowed constraint violation threshold β.
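For concreteness, the evaluation metric defined above translates directly into code (variable names are ours):

def penalized_return(avg_return, avg_penalty, beta, psi=1000.0):
    # Overshoot: average constraint violations above the allowed threshold beta
    overshoot = max(0.0, avg_penalty - beta)
    # R_penalized = R - psi * overshoot; psi = 1000 weighs raw return and
    # constraint satisfaction equally, since returns are bounded by 1000.
    return avg_return - psi * overshoot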
We investigate each algorithm's performance along a variety of dimensions, which include different constraint violation thresholds (see the Appendix, Table 3 for a list of thresholds used), safety coefficients and domains. The safety coefficient is a flag in the RWRL suite (Dulac-Arnold et al., 2020a). This flag takes values between 0.0 and 1.0. Reducing the value of the flag ensures that more constraint violations occur per domain per episode. As such, we searched over the values {0.05, 0.1, 0.2, 0.3}. These values vary from solvable constraint tasks (e.g., 0.3) to unsolvable constraint tasks (e.g., 0.05). We wanted to see how the algorithms behaved in these extreme scenarios. In addition, we analysed the performance across a variety of different constraint violation thresholds (see Appendix, Table 6). All experiments are averaged across 8 seeds." }, { "heading": "5.1 MAIN RESULTS", "text": "We begin by analyzing the performance of our best variant, MetaL, with different outer losses. Then we analyse the overall performance of all methods, followed by dissecting performance along the dimensions of safety coefficient and domain respectively. Finally, we investigate the derived gradient update for MetaL from Theorem 1 and provide intuition for the algorithm's behaviour.
MetaL outer loss: We wanted to determine whether different outer losses would result in improved overall performance. We used the actor loss (L_actor) and the combination of the actor and critic losses as the outer loss (L_actor + L_critic) and compared them with the original MetaL outer loss (L_critic) as well as the other baselines. Figure 1 shows that using just the actor loss results in the worst performance, while using the critic loss always results in better performance. The best performance is achieved by the original critic-only MetaL outer loss.
There is some intuition for choosing a critic-only outer loss. In MetaL, the critic loss is a function of lambda. As a result, the value of lambda affects the agent's ability to minimize this loss and therefore learn an accurate value function. In D4PG, an accurate value function (i.e., the critic) is crucial for learning a good policy (i.e., the actor). This is because the policy relies on an accurate estimate of the value function to learn good actions that maximize the return (see the D4PG actor loss). This would explain why adding the actor loss to the outer loss does not have much effect on the final quality of the solution. However, removing the critic loss has a significant effect on the overall solution.
Overall performance: We averaged the performance of MetaL across all safety coefficients, thresholds and domains and compared this with the relevant baselines. As seen in Table 1, MetaL outperforms all of the baseline approaches by achieving the best trade-off of minimizing constraint violations and maximizing return5. This includes all of the soft constrained optimization baselines (i.e., RS-D4PG variants), D4PG as well as the hard-constrained optimization algorithm RC-D4PG. It is interesting to analyze this table to see that the best reward shaping variants are (1) RS 0.1, which achieves comparable return, but higher overshoot and therefore lower penalized return; (2) RS 1.0, which attains significantly lower return but lower overshoot, resulting in lower penalized return. D4PG has the highest return, but this results in significantly higher overshoot. While RC-D4PG attains lower overshoot, it also yields significantly lower overall return. We now investigate this performance in more detail by looking at the performance per safety coefficient and per domain.
5MetaL's penalized reward (R_penalized) performance is significantly better than the baselines, with all p-values smaller than 10^-9 using Welch's t-test.
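As a side note on the significance claim in footnote 5, a comparison of this kind can be run with SciPy's Welch's t-test; the per-seed values below are hypothetical placeholders, since per-seed numbers are not reported here:

import numpy as np
from scipy.stats import ttest_ind

metal_runs = np.array([612., 598., 640., 587., 625., 603., 618., 595.])     # 8 seeds (hypothetical)
baseline_runs = np.array([410., 388., 455., 402., 431., 397., 420., 409.])  # 8 seeds (hypothetical)

# equal_var=False selects Welch's t-test (no equal-variance assumption)
t_stat, p_value = ttest_ind(metal_runs, baseline_runs, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")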
Performance as a function of safety coefficient: We analyzed the average performance per safety coefficient, while averaging across all domains and thresholds. As seen in Figure 1 (right), MetaL achieves comparable average return to that of D4PG. In addition, it significantly outperforms both D4PG and RC-D4PG in terms of penalized return. Figure 2 includes the reward shaping baselines. As can be seen in this figure, choosing a different reward shaping value can lead to drastically different performance. This is one of the drawbacks of the RS-D4PG variants. It is possible, however, to find comparable RS variants (e.g., RS 0.1 for the lowest safety coefficient of 0.05). However, as can be seen in Figure 3, for the highest safety coefficient and largest threshold, this RS variant fails completely at the humanoid task, further highlighting the instability of the RS approach. Figure 3, which presents the performance of MetaL and the baselines on the highest safety coefficient and largest threshold (to ensure that the constraint task is solvable), shows that MetaL has comparable performance to RC-D4PG (a hard constrained optimization algorithm). This further highlights the power of MetaL, whereby it can achieve performance comparable to hard constrained optimization algorithms when the constraint task is solvable, and state-of-the-art performance when the constraint task is not solvable.
Performance per domain: When analyzing the performance per domain, averaging across safety coefficients and constraint thresholds, we found that MetaL has significantly better penalized return compared to D4PG and RC-D4PG across the domains. A table of the results can be seen in the Appendix, Figure 7. Note that, as mentioned previously, the RS-D4PG variants fluctuate drastically in performance across domains.
Algorithm behaviour analysis: Since MetaL is a soft-constrained adaptation of RC-D4PG, we next analyze MetaL's gradient update in Theorem 1 to understand why the performance of MetaL differs from that of RC-D4PG in two types of scenarios: (1) solvable and (2) unsolvable constraint tasks. For both scenarios, we investigate the performance on cartpole for a constraint threshold of 0.1156.
6This threshold was chosen as varying the safety coefficient at this threshold yields both solvable and unsolvable constraint tasks, which is important for our analysis.
For (1), we set the safety coefficient to a value of 0.3. The learning curve for this converged setting can be seen in Figure 4 (left). We track 4 different parameters here: the Lagrange multiplier λ (red curve), the mean penalty value J_C^π (orange curve), the meta-parameter α_λ (black curve) and the scaled Lagrange learning rate α_1 · exp(α_λ) (green curve). The threshold β is shown as the blue dotted line. Initially there are many constraint violations. This corresponds to a large difference J_C^π − β (orange curve minus blue dotted line), which appears in the gradient in Theorem 1.
As a result, the meta-parameter α_λ increases in value as seen in the figure, and therefore increases the scaled learning rate to modify the value of λ such that an improved solution can be found. Once J_C^π is satisfying the constraint in expectation (β − J_C^π ≈ 0), the scaled learning rate drops in value due to β − J_C^π being small. This is an attempt by the algorithm to slow down the change in λ since a reasonable solution has been found (see the return for MetaL (green curve) in Figure 4 (right)).
For (2), we set the safety coefficient to a value of 0.05, making the constraint task unsolvable in this domain. The learning curves can be seen in Figure 4 (middle). Even though the constraint task is unsolvable, MetaL still manages to yield a reasonable expected return as seen in Figure 4 (right). This is compared to RC-D4PG, which overfits to satisfying the constraint and, in doing so, results in poor average reward performance. This can be seen in Figure 4 (middle), where RC-D4PG has lower overshoot than MetaL for low safety coefficients. However, this is at the expense of poor expected return and penalized return performance as seen in Figure 4 (left). We will now provide some intuition for MetaL performance and relate it to the α_λ gradient update.
In this setting, there are consistent constraint violations, leading to a large value for J_C^π. At this point an interesting effect occurs. The value of α_λ decreases, as seen in the figure, while it tries to adapt the value of λ to satisfy the constraint. However, as seen in the gradient update, there is an exponential term exp(α_λ) which scales the Lagrange multiplier learning rate. This quickly drives the gradient down to 0, and consequently the scaled Lagrange multiplier learning rate too, as seen in Figure 4 (middle). This causes λ to settle on a value as seen in the figure. At this point the algorithm optimizes for a stable, fixed λ and as a result finds the best trade-off for expected return at this value. In summary, MetaL will maximize the expected return for an 'almost' fixed λ, whereas RC-D4PG will attempt to overfit to satisfying the constraint, resulting in a poor overall solution.
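The saturation effect described above can be replayed in a few lines; the constant penalty level and the steadily decreasing alpha_lam are crude stand-ins for the learned dynamics in Figure 4 (middle), assumed only for illustration:

import math

lam, alpha_lam, alpha1, beta, J_C = 0.0, 0.0, 1e-3, 0.1, 0.4  # persistent violations: J_C > beta
for step in range(500):
    scaled_lr = alpha1 * math.exp(alpha_lam)        # effective Lagrange learning rate
    lam = max(0.0, lam - scaled_lr * (beta - J_C))  # lambda grows while the constraint is violated
    alpha_lam -= 0.05                               # stand-in for the meta-gradient shrinking alpha_lam
# exp(alpha_lam) -> 0, so the scaled learning rate vanishes and lambda freezes,
# letting the agent maximize return for an 'almost' fixed lambda.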
" }, { "heading": "6 DISCUSSION", "text": "In this paper, we presented a soft-constrained RL technique called MetaL that combines meta-gradients and constrained RL to find a good trade-off between minimizing constraint violations and maximizing returns. This approach (1) matches the return and constraint performance of a hard-constrained optimization algorithm (RC-D4PG) on "solvable constraint tasks"; and (2) obtains an improved trade-off between maximizing return and minimizing constraint overshoot on "unsolvable constraint tasks" compared to the baselines. (This includes a hard-constrained RL algorithm whose return simply collapses in such a case.) MetaL achieves this by adapting the learning rate for the Lagrange multiplier update. This acts as a proxy for adapting the Lagrange multiplier itself. By amplifying/dampening the gradient updates to the Lagrange multiplier during training, the agent is able to influence the trade-off between maximizing return and satisfying the constraints to yield the behaviour of (1) and (2). We also implemented a meta-gradient approach called MeSh that scales and offsets the shaped rewards. This approach did not outperform MetaL but is a direction of future work. The algorithm, derived meta-gradient update and a comparison to MetaL can be found in the Appendix, Section B. We show that across safety coefficients, domains and constraint thresholds, MetaL outperforms all of the baseline algorithms. We also derive the meta-gradient updates for MetaL and perform an investigative study where we provide empirical intuition for the derived gradient update that helps explain this meta-gradient variant's performance. We believe the proposed techniques will generalize to other policy gradient algorithms but leave this for future work." } ]
2021
null
SP:2dcfc5ac82356d824b2c4892372c73e678924caa
[ "The authors study the problem of out-of-distribution (OoD) generalization. The key question authors seek to answer is when given access to data from multiple training environments, can one only rely on test accuracy? or does one have to rely on some new measures to estimate the out-of-distribution performance of the model. The authors develop a metric based on influence functions, which authors claim is a better reflection of OoD accuracy than test accuracy. The metric proposed by the authors measures the variance in the model when the data from each environment is upweighted. The authors show that the proposed metric empirically correlates to the OoD performance of the models." ]
The mismatch between training and target data is one major challenge for current machine learning systems. When training data is collected from multiple domains and the target domains include all training domains as well as other new domains, we are facing an Out-of-Distribution (OOD) generalization problem that aims to find a model with the best OOD accuracy. One definition of OOD accuracy is worst-domain accuracy. In general, the set of target domains is unknown, and the worst target domain may be unseen when the number of observed domains is limited. In this paper, we show that the worst accuracy over the observed domains may dramatically fail to identify the OOD accuracy. To this end, we introduce the Influence Function, a classical tool from robust statistics, into the OOD generalization problem and suggest the variance of the influence function as a way to monitor the stability of a model on the training domains. We show that the accuracy on test domains and the proposed index together can help us discern whether OOD algorithms are needed and whether a model achieves good OOD generalization.
[]
[ { "authors": [ "Kartik Ahuja", "Karthikeyan Shanmugam", "Kush Varshney", "Amit Dhurandhar" ], "title": "Invariant risk minimization games", "venue": "arXiv preprint arXiv:2002.04692,", "year": 2020 }, { "authors": [ "Kei Akuzawa", "Yusuke Iwasawa", "Yutaka Matsuo" ], "title": "Domain generalization via invariant representation under domain-class dependency, 2019", "venue": "URL https://openreview.net/forum? id=HJx38iC5KX", "year": 2019 }, { "authors": [ "Ahmed Alaa", "Mihaela Van Der Schaar" ], "title": "Validating causal inference models via influence functions", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "J Andrew Bagnell" ], "title": "Robust supervised learning", "venue": "In Proceedings of the national conference on artificial intelligence,", "year": 1999 }, { "authors": [ "Samyadeep Basu", "Philip Pope", "Soheil Feizi" ], "title": "Influence functions in deep learning are fragile", "venue": "arXiv preprint arXiv:2006.14651,", "year": 2020 }, { "authors": [ "Sara Beery", "Grant Van Horn", "Pietro Perona" ], "title": "Recognition in terra incognita", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Nan Rosemary Ke", "Sebastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Peter Bühlmann" ], "title": "Invariance, causality and robustness", "venue": "Statistical Science,", "year": 2020 }, { "authors": [ "Daniel C Castro", "Ian Walker", "Ben Glocker" ], "title": "Causality matters in medical imaging", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Weiyu Cheng", "Yanyan Shen", "Linpeng Huang", "Yanmin Zhu" ], "title": "Incorporating interpretability into latent factor models via fast influence analysis", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Gilad Cohen", "Guillermo Sapiro", "Raja Giryes" ], "title": "Detecting adversarial samples using influence functions and nearest neighbors", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "R Dennis Cook" ], "title": "Regression graphics: Ideas for studying regressions through graphics, volume 482", "venue": null, "year": 2009 }, { "authors": [ "R Dennis Cook", "Sanford Weisberg" ], "title": "Characterizations of an empirical influence function for detecting influential cases in regression", "venue": null, "year": 1980 }, { "authors": [ "R Dennis Cook", "Bing Li" ], "title": "Dimension reduction for conditional mean in regression", "venue": "The Annals of Statistics,", "year": 2002 }, { "authors": [ "Minghong Fang", "Neil Zhenqiang Gong", "Jia Liu" ], "title": "Influence function based data poisoning attacks to top-n recommender systems", "venue": "In Proceedings of The Web Conference", "year": 2020 }, { "authors": [ "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "In search of lost domain generalization", "venue": "arXiv preprint arXiv:2007.01434,", "year": 2020 }, { "authors": [ "Pang Wei 
Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "arXiv preprint arXiv:1703.04730,", "year": 2017 }, { "authors": [ "Pang Wei W Koh", "Kai-Siang Ang", "Hubert Teo", "Percy S Liang" ], "title": "On the accuracy of influence functions for measuring group effects", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Masanori Koyama", "Shoichiro Yamaguchi" ], "title": "Out-of-distribution generalization with maximal invariant predictor", "venue": "arXiv preprint arXiv:2008.01883,", "year": 2020 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": "arXiv preprint arXiv:2003.00688,", "year": 2020 }, { "authors": [ "Kun Kuang", "Ruoxuan Xiong", "Peng Cui", "Susan Athey", "Bo Li" ], "title": "Stable prediction with model misspecification and agnostic distribution shift", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Deeper, broader and artier domain generalization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Haoliang Li", "Sinno Jialin Pan", "Shiqi Wang", "Alex C Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sara Magliacane", "Thijs van Ommen", "Tom Claassen", "Stephan Bongers", "Philip Versteeg", "Joris M Mooij" ], "title": "Domain adaptation by using causal inference to predict invariant conditional distributions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sudipto Mukherjee", "Himanshu Asnani", "Sreeram Kannan" ], "title": "Ccmi: Classifier based conditional mutual information estimation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "James Robins", "Lingling Li", "Eric Tchetgen", "Aad van der Vaart" ], "title": "Higher order influence functions and minimax estimation of nonlinear functionals. 
In Probability and statistics: essays in honor of David A", "venue": "Institute of Mathematical Statistics,", "year": 2008 }, { "authors": [ "James M Robins", "Lingling Li", "Rajarshi Mukherjee", "Eric Tchetgen Tchetgen", "Aad van der Vaart" ], "title": "Minimax estimation of a functional on a structured high-dimensional model", "venue": "The Annals of Statistics,", "year": 2017 }, { "authors": [ "Mateo Rojas-Carulla", "Bernhard Schölkopf", "Richard Turner", "Jonas Peters" ], "title": "Invariant models for causal transfer learning", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rajat Sen", "Ananda Theertha Suresh", "Karthikeyan Shanmugam", "Alexandros G Dimakis", "Sanjay Shakkottai" ], "title": "Model-powered conditional independence test", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Adarsh Subbaswamy", "Peter Schulam", "Suchi Saria" ], "title": "Preventing failures due to dataset shift: Learning predictive models that transport", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Daniel Ting", "Eric Brochu" ], "title": "Optimal subsampling with influence functions", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Antonio Torralba", "Alexei A Efros" ], "title": "Unbiased look at dataset bias", "venue": null, "year": 2011 }, { "authors": [ "Anastasios Tsiatis" ], "title": "Semiparametric theory and missing data", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Mark J Van der Laan", "MJ Laan", "James M Robins" ], "title": "Unified methods for censored longitudinal data and causality", "venue": "Springer Science & Business Media,", "year": 2003 }, { "authors": [ "Aad W Van der Vaart" ], "title": "Asymptotic statistics, volume 3", "venue": "Cambridge university press,", "year": 2000 }, { "authors": [ "Vladimir Vapnik" ], "title": "Principles of risk minimization for learning theory", "venue": "In Advances in neural information processing systems,", "year": 1992 }, { "authors": [ "Yufei Wang", "Haoliang Li", "Alex C Kot" ], "title": "Heterogeneous domain generalization via domain mixup", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Sewall Wright" ], "title": "Correlation and causation", "venue": "J. agric. Res.,", "year": 1921 }, { "authors": [ "Chuanlong Xie", "Fei Chen", "Yue Liu", "Zhenguo Li" ], "title": "Risk variance penalization: From distributional robustness to causality", "venue": "arXiv preprint arXiv:2006.07544,", "year": 2020 }, { "authors": [ "Minghao Xu", "Jian Zhang", "Bingbing Ni", "Teng Li", "Chengjie Wang", "Qi Tian", "Wenjun Zhang" ], "title": "Adversarial domain adaptation with domain mixup", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Shen Yan", "Huan Song", "Nanxiang Li", "Lincan Zou", "Liu Ren" ], "title": "Improve unsupervised domain adaptation with mixup training", "venue": "arXiv preprint arXiv:2001.00677,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Most machine learning systems assume both training and test data are independently and identically distributed, which does not always hold in practice (Bengio et al. (2019)). Consequently, its performance is often greatly degraded when the test data is from a different domain (distribution). A classical example is the problem to identify cows and camels (Beery et al. (2018)), where the empirical risk minimization (ERM, Vapnik (1992)) may classify images by background color instead of object shape. As a result, when the test domain is “out-of-distribution” (OOD), e.g. when the background color is changed, its performance will drop significantly. The OOD generalization is to obtain a robust predictor against this distribution shift.\nSuppose that we have training data collected from m domains:\nS = {Se : e ∈ Etr, |Etr| = m}, Se = {ze1, ze2, . . . ,zene} with zei ∼ P e, (1)\nwhere P e is the distribution corresponding to domain e, Etr is the set of all available domains, including validation domains, and zei is a data point. The OOD problem we considered is to find a model fOOD such that\nfOOD = arg min f sup P e∈Eall\n`(f, P e), (2)\nwhere Eall is the set of all target domains and `(f, P e) is the expected loss of f on the domain P e. Recent algorithms address this OOD problem by recovering invariant (causal) features and build the optimal model on top of these features, such as Invariant Risk Minimization (IRM, Arjovsky et al. (2019)), Risk Extrapolation (REx, Krueger et al. (2020)), Group Distributionally Robust Optimization (gDRO, Sagawa et al. (2019)) and Inter-domain Mixup (Mixup, Xu et al. (2020); Yan et al. (2020); Wang et al. (2020)). Most works evaluate on Colored MNIST (see 5.1 for details) where we can directly obtain the worst domain accuracy over Eall. Gulrajani & Lopez-Paz (2020) has assembled many algorithms and multi-domain datasets, and finds that OOD algorithms can’t outperform ERM in some domain generalization tasks (Gulrajani & Lopez-Paz (2020)), e.g. VLCS (Torralba & Efros (2011)) and PACS (Li et al. (2017)). This is not surprising, since these tasks only require high performance on certain domains, while an OOD algorithm is expected to learn truly invariant\nfeatures and be excellent on a large set of target domains Eall. This phenomenon is described as “accuracy-vs-invariance trade-off” in Akuzawa et al. (2019).\nTwo questions arise in the min-max problem (2). First, previous works assume that there is sufficient diversity among the domains in Eall. Thus the supremacy of `(f, P e) may be much larger than the average, which implies that ERM may fail to discover fOOD. But in reality, we do not know whether it is true. If not, the distribution of `(f, P e) is concentrated on the expectation of `(f, P e), and ERM is sufficient to find an invariant model for Eall. Therefore, we call for a method to judge whether an OOD algorithm is needed. Second, how to judge a model’s OOD performance? Traditionally, we consider test domains Etest ⊂ Etr and use the worst-domain accuracy over Etest (which we call test accuracy) to approximate the OOD accuracy. However, test accuracy is a biased estimate of the OOD accuracy unless Etr is closed to Eall. More seriously, It may be irrelevant or even negatively correlated to the OOD accuracy. This phenomenon is not uncommon, especially when there are features virtually spurious in Eall but show a strong correlation to the target in Etr. 
We give a toy example in Colored MNIST where the test accuracy fails to approximate the OOD accuracy. For more details, please refer to Section 5.1 and Appendix A.4. We choose three domains from Colored MNIST and use cross-validation (Gulrajani & Lopez-Paz (2020)) to select models, i.e. we take turns selecting a domain S ∈ Etr as the test domain and train on the rest, and select the model with the maximum average test accuracy. Figure 1 shows the comparison between ERM and IRM. One can find that no matter which domain is the test domain, the ERM model uniformly outperforms the IRM model on the test domain. However, the IRM model achieves consistently better OOD accuracy. The shortcomings of the test accuracy here are obvious, regardless of whether cross-validation is used. In short, the naive use of the test accuracy may result in a non-OOD model.
To address this obstacle, we hope to find a metric that correlates better with a model's OOD property, even when Etr is much smaller than Eall and the "worst" domain remains unknown. Without any assumption on Eall, our goal is unrealistic. Therefore, we assume that features that are invariant across Etr should also be invariant across Eall. This assumption is necessary; otherwise, the only thing we can do is to collect more domains. Therefore, we need to focus on what features the model has learnt. Specifically, we want to check whether the model learns invariant features and avoids varying features.
The influence function (Cook & Weisberg (1980)) can serve our purpose. The influence function was proposed to measure the parameter change when a data point is removed or upweighted by a small perturbation (details in 3.2). When adapted to the domain level, it measures the influence of a domain, instead of a data point, on the model. Note that we are not emulating the changes of the parameters when a domain is removed. Instead, we care exactly about upweighting the domain by δ → 0+ (as will be specified later). Based on this, the variance of the influence function allows us to measure the OOD property and resolve the obstacle.
Contributions: We summarize our contributions here. (i) We introduce the influence function at the domain level and propose the index Vγ|θ (formula (6)) based on the influence function of the model fθ. Our index can measure the OOD extent of the available domains, i.e. how different these domains (distributions) are. This measurement provides a basis for whether to adopt an OOD algorithm and whether to collect more diverse domains. See Section 4.1 and Section 5.1.1 for details. (ii) We point out that the proposed index Vγ|θ can remedy the weakness of test accuracy. Specifically, under most OOD generalization problems, using test accuracy and our index together, we can discern the OOD property of a model. See Section 4.2 for details. (iii) We propose to use only a small but important part of the model to calculate the influence function. This overcomes the huge computational cost of inverting the Hessian. It is not merely for calculation efficiency and accuracy; it also coincides with our understanding that only these parameters capture what features a model has learnt (Section 4.3).
We organize our paper as follows: Section 2 reviews related works and Section 3 introduces the preliminaries of OOD methods and influence functions. Section 4 presents our proposal and detailed analysis. Section 5 shows our experiments. The conclusion is given in Section 6."
}, { "heading": "2 RELATED WORK", "text": "The mismatch between the development dataset and the target domain is one major challenge in machine learning (Castro et al. (2020); Kuang et al. (2020)). Many works assume that the ground truth can be represented by a causal Direct Acyclic Graph (DAG), and they use the DAG structure to discuss the worst-domain performance (Rojas-Carulla et al. (2018); Peters et al. (2016); Subbaswamy et al. (2019); Bühlmann et al. (2020); Magliacane et al. (2018)). All these works employ multiple domain data and causal assumptions to discover the parents of the target variable. Rojas-Carulla et al. (2018) and Magliacane et al. (2018) also apply this idea to Domain Generalization and Multi-Task Learning setting. Starting from multiple domain data rather than model assumptions, Arjovsky et al. (2019) proposes Invariant Risk Minimization (IRM) to extract causal (invariant) features and learn invariant optimal predictor on the top of the causal features. It analyzes the generalization properties of IRM from the view of sufficient dimension reduction (Cook (2009); Cook et al. (2002)). Ahuja et al. (2020) considers IRM as finding the Nash equilibrium of an ensemble game among several domains and develops a simple training algorithm. Krueger et al. (2020) derives the Risk Extrapolation (REx) to extract invariant features and further derives a practical objective function via variance penalization. Xie et al. (2020) employs a framework from distributional robustness to interpret the benefit of REx comparing to robust optimization (Ben-Tal et al. (2009); Bagnell (2005)). Besides, Adversarial Domain Adaption (Li et al. (2018); Koyama & Yamaguchi (2020)) uses discriminator to look for features that are independent of domains and uses these features for further prediction.\nInfluence function is a classic method from the robust statistics literature (Robins et al. (2008; 2017); Van der Laan et al. (2003); Tsiatis (2007)). It can be used to track the impact of a training sample on the prediction. Koh & Liang (2017) proposes a second-order optimization technique to approximate the influence function. They verify their method with different assumptions on the empirical risk ranging from being strictly convex and twice-differentiable to non-convex and non-differentiable losses. Koh et al. (2019) also estimates the effect of removing a subgroup of training points via influence function. They find out that the approximation computed by the influence function is correlated with the actual effect. Influence function has been used in many machine learning tasks. Cheng et al. (2019) proposes an explanation method, Fast Influence Analysis, that employs influence function on Latent Factor Model to solve the lack of interpretability of the collaborative filtering approaches for recommender systems. Cohen et al. (2020) uses influence function to detect adversarial attacks. Ting & Brochu (2018) proposes an asymptotically optimal sampling method via an asymptotically linear estimator and the associated influence function. Alaa & Van Der Schaar (2019) develops a model validation procedure that estimates the estimation error of causal inference methods. Besides, Fang et al. (2020) leverages influence function to select a subset of normal users who are influential to the recommendations." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 ERM, IRM AND REX", "text": "In this section, we give some notations and introduce some recent OOD methods. 
Recall the multiple domain setup (1) and the OOD problem (2). For a domain P^e and a hypothetical model f, the population loss is ℓ(f, P^e) = E_{z∼P^e}[L(f, z)], where L(f, z) is the loss function on z. The empirical loss, which is the objective of ERM, is ℓ(f, S) = (1/m) Σ_{e∈Etr} ℓ(f, S^e) with ℓ(f, S^e) = (1/n_e) Σ_{i=1}^{n_e} L(f, z^e_i).
Recent OOD methods propose novel regularized objective functions of the form:
L(f, S) = ℓ(f, S) + λR(f, S) (3)
to discover f_OOD in (2). Here R(f, S) is a regularization term and λ is the tuning parameter which controls the degree of penalty. Note that ERM is a special case obtained by setting λ = 0. For simplicity, we will use L(f, S) to represent the total loss in case of no ambiguity. Arjovsky et al. (2019) focuses on the stability of f_OOD and considers the IRM regularization:
R(f, S) = Σ_{e∈Etr} ‖∇_w ℓ(w · f, S^e)|_{w=1.0}‖² (4)
where w is a scalar and fixed "dummy" classifier. Arjovsky et al. (2019) shows that the scalar fixed classifier w is sufficient to monitor invariance and corresponds to the idealistic IRM problem, which decomposes the entire predictor into a data representation and one shared optimal top classifier for all training domains. On the other hand, Krueger et al. (2020) encourages the uniform performance of f_OOD and proposes the V-REx penalty:
R(f, S) = Σ_{e∈Etr} (ℓ(f, S^e) − ℓ(f, S))².
Krueger et al. (2020) derives the invariant prediction from robustness to spurious features and points out that REx is more robust than group distributional robustness (Sagawa et al. (2019)). In this work, we also decompose the entire predictor into a feature extractor and a classifier on top of the learnt features. As we will see, different from Arjovsky et al. (2019) and Krueger et al. (2020), we directly monitor the invariance of the top model." }, { "heading": "3.2 INFLUENCE FUNCTION AND GROUP EFFECT", "text": "Consider a parametric hypothesis f = fθ and the corresponding solution: θ̂ = arg min_θ L(fθ, S). By a quadratic approximation of L(fθ, S) around θ̂, the influence function takes the form
IF(θ̂, z) = −H_θ̂^{-1} ∇_θ L(fθ̂, z) with H_θ̂ = ∇²_θ L(fθ̂, S).
When the sample size of S is sufficiently large, the parameter change due to removing a data point z can be approximated by −IF(θ̂, z)/Σ_{e∈Etr} |S^e| without retraining the model. Here |S^e| = n_e stands for the cardinality of the set S^e. Furthermore, Koh et al. (2019) shows that the influence function can also predict the effects of large groups of training points (i.e. Z = {z_1, ..., z_k}), even though removing a group causes significant changes in the model. The parameter change due to removing the group can be approximated by
IF(θ̂, Z) = −H_θ̂^{-1} ∇_θ (1/|Z|) Σ_{z∈Z} L(fθ̂, z).
Motivated by the work of Koh et al. (2019), we introduce the influence function to the OOD problem to address our obstacles.
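A minimal NumPy sketch of the group influence above, assuming the Hessian at θ̂ and the per-example loss gradients over the group are available and the Hessian is invertible:

import numpy as np

def group_influence(hessian, group_grads):
    # IF(theta_hat, Z) = -H^{-1} * (1/|Z|) sum_z grad_theta L(f_theta_hat, z)
    # hessian: (d, d) Hessian of the total loss at theta_hat
    # group_grads: (|Z|, d) per-example gradients for the group Z
    mean_grad = group_grads.mean(axis=0)
    return -np.linalg.solve(hessian, mean_grad)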
fixing β = β̂, the change of the top model g caused by upweighting the domain is\nIF(γ̂, S^e | θ̂) := lim_{δ→0⁺} Δγ/δ = −H_γ̂⁻¹ ∇_γ ℓ(f_θ̂, S^e), e ∈ Etr. (5)\nHere H_γ̂ = ∇²_γ L(f_θ̂, S), and we assume L is twice-differentiable in γ. Please see Appendix A.3 for the detailed derivation and for why β should be fixed. For a regularized method, e.g. IRM or REx, the influence of the regularization term is reflected in H and in the learnt model f_θ̂. As mentioned above, IF(γ̂, S^e | θ̂) measures the change of the model caused by upweighting domain e. Therefore, if g(Φ, γ̂) is invariant across domains, the entire model f_θ̂ treats all domains equally. As a result, a small perturbation on different domains should cause the same model change. This leads to our proposal." }, { "heading": "4.2 PROPOSED INDEX AND ITS UTILITY", "text": "On the basis of the domain-level influence function IF(γ̂, S^e | θ̂), we propose our index to measure the fluctuation of the parameter change when different domains are upweighted:\nV_γ̂|θ̂ := ln(‖Cov_{e∈Etr}(IF(γ̂, S^e | θ̂))‖₂). (6)\nHere ‖·‖₂ is the 2-norm for matrices, i.e. the largest eigenvalue of the matrix, Cov_{e∈Etr}(·) refers to the covariance matrix of the domain-level influence function over Etr, and ln(·) is a nonlinear transformation that works well in practice.\nOOD Model Under the OOD problem in (2), a good OOD model should (i) learn invariant and useful features and (ii) avoid spurious and varying features. Learning useful and invariant features means the model should have high accuracy over a set of test domains Etest, no matter which test domain it is. In turn, high accuracy over Etest also means the model truly learns some features useful for the test domains. However, this is not enough, since we do not know whether the useful features are invariant features across Eall or just spurious features on Etest. On the other hand, avoiding varying features means that different domains look the same to the learnt model, so according to the arguments in Section 4.1, V_γ|θ should be small. Combining these, we arrive at our proposal: if a learnt model f_θ̂ manages to simultaneously achieve small V_γ̂|θ̂ and high accuracy over Etest, it should have good OOD accuracy. We prove our proposal in a simple but illuminating case, and we conduct various experiments (Section 5) to support it. Several issues should be clarified. First, not all OOD problems demand that models learn invariant features. For example, the set of all target domains may be so small that the varying features are always strongly correlated with the labels, or the objective may be the mean of the accuracy over Eall rather than the worst-domain accuracy. However, we regard the OOD problem in (2) as a bridge to causal discovery; the set of target domains is therefore large, and such “weak” OOD problems are out of our consideration. To a large extent, invariant features remain the major target, and our proposal remains a good criterion for a model's OOD property. Second, we admit that there is a gap between being stable on Etr (small V_γ|θ) and avoiding all spurious features on Eall. However, to our knowledge, for features that vary on Eall but are invariant on Etr, demanding that a model avoid them is somewhat unrealistic. Therefore, we take a step forward and measure whether the learnt model successfully avoids features that vary across Etr.
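As a concrete illustration (our sketch, not the authors' code), the following PyTorch snippet computes V_{γ|θ} from Eqs. (5)–(6) for a model whose small top layer is exposed as `model.top`. The attribute name, the `loss_fn` argument (which should be the full, possibly regularized, objective so that H matches the method; for ERM it coincides with the empirical loss), and the tiny damping term for numerical stability are all our assumptions.

```python
# Minimal sketch (ours) of the index V_{gamma|theta} in Eq. (6):
# per-domain influence on the top-layer parameters gamma, feature extractor frozen.
import torch

def flat_grad(loss, params, create_graph=False):
    """Flattened gradient of `loss` w.r.t. the top-layer parameters."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph,
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def v_index(model, domains, loss_fn):
    """domains: list of (x, y) tensors, one pair per training domain S^e."""
    gamma = list(model.top.parameters())         # top model g only (assumed attr)
    # Hessian of the total objective w.r.t. gamma (exact: gamma is small).
    total = sum(loss_fn(model(x), y) for x, y in domains) / len(domains)
    g = flat_grad(total, gamma, create_graph=True)
    H = torch.stack([flat_grad(g[i], gamma).detach() for i in range(len(g))])
    H_inv = torch.linalg.inv(H + 1e-6 * torch.eye(len(g)))  # small damping (ours)
    # Domain-level influence functions IF(gamma, S^e | theta), Eq. (5).
    ifs = []
    for x, y in domains:
        ge = flat_grad(loss_fn(model(x), y), gamma).detach()
        ifs.append(-(H_inv @ ge))
    ifs = torch.stack(ifs)                        # (num_domains, dim(gamma))
    cov = torch.cov(ifs.T)                        # covariance over domains
    return torch.log(torch.linalg.matrix_norm(cov, ord=2))
```

Because only the top layer enters the Hessian, the inversion stays cheap even for large feature extractors, which is exactly the point made in Section 4.3 below.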
We leave an index for features that vary over Eall to future work.\nThe Shuffled V_γ|θ As mentioned above, a smaller metric V_γ|θ means stronger stability across Etr and hence should correspond to better OOD accuracy. However, the proposed metric depends on the dataset S and the learnt model f_θ̂, so there is no uniform baseline against which to check whether the metric is “small” enough. To this end, we propose a baseline value of the proposed metric obtained by shuffling the multi-domain data. Consider pooling all data points in S and randomly redistributing them into m new synthetic domains {S̃¹, S̃², ..., S̃ᵐ} := S̃. We compute the shuffled version of V_γ|θ for a learnt model f_θ̂ over the shuffled data S̃:\nṼ_γ̂|θ̂ := ln(‖Cov_{e∈Etr}(IF(γ̂, S̃^e | θ̂))‖₂), (7)\nand denote the standard and shuffled versions of the metric by V_γ̂|θ̂ and Ṽ_γ̂|θ̂, respectively. For any algorithm that obtains a relatively good test accuracy, if V_γ̂|θ̂ is much larger than Ṽ_γ̂|θ̂, then f_θ̂ has learnt features that vary across e ∈ Etr and cannot treat the domains in Etr equally. This implies that f_θ̂ may not be an invariant predictor over Eall. Otherwise, if the two values are similar, the model has avoided the varying features in Etr and may be invariant across Etr: either the model captures the invariance over the diverse domains, or the domains are not diverse at all. Note that\nthis procedure is applicable to any algorithm, hence providing a baseline for judging whether V_γ̂|θ̂ is small. It also yields a way to judge whether an OOD algorithm is needed at all. Consider f_θ̂ learnt by ERM. If V_γ̂|θ̂ is considerably larger than Ṽ_γ̂|θ̂, then ERM fails to avoid varying features; in this case, one should consider an OOD algorithm to achieve better OOD generalization. Otherwise, ERM is enough, and any attempt to achieve better OOD accuracy should start with finding more domains instead of using OOD algorithms. This coincides with the experiments in Gulrajani & Lopez-Paz (2020) (Section 5.2). Our understanding is that the domains in S̃ are similar; therefore, the difference between the shuffled and standard versions of the metric reflects how heavily a learnt model relies on varying features. We show how to use the two versions of V_γ|θ in Section 5.1.1 and Section 5.2." }, { "heading": "4.3 INFLUENCE CALCULATION", "text": "One question surrounds the influence function: how can we efficiently calculate and invert the Hessian? Koh & Liang (2017) suggests conjugate gradients and stochastic estimation to solve this problem. However, when θ̂ is obtained by running SGD, it hardly ever reaches the global minimum. Although adding a damping term (i.e. letting Ĥ_θ̂ = H_θ̂ + λI) can moderately alleviate the problem by turning it into a convex one, for large neural networks with non-linear activation functions like ReLU this may still work poorly, since the damping term required for the transformation is so large that it significantly affects the result. Most importantly, the eigenvalues of the Hessian vary hugely, making the influence function calculation slow to converge and inaccurate (Basu et al. (2020)).\nIn our metric, we circumvent this problem by excluding most of the parameters β and directly calculating the Hessian with respect to γ to obtain an accurate influence function. This modification not only speeds up the calculation but also matches our expectation: that an OOD algorithm should learn invariant features does not mean that the influence function of all parameters should be identical across domains.
For example, if g(Φ) is to extract the same features in different domains, the influence function on Φ(·) should differ across domains. Therefore, if we use all parameters to calculate the influence, the information about learnt features provided by γ is hard to capture, given that γ is insignificant in size compared with β. In contrast, considering only the influence on the top model exposes the influence of different domains at the level of features, enabling us to achieve our goal.\nAs our experiments show, after this modification the influence function calculation can be 2000 times faster, and the utility (correlation with the OOD property) can be even higher. This is not surprising given the huge number of parameters in the embedding model Φ(·): they slow down the calculation and overshadow the top model's influence value." }, { "heading": "5 EXPERIMENT", "text": "In this section, we experimentally show that: (1) a model f_θ̂ attains a small V_γ̂|θ̂ if it has a good OOD property, while a non-OOD model does not; (2) the metric V_γ|θ provides additional information on the stability of a learnt model, which overcomes the weakness of test accuracy; (3) comparing V_γ|θ with Ṽ_γ|θ can tell whether a better OOD algorithm is needed. We consider experiments on a Bayesian network, Colored MNIST, and VLCS. The synthetic data generated by the Bayesian network include domain-dependent noise and fake associations between features and response. For Colored MNIST, we already know that the digit is the causal feature and the color is non-causal; the causal relationships help us determine the worst domain and obtain the OOD accuracy. VLCS is a real dataset, on which we show the utility of V_γ|θ step by step. Due to space limitations, we defer the Bayesian network experiments to the appendix.\nGenerally, cross-validation (Gulrajani & Lopez-Paz (2020)) is used to judge a model's OOD property. In the introduction, we have already shown that leave-one-domain-out cross-validation may fail to discern OOD properties. We also consider two other potential competitors: conditional mutual information and the IRM penalty. The comparison between our metric and these two competitors is postponed to the Appendix.\n5.1 COLORED MNIST\nColored MNIST (Arjovsky et al. (2019)) introduces a synthetic binary classification task. The images are colored according to their label, making color a spurious feature for predicting the label. Specifically, for a domain e, we assign a preliminary binary label ỹ = 1_{digit≤4} and randomly flip ỹ with p = 0.25. Then, we color the image according to ỹ but with a flip rate of p_e. Clearly, when p_e < 0.25 or p_e > 0.75, color is more correlated with ỹ than the real digit. Therefore, the oracle OOD model f_OOD attains accuracy 0.75 in all domains, while an ERM model may attain high training accuracy but a poor OOD property if p_e in the training domains is too small or too large. Throughout the Colored MNIST experiments, we use a three-layer MLP with ReLU activations and hidden dimension 256. Although our MLP model has relatively many parameters and is non-convex due to the activation layers, thanks to the technique in Section 4.3 the influence calculation is still fast and accurate: a single influence computation takes less than 2 seconds." }, { "heading": "5.1.1 IDENTIFY OOD PROBLEM", "text": "In this section, we show that V_γ|θ can discern whether the training domains are sufficiently diverse, as mentioned in Section 4.2.
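Before the diversity experiment, here is a minimal sketch of the Colored MNIST domain construction just described, following Arjovsky et al. (2019). The names `images` and `digits` (raw MNIST tensors) and the two-channel coloring layout are our illustrative choices, not from the paper's released code.

```python
# Sketch (ours) of building one Colored MNIST domain S^e with flip rate p_e.
import torch

def make_domain(images, digits, p_e, label_noise=0.25):
    """images: (N, 28, 28) float tensor; digits: (N,) int tensor of MNIST labels."""
    y_tilde = (digits <= 4).float()                      # preliminary label
    flip = (torch.rand_like(y_tilde) < label_noise).float()
    y = (1 - flip) * y_tilde + flip * (1 - y_tilde)      # flip label with p = 0.25
    cflip = (torch.rand_like(y) < p_e).float()
    color = (1 - cflip) * y + cflip * (1 - y)            # flip color with p_e
    c = color[:, None, None]
    x = torch.stack([images * c, images * (1 - c)], dim=1)  # two color channels
    return x, y   # x: (N, 2, 28, 28)
```

With p_e < 0.25 the color channel is more predictive of the label than the digit itself, which is exactly the spurious correlation the experiments exploit.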
Assume Etr has five training domains with\np_e ∈ {0.2 − 2x, 0.2 − x, 0.2, 0.2 + x, 0.2 + 2x},\nwhere x ∈ [0.0, 0.1] is positively related to the diversity among the training domains. If x is zero, all data points are generated from the same domain (p_e = 0.2), so the learning task on Etr is not an OOD problem. On the contrary, larger x means that the training domains are more diverse. We repeat the procedure 501 times, learning the model with ERM each time. Given the learnt model f_θ̂ and the training data, we compute V_γ̂|θ̂ and check the correlation between V_γ̂|θ̂ and x. Figure 2 presents the results. Our index V_γ|θ is highly related to x: the Pearson coefficient is 0.9869 and the Spearman coefficient is 0.9873. The benchmark value of V_γ|θ learnt on the same training domains (S̃ in Section 4.2) can be derived from the raw data by pooling and redistributing all data points, and we mark it with the black dashed line. If V_γ̂|θ̂ is much higher than this benchmark, indicating that x is not small, an OOD algorithm should be considered whenever better OOD generalization is demanded; otherwise, the present algorithm (such as ERM) is sufficient. The results confirm our expectation that V_γ|θ can discern whether the P^e are different." }, { "heading": "5.1.2 RELATIONSHIP BETWEEN V AND OOD ACCURACY", "text": "In this section, we use an experiment to support our proposal in Section 4.2. As proposed earlier, if a model shows high test accuracy and small V_γ|θ simultaneously, it captures invariant features and avoids varying features, so it deserves to be called an OOD model. In this experiment, we consider models with high test accuracy and show that a smaller V_γ|θ generally corresponds to better OOD accuracy, which supports our proposal.\nConsider two setups: p_e ∈ {0.0, 0.1} and p_e ∈ {0.1, 0.15, 0.2, 0.25, 0.3}. We implement IRM and REx with different penalties (note that ERM corresponds to λ = 0) to check the relationship between V_γ|θ and OOD accuracy. For IRM and REx, we run 190 epochs of pre-training with λ = 1 and use early stopping to prevent over-fitting. With this technique, all models successfully achieve good test accuracy (within 0.1 of the oracle accuracy) and meet our requirement. Figure 3 presents the results. We can see that V_γ|θ is highly correlated with OOD accuracy for IRM and REx, with the absolute value of the Pearson coefficient never falling below 0.8417. Models learnt with larger λ present a better OOD property, learning fewer varying features and showing smaller V_γ|θ. The results are consistent with our proposal, except that when λ is large in IRM, V_γ|θ is slightly unstable. We have\nexamined this phenomenon carefully and found that it is caused by computational instability when inverting a Hessian with eigenvalues very close to 0. This unstable inversion occurs with low probability and can be addressed by repeating the experiment once or twice." }, { "heading": "5.2 DOMAIN GENERALIZATION: VLCS", "text": "In this section, we implement the proposed metric for four algorithms, ERM, gDRO, Mixup and IRM, on the VLCS image dataset, which is widely used for domain generalization. We emulate a real scenario with Eall = {V, L, C, S} and Etr = Eall\\{S}. As mentioned in Gulrajani & Lopez-Paz (2020), we use the “training-domain validation set” method, i.e. we split off a validation set for each domain in Etr, and the test accuracy is defined as the average accuracy over the three validation sets. Note that our goal is to use the test accuracy and V_γ|θ to measure OOD generalization, rather than to tune for SOTA performance on the unseen domain {S}.
Therefore, we do not apply any model selection method and simply use the default hyper-parameters in Gulrajani & Lopez-Paz (2020)." }, { "heading": "5.2.1 STEP 1: TEST ACCURACY COMPARISON", "text": "For each algorithm, we run the plain training process 12 times and report the average test accuracy in Table 1. Before\ncalculating V_γ|θ, the learnt model should at least reach a good test accuracy; otherwise, there is no need to discuss its OOD performance, since OOD accuracy can be no larger than test accuracy. In the table, the test accuracies of ERM, Mixup and gDRO are good, but that of IRM is not. In this case, IRM is eliminated. If an algorithm fails to reach a high test accuracy, one should first change the hyper-parameters until a relatively high test accuracy is observed." }, { "heading": "5.2.2 STEP 2: SHUFFLE AND STANDARD METRIC COMPARISON", "text": "Now we are ready to check whether the learnt models are invariant across Etr. As mentioned in Section 4.2, the difference between V_γ|θ and Ṽ_γ|θ reflects how invariant a model is across Etr. We calculate these values and report the results in Figure 4. For ERM and Mixup, the two values are nearly the same. In this case, we expect the ERM and Mixup models to be invariant and to have relatively high OOD accuracy, so no other algorithm is needed. For\ngDRO, we can clearly see that Ṽ_γ|θ is uniformly smaller than V_γ|θ. Therefore, gDRO models do not treat different domains equally, and hence we predict that their OOD accuracy will be relatively low. In this case, one who starts with gDRO should turn to other algorithms if better OOD performance is demanded.\nNote that in this whole process we know nothing about {S}, so the OOD accuracy is unseen. Nevertheless, from the above analysis we conclude that (1) in this setting, ERM and Mixup are better than gDRO; (2) one who uses gDRO can turn to other algorithms (like Mixup) for better OOD performance; (3) one who uses ERM should consider collecting more environments if they still\nwant to improve OOD performance. This completes the judgement using test accuracy and the proposed metric." }, { "heading": "5.2.3 STEP 3: OOD ACCURACY RESULTS (ORACLE)", "text": "In this step, we fortunately obtain Eall and can check whether our judgement is reasonable; normally, this step would not happen. We show the OOD accuracy of the four algorithms in Table 2. Consistent with our judgement, the ERM and Mixup models achieve higher OOD accuracy\nthan gDRO. The performance of IRM (under these hyper-parameters) is lower than its test accuracy. During the above process, we can also compare the metric of models from the same algorithm but with different hyper-parameters (as in Section 5.1.2). Besides, one may notice that even the highest OOD accuracy is only 63.91%; that is to say, to obtain OOD accuracy larger than 70%, one should consider collecting more environments. In Appendix A.6, we continue this real scenario to see where our metric leads us if Etr is initially more diverse. The full results on VLCS, as well as the comparison of the proposed metric with the IRM penalty in formula (4), can be found in the same appendix. Besides, we show the comparison with Conditional Mutual Information in Appendix A.5. In summary, we used a realistic task to show how to judge the OOD property of a learnt model using the proposed metric together with test accuracy; the judgement coincides well with the real OOD performance."
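To make Steps 1–2 concrete, here is a short sketch of the shuffle baseline from Section 4.2 that this judgement relies on. It reuses the hypothetical `v_index` helper sketched after Section 4.2 above; both names are ours, not the authors'.

```python
# Sketch (ours) of the shuffle baseline: pool all training points, redistribute
# them into m synthetic domains, and recompute the index on the same model.
import torch

def shuffled_v_index(model, domains, loss_fn):
    """domains: list of (x, y) pairs, one per training domain."""
    xs = torch.cat([x for x, _ in domains])
    ys = torch.cat([y for _, y in domains])
    perm = torch.randperm(len(ys))
    chunks = torch.chunk(perm, len(domains))          # m synthetic domains
    shuffled = [(xs[i], ys[i]) for i in chunks]
    return v_index(model, shuffled, loss_fn)

# Judgement rule used in Steps 1-2: if v_index(...) is much larger than
# shuffled_v_index(...), the learnt model relies on features that vary across
# E_tr, and an OOD algorithm (or more domains) should be considered.
```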
}, { "heading": "6 CONCLUSION", "text": "In this paper, we focus on two presently unsolved problems, that how can we discern the OOD property of multiple domains and of learnt models. To this end, we introduce influence function into OOD problem and propose our metric to help solve these issues. Our metric can not only discern whether a multi-domains problem is OOD but can also judge a model’s OOD property when combined with test accuracy. To make our calculation more meaningful, accurate and efficient, we modify influence function to domain-level and propose to use only the top model to calculate the influence. Our method is proved in simple cases and it works well in experiments. We sincerely hope that, with the help of this index, our understanding of OOD generalization will become more and more precise and thorough." }, { "heading": "A APPENDIX", "text": "A.1 SIMPLE BAYESIAN NETWORK\nIn this section, we show that the model with better OOD accuracy achieves smaller Vγ|θ. We assume the data is generated from the following Bayesian network:\nx1 ← N (0, σ2e), y← xe1W1→y +N (0, 1), x2 ← yeWy→2 +N (0, σ2e). (8)\nwhere x1,x2 ∈ R5 are the features, y ∈ R5 is the target vector,W1→y ∈ R5×5 andWy→2 ∈ R5×5 are the underlying parameters that are invariant across domains. The variance of gaussian noise is σ2e that depends on domain. For simplicity, we denote e = σe to represent a domain. The goal here is to linearly regress the response y on the input vector (x1,x2), i.e. ŷ = x1Ŵ1 + x2Ŵ2. According to the Bayesian network (8), x1 is the invariant feature, while the correlation between x2 and y is spurious and unstable since e = σe varies across domains. Clearly, the model based only on x1 is an invariant model. Any invariant estimator should achieve Ŵ1 ≈W1→y and Ŵ2 ≈ 0.\nNow consider five training domains e ∈ Etr = {0.2, 0.7, 1.2, 1.7, 2.2} , each containing 1000 data points. We estimate three linear models using ERM, IRM and REx respectively and record the parameter error as well as Vγ|θ (note that γ is θ here). Table 3 presents the results among 500 repetitions. As expected, IRM and REx learn more invariant relationships than ERM (smaller causal error) and better avoid non-causal variables (Ŵ2 ≈ 0). Furthermore, the proposed measurement Vγ|θ is highly related to invariance, i.e. model with better OOD property achieves smaller Vγ|θ. This results coincides our understanding.\nA.2 PROOF OF AN EXAMPLE\nIn this section, we use a simple model to illuminate the validity of Vγ|θ proposed in Section 4. Consider a structural equation model (Wright (1921)):\nx1 ∼ P ex , y← x1 +N (0, 1), x2 ← y +N (0, σ2e) where P ex is a distribution with a finite second-order moment, i.e. Ex21 < +∞, and σ2e is the variance of the noise term in x2. Both P ex and σ 2 e vary across domains. For simplicity, we assume there are infinite training data points collected from two training domains Etr = {(P 1x , σ21), (P 2x , σ22)}. Our goal is to predict y from x := (x1, x2)> using a least-squares predictor ŷ = x>β̂ := x1β̂1 + x2β̂2. Here we consider two algorithms: ERM and IRM with λ → +∞. According to Arjovsky et al. (2019), using IRM we obtain βIRM → (1, 0)>. Intuitively, ERM will exploit both x1 and x2, thus achieving a better regression model. However, since relationship between y and x2 varies across domains, our index will be huge in such condition. Conversely, βIRM only uses invariant features x1, thus Vγ|θ → −∞. Note that we do not have an embedding model here, so Vγ|θ = Vβ. 
ERM We denote\nℓ(β) = (1/|Etr|) ∑_{e∈Etr} ℓ^e(β) with ℓ^e(β) = E^e(y − xᵀβ)².\nNote that in E^e, x₁ is sampled from P^e_x. We then have\n∂ℓ(β)/∂β = −(2/|Etr|) ∑_{e∈Etr} ( E^e[x₁(y − xᵀβ)] ; E^e[x₂(y − xᵀβ)] ).\nTo proceed further, we denote\nd̄ = (1/|Etr|) ∑_{e∈Etr} E^e x₁², s = ∑_{e∈Etr} σ²_e = σ²₁ + σ²₂.\nBy solving the equations\n(1/|Etr|) ∑_{e∈Etr} E^e[x₁(y − xᵀβ)] = d̄(1 − β₁ − β₂) = 0\nand\n(1/|Etr|) ∑_{e∈Etr} E^e[x₂(y − xᵀβ)] = (d̄ + 1)(1 − β₁ − β₂) + β₁ − (s/|Etr|)β₂ = 0,\nwe obtain β̂ = (β̂₁, β̂₂)ᵀ with\nβ̂₁ = s/(s + 2), β̂₂ = 2/(s + 2).\nNow we calculate our index. It is easy to see that\n∂ℓ^e(β)/∂β₁ = −2E^e[x₁(y − xᵀβ)] = −2E^e x₁² (1 − β₁ − β₂),\n∂ℓ^e(β)/∂β₂ = −2E^e[x₂(y − xᵀβ)] = −2[(E^e x₁² + 1)(1 − β₁ − β₂) + β₁ − σ²_e β₂].\nTherefore,\n∇ℓ¹(β) − ∇ℓ²(β) = (0 ; 2β₂(σ²₁ − σ²₂)) and ∇ℓ¹(β̂) − ∇ℓ²(β̂) = (0 ; 4(σ²₁ − σ²₂)/(s + 2)). (9)\nOn the other hand, calculating the Hessian, we have\nH_ERM = ( 2d̄ , 2d̄ ; 2d̄ , 2d̄ + s + 2 ) and H⁻¹ = (1/(2d̄(s + 2))) ( 2d̄ + s + 2 , −2d̄ ; −2d̄ , 2d̄ ).\nThen we have (note that IF(β̂, S^e) = −H⁻¹∇ℓ^e(β̂))\nV_β̂ = ln(‖Cov_{e∈Etr}(IF(β̂, S^e))‖₂)\n= ln((1/4)‖(IF¹ − IF²)(IF¹ − IF²)ᵀ‖₂)\n= ln((1/4)‖IF¹ − IF²‖²)\n= 2 ln((1/2)‖H⁻¹(∇ℓ¹(β̂) − ∇ℓ²(β̂))‖)\n= 2 ln( (1/(4d̄(s + 2))) ‖ ( 2d̄ + s + 2 , −2d̄ ; −2d̄ , 2d̄ ) (0 ; 4(σ²₁ − σ²₂)/(s + 2)) ‖ )\n= 2 ln( 2√2 |σ²₁ − σ²₂| / (s + 2)² ),\nwhere the third equality holds because the matrix has rank 1. Clearly, when |σ²₁ − σ²₂| → 0 (the two domains become identical), our index V_β → −∞. Otherwise, given σ₁ ≠ σ₂, we have V_β > −∞, showing that ERM captures varying features.\nIRM We now turn to the IRM model and show that V_β → −∞ when λ → +∞, thus proving that the IRM-learnt model β̂_IRM achieves a smaller V_β than the ERM β̂. Under the IRM model with tuning parameter λ, we have\nL(β) = (1/|Etr|) ∑_{e∈Etr} ( E^e[(y − xᵀβ)²] + 4λ‖E^e[xᵀβ(y − xᵀβ)]‖² ).\nThen the gradient with respect to β is\n∇L(β) = (1/|Etr|) ∑_{e∈Etr} ( −2E^e[x(y − xᵀβ)] + 8λ E^e[xᵀβ(y − xᵀβ)] E^e[x(y − 2xᵀβ)] ),\nand the Hessian matrix is\nH = H_ERM + (8λ/|Etr|) ∑_{e∈Etr} ( E^e[x(y − 2xᵀβ)] E^e[x(y − 2xᵀβ)]ᵀ − 2E^e[xᵀβ(y − xᵀβ)] E^e[xxᵀ] ). (10)\nDenote by β_λ the solution of the IRM algorithm on Etr when the penalty is λ. From Arjovsky et al. (2019) we know β_λ → β_IRM := (1, 0)ᵀ. To show lim_{λ→+∞} V_{β_λ} = −∞, we only need to show that\nlim_{λ→+∞} H⁻¹(∇ℓ¹(β_λ) − ∇ℓ²(β_λ)) = 0.\nWe prove this by showing that\nlim_{λ→+∞} H⁻¹(λ) = 0 and lim_{λ→+∞} ∇ℓ¹(β_λ) − ∇ℓ²(β_λ) = 0 (11)\nsimultaneously. We write (λ) after H⁻¹ to emphasize that H⁻¹ is a continuous function of λ. Rewrite H in formula (10) as\nH(λ, β_λ) = H_ERM + λF(β_λ),\nwhere\nF(β) = (8/|Etr|) ∑_{e∈Etr} ( E^e[x(y − 2xᵀβ)] E^e[x(y − 2xᵀβ)]ᵀ − 2E^e[xᵀβ(y − xᵀβ)] E^e[xxᵀ] ).\nWriting u_e := ( −E^e x₁² ; 1 − E^e x₁² ), we have\nlim_{β_λ→β_IRM} F(β_λ) = (8/|Etr|) ∑_{e∈Etr} u_e u_eᵀ = F(β_IRM), which exists.\nObviously, F(β_IRM) is positive definite. Therefore, we have\nlim_{λ→+∞} H(λ, β_λ)⁻¹ = lim_{λ→+∞} lim_{β_λ→β_IRM} [H_ERM + λF(β_λ)]⁻¹ = lim_{λ→+∞} [H_ERM + λF(β_IRM)]⁻¹ = 0.\nThe first equality holds because lim_{λ→+∞} F(β_λ) = F(β_IRM) exists and is nonzero, and the last equality holds because the eigenvalues of H go to +∞ as λ → +∞.\nNow consider ∇ℓ¹(β_λ) − ∇ℓ²(β_λ). According to formula (9), we have\nlim_{λ→+∞} ∇ℓ¹(β_λ) − ∇ℓ²(β_λ) = lim_{β_λ→β_IRM} ∇ℓ¹(β_λ) − ∇ℓ²(β_λ) = ∇ℓ¹(β_IRM) − ∇ℓ²(β_IRM) = ( 0 ; 2β₂(σ²₁ − σ²₂) ) |_{β₂=0} = 0.\nThis finishes the proof of formula (11) and shows that V_β → −∞ for IRM.\nA.3 FORMULA (5)\nThis section derives expression (5). Recall the training dataset S = {S¹, ..., Sᵐ} and the objective function\nL(f, S) = ℓ(f, S) + λR(f, S),\nwhere the second term on the right-hand side is the regularization; for ERM, the regularization term is zero.
With the feature extractor (β) fixed, we upweight a domain S^e. The new objective function is\nL₊(θ, S, δ) = L(θ, S) + δ · ℓ(θ, S^e).\nNotice that when upweighting a domain, we only upweight the empirical loss on that domain. Further, we denote by γ̂ and γ̂₊ the optimal solutions before and after upweighting the domain, and write θ̂₊ = (γ̂₊, β̂). It is easy to see that ‖γ̂₊ − γ̂‖ → 0 as δ → 0. Following the derivation in Koh & Liang (2017), by the first-order Taylor expansion of ∇_γ L₊(θ, S, δ) with respect to γ around γ̂,\n0 = ∇_γ [L(θ̂₊, S) + δℓ(θ̂₊, S^e)]\n= ∇_γ [L(θ̂, S) + δℓ(θ̂, S^e)] + ∇²_γ [L(θ̂, S) + δℓ(θ̂, S^e)](γ̂₊ − γ̂) + o(‖γ̂₊ − γ̂‖)\n= δ∇_γ ℓ(θ̂, S^e) + ∇²_γ [L(θ̂, S) + δℓ(θ̂, S^e)](γ̂₊ − γ̂) + o(‖γ̂₊ − γ̂‖),\nwhere the last equality uses ∇_γ L(θ̂, S) = 0 at the optimum. Assuming that ∇²_γ [L(θ̂, S) + δℓ(θ̂, S^e)] is invertible, we have\n(γ̂₊ − γ̂)/δ = −[∇²_γ (L(θ̂, S) + δℓ(θ̂, S^e))]⁻¹ ∇_γ ℓ(θ̂, S^e) + o(‖(γ̂₊ − γ̂)/δ‖),\nlim_{δ→0} (γ̂₊ − γ̂)/δ = −[∇²_γ L(θ̂, S)]⁻¹ ∇_γ ℓ(θ̂, S^e).\nNote that this derivation is not fully rigorous; please refer to Van der Vaart (2000) for more rigorous discussions of the influence function.\nThe reason that β should be fixed is as follows. First, if β can vary, then the change of θ becomes\n( H_γγ , H_γβ ; H_βγ , H_ββ )⁻¹ ( ∇_γ ℓ(θ̂, S^e) ; ∇_β ℓ(θ̂, S^e) ),\nso the computational cost is similar to calculating and inverting the whole Hessian matrix. More importantly, without fixing β, the change of γ is somewhat uninformative. Say that when upweighting S^e the use of a feature decreases. It is possible, however, that the parameter in γ corresponding to that feature increases while the corresponding part of β decreases on a larger scale. In this case, the use of the feature decreases but γ increases; without fixing β, the change of γ calculated by the influence function may provide no information about the use of a feature. Therefore, we argue that fixing β is a “double-win” choice.\nA.4 ACCURACY IS NOT ENOUGH\nIn the Introduction, we gave an example where test accuracy misleads us. In this section, we first supplement it with examples where test accuracy not only misjudges different algorithms, but also misjudges the OOD property of models learnt with different penalties within the same algorithm. After that, we show the universality of these problems and explain why test accuracy fails.\nConsider two training domains\np_e ∈ {0.0, 0.1},\nand a test domain with flip rate denoted by p_test. We implement IRM and REx with penalty λ ∈ {0, 50, 100, 500, 1000} to check the relationship between test accuracy and OOD accuracy. The training process is identical to the experiment in Section 5.1.2. As the results in Figure 5 show, when the OOD property of a model gradually improves (caused by gradually increasing λ), its relationship with test accuracy is either completely (when p_test is 0.2) or partly (when p_test is 0.3) negatively correlated. This phenomenon reveals the weakness of test accuracy. If one wants to select a λ when p_test is 0.3, then judged by test accuracy, λ = 50 may be the best choice for both IRM and REx. However, the model learnt with λ = 50 has OOD accuracy even lower than a random-guess model.\nWhether test accuracy is positively correlated, negatively correlated, or irrelevant to a model's OOD property mainly depends on the “distance” between the test domain and the “worst” domain for the model. If the test accuracy happens to be the lowest among all domains, then OOD accuracy directly equals test accuracy. In practice, however, their distance may be huge, and this is precisely the difficulty of OOD generalization. For example, we have access to images of cows in grasslands, woods and forests, but cows in the desert are rare.
At this point, the “worst” domain is certainly far from what we can access. If we expect a model to capture the real features of cows, the model should avoid any use of the background color. However, a model based on color will perform consistently well (better than any OOD model) in grasslands, woods and forests alike, since all of the available domains have generally green backgrounds. In Colored MNIST, test accuracy fails in the same way.\nSuch situations are quite common. Generally, within the domains we have, there may be features that are strongly correlated with the prediction target but vary slightly across domains. These features are spurious, given that their relationship with the target is significantly different in the other domains to which we want to generalize. However, using these features for prediction easily achieves high test accuracy. Consequently, it is extremely risky to judge models merely by test accuracy.\nA.5 CONDITIONAL MUTUAL INFORMATION\nA possible alternative to V_γ|θ is Conditional Mutual Information (CMI). For three continuous random variables X, Y, Z, the CMI is defined as\nI(X; Y | Z) = ∫∫∫ p(x, y, z) log [ p(x, y, z) / (p(x, z) p(y | z)) ] dx dy dz, (12)\nwhere p(·) is the probability density function. Consider I(e; y | Φ(x)) or I(e; y | ŷ), i.e. the mutual information between e and the true label y, given the features or the prediction ŷ of x. The insight is that, if the model is invariant across domains, then little information about e should be contained in y given Φ(x). Otherwise, if the prediction ŷ is highly correlated with e, the mutual information will be high.\nThis metric seems promising. However, the numerical estimation of CMI remains a challenge. Previous works have done a lot to address this problem, including CCMI proposed in Mukherjee et al. (2020) and CCIT proposed in Sen et al. (2017). In this part, we first calculate\nthe true I(e; y | ŷ) in a simple Colored MNIST experiment to show that, absent estimation problems, CMI could be a potential metric for judging the OOD property of a learnt model, at least in a simple, discrete task. We then run the code provided by Sen et al. (2017) (https://github.com/rajatsen91/CCIT) to show that, even in this simple task, the estimation of CMI can severely hurt its performance.\nSpecifically, the experimental setting is similar to that in Section 5.1.2, with two OOD algorithms and the number of training domains in {2, 5}. For each algorithm, we consider the penalty weight λ ∈ {0, 10, 100, 1000}, run the algorithm 50 times, and record the OOD accuracy as well as the true CMI value or CCIT value. The results are shown in Figure 6. We can see that when the true CMI can be calculated exactly, especially when the number of domains is small and the task is discrete (not continuous), CMI is highly correlated with OOD accuracy. However, in a regression task, or in a task where directly calculating the value of CMI becomes impractical, the estimation process may severely destroy the correlation and may even invert it. We therefore conclude that the estimation of CMI limits its utility, and we leave a fine-grained analysis of the relationship between CMI, estimated CMI and the OOD property to future work.\nA.6 RESULTS ON VLCS\nA.6.1 CONTINUED SCENARIO\nThis is a continuation of Section 5.2. Say that in this task Eall remains the four domains but Etr = {L, S, V} (empirically, we find this split more diverse). As before, we start with the test accuracies shown in Table 4.
In this step, the situation is the same as before, i.e. IRM should be eliminated until proper hyper-parameters are found. In Step 2, we show the comparison between V_γ|θ and Ṽ_γ|θ for the three algorithms in Figure 7. As we can see, this time the two values are similar for\nall three algorithms, including gDRO. This differs from the case where S is unseen. Here, we predict that all three algorithms should achieve high OOD accuracy. In fact, if we act as the oracle and calculate their OOD performance, we find that our judgement is close to reality: ERM, Mixup and gDRO achieve OOD accuracies from 70.55% to 72.87%, and according to the confidence intervals, their differences are not statistically significant. As for IRM, the OOD accuracy is 38.64%. One who uses ERM, Mixup or gDRO should be satisfied with this performance, since demanding more is somewhat impractical!\nA.6.2 FULL RESULTS AND COMPARISON WITH IRM PENALTY\nAs mentioned in Section 5.2, we consider ERM, gDRO, Mixup and IRM on the VLCS image dataset. We report the full results here and compare the performance of our metric V_γ|θ with the IRM penalty in\nformula (4). Throughout these experiments, Eall = {V, L, C, S}. We construct four experimental settings; in each setting, one domain is removed and the rest constitute Etr. For each domain in Etr, we split off a validation set, and the test accuracy is the average accuracy over the validation sets. The results are shown in Table 5. First, our results coincide with Gulrajani & Lopez-Paz (2020) in that ERM outperforms nearly all algorithms: the OOD accuracy of ERM is either the highest or only slightly lower than Mixup's, while ERM also has a relatively small V_γ|θ. Second, higher OOD accuracy corresponds to lower V_γ|θ. In addition, we notice that IRM has relatively low test accuracy and OOD accuracy. We attribute this to improper hyper-parameters for IRM, although we did not change the default hyper-parameters in the code of Gulrajani & Lopez-Paz (2020) (https://github.com/facebookresearch/DomainBed). In any case, this phenomenon provides a good example with which to compare our metric against the IRM penalty and to discuss their advantages and disadvantages.\nAlthough IRM can be a good OOD algorithm, using the IRM penalty as a metric to judge the OOD property of a learnt model has many weaknesses, some of them severe. First, in different tasks, the value of λ needed to obtain an OOD model may differ, as may other hyper-parameters such as the “anneal steps” in the IRM code. Without an exhaustive search for the proper hyper-parameter values, IRM easily overfits the penalty term (which is what happens on VLCS). When IRM overfits, the IRM penalty becomes quite small (higher λ often leads to a smaller penalty), but overfitting the penalty term certainly does not yield good OOD accuracy. Therefore, the balance between the loss and the penalty is important. But how does one find this balance point? This is a model selection problem, and Gulrajani & Lopez-Paz (2020) argue that an OOD algorithm without model selection is not complete. Whatever is used as the metric, it cannot be the IRM penalty, since we cannot use a quantity that is part of the training objective as the metric for selecting training hyper-parameters.\nSecond, the IRM penalty is biased across algorithms. In Table 5, the IRM penalty of IRM itself is smaller than that of most algorithms. Moreover, although the OOD accuracy of Mixup is similar to ERM's, Mixup's IRM penalty is significantly higher.
This is not strange, but it limits the usefulness of the IRM penalty. As for our metric, we have said that a small V_γ|θ is better; however, the notion of “smallness” is based on the relative values of the shuffled and standard versions of V_γ|θ. As mentioned in Section 5.2, when Eall\\Etr = {S}, the shuffled version is clearly smaller than the standard version for gDRO, whereas for ERM and Mixup these values are relatively close or indistinguishable. In this case, we know that gDRO captures fewer invariant features and is less OOD\nthan the other two algorithms. Throughout the whole process, we can avoid directly comparing V_γ|θ across different algorithms, which is quite important. In summary, the IRM penalty makes IRM a good algorithm, but using it as a general metric of OOD performance is another matter entirely." } ]
2020
OUT-OF-DISTRIBUTION GENERALIZATION ANALYSIS VIA INFLUENCE FUNCTION
SP:df0e5190360b8dd9f9ddc35a6f7c57834f483fbb
[ "In this paper, the authors propose a cross-modal (audio-video) self-supervised representation learning method with a contrastive learning framework. To overcome the high redundancy in the negative samples, they propose an active negative sampling method. They use a gradient with respect to the pseudo label to measure the uncertainty of a negative sample. They use K-means clustering to maximize the negative sample diversity when constructing a new negative set for queueing. They show their method's efficacy on the public benchmarks: Kinetics, AudioSet for retraining, and UCF-101, HMDB-51, ESC-50 for downstream tasks. " ]
Contrastive learning has been shown to produce generalizable representations of audio and visual data by maximizing the lower bound on the mutual information (MI) between different views of an instance. However, obtaining a tight lower bound requires a sample size exponential in MI and thus a large set of negative samples. We can incorporate more samples by building a large queue-based dictionary, but there are theoretical limits to performance improvements even with a large number of negative samples. We hypothesize that random negative sampling leads to a highly redundant dictionary that results in suboptimal representations for downstream tasks. In this paper, we propose an active contrastive learning approach that builds an actively sampled dictionary with diverse and informative items, which improves the quality of negative samples and the performance on tasks with high mutual information in the data, e.g., video classification. Our model achieves state-of-the-art performance on challenging audio and visual downstream benchmarks including UCF101, HMDB51 and ESC50.
[ { "affiliations": [], "name": "Shuang Ma" }, { "affiliations": [], "name": "Zhaoyang Zeng" } ]
[ { "authors": [ "Humam Alwassel", "Dhruv Mahajan", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran" ], "title": "Selfsupervised learning by cross-modal audio-video clustering", "venue": "arXiv preprint arXiv:1911.12667,", "year": 2019 }, { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Objects that sound", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj" ], "title": "Saunshi. A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 2019 }, { "authors": [ "David Arthur", "Sergei Vassilvitskii" ], "title": "K-means++: The advantages of careful seeding", "venue": "In SODA,", "year": 2007 }, { "authors": [ "Yuki M Asano", "Mandela Patrick", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Labelling unlabelled videos from scratch with multi-modal self-supervision", "venue": "arXiv preprint arXiv:2006.13662,", "year": 2020 }, { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yusuf Aytar", "Carl Vondrick", "Antonio Torralba" ], "title": "Soundnet: Learning sound representations from unlabeled video", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "Mutual information neural estimation", "venue": null, "year": 2018 }, { "authors": [ "Uta Buchler", "Biagio Brattoli", "Bjorn Ommer" ], "title": "Improving spatiotemporal self-supervision by deep reinforcement learning", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Yue Cao", "Zhenda Xie", "Bin Liu", "Yutong Lin", "Zheng Zhang", "Han Hu" ], "title": "Parametric instance classification for unsupervised visual feature learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? 
a new model and the kinetics dataset", "venue": null, "year": 2017 }, { "authors": [ "Joao Carreira", "Eric Noland", "Chloe Hillier", "Andrew Zisserman" ], "title": "A short note on the kinetics-700 human action dataset", "venue": null, "year": 1907 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Debidatta Dwibedi", "Yusuf Aytar", "Jonathan Tompson", "Pierre Sermanet", "Andrew Zisserman" ], "title": "Temporal cycle-consistency learning", "venue": null, "year": 2019 }, { "authors": [ "Ariel Ephrat", "Inbar Mosseri", "Oran Lang", "Tali Dekel", "Kevin Wilson", "Avinatan Hassidim", "William T Freeman", "Michael Rubinstein" ], "title": "Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation", "venue": "ACM Transactions on Graphics,", "year": 2018 }, { "authors": [ "Fartash Faghri", "David J Fleet", "Jamie Ryan Kiros", "Sanja Fidler" ], "title": "VSE++: Improving visualsemantic embeddings with hard negatives", "venue": "In BMVC,", "year": 2017 }, { "authors": [ "Satoru Fujishige" ], "title": "Submodular functions and optimization", "venue": null, "year": 2005 }, { "authors": [ "Chuang Gan", "Deng Huang", "Hang Zhao", "Joshua B Tenenbaum", "Antonio Torralba" ], "title": "Music gesture for visual sound separation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Ruohan Gao", "Kristen Grauman" ], "title": "2.5D visual sound", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Ruohan Gao", "Kristen Grauman" ], "title": "Co-separating sounds of visual objects", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Ruohan Gao", "Rogerio Feris", "Kristen Grauman" ], "title": "Learning to separate object sounds by watching unlabeled video", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Jort F Gemmeke", "Daniel PW Ellis", "Dylan Freedman", "Aren Jansen", "Wade Lawrence", "R Channing Moore", "Manoj Plakal", "Marvin Ritter" ], "title": "Audio set: An ontology and human-labeled dataset for audio", "venue": null, "year": 2017 }, { "authors": [ "Walter R Gilks", "Sylvia Richardson", "David Spiegelhalter" ], "title": "Markov chain Monte Carlo in practice", "venue": "Chapman and Hall/CRC,", "year": 1995 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Wangli Hao", "Zhaoxiang Zhang", "He Guan" ], "title": "Cmcgan: A uniform framework for cross-modal visual-audio mutual generation", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": null, "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross 
Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Ahmet Iscen", "Giorgos Tolias", "Yannis Avrithis", "Ondřej Chum" ], "title": "Mining on manifolds: Metric learning without labels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Longlong Jing", "Yingli Tian" ], "title": "Self-supervised spatiotemporal feature learning by video geometric transformations", "venue": "arXiv preprint arXiv:1811.11387,", "year": 2018 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estı́baliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "Hmdb: a large video database for human motion recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Alex Kulesza", "Ben Taskar" ], "title": "k-dpps: Fixed-size determinantal point processes", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Hsin-Ying Lee", "Jia-Bin Huang", "Maneesh Singh", "Ming-Hsuan Yang" ], "title": "Unsupervised representation learning by sorting sequences", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Gen Li", "Nan Duan", "Yuejian Fang", "Ming Gong", "Daxin Jiang", "Ming Zhou" ], "title": "Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Liunian Harold Li", "Mark Yatskar", "Da Yin", "Cho-Jui Hsieh", "Kai-Wei Chang" ], "title": "VisualBERT: A simple and performant baseline for vision and language", "venue": null, "year": 1908 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], 
"title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Odile Macchi" ], "title": "The coincidence approach to stochastic point processes", "venue": "Advances in Applied Probability,", "year": 1975 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Ishan Misra", "C. Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: Unsupervised learning using temporal order verification", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Pedro Morgado", "Nuno Vasconcelos", "Ishan Misra" ], "title": "Audio-visual instance discrimination with cross-modal agreement", "venue": "arXiv preprint arXiv:2004.12943,", "year": 2020 }, { "authors": [ "Hyeonseob Nam", "Bohyung Han" ], "title": "Learning multi-domain convolutional neural networks for visual tracking", "venue": null, "year": 2016 }, { "authors": [ "George L Nemhauser", "Laurence A Wolsey", "Marshall L Fisher" ], "title": "An analysis of approximations for maximizing submodular set functions—i", "venue": "Mathematical programming,", "year": 1978 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Andrew Owens", "Jiajun Wu", "Josh H McDermott", "William T Freeman", "Antonio Torralba" ], "title": "Ambient sound provides supervision for visual learning", "venue": null, "year": 2016 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron Van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jiangmiao Pang", "Kai Chen", "Jianping Shi", "Huajun Feng", "Wanli Ouyang", "Dahua Lin" ], "title": "Libra RCNN: Towards balanced learning for object detection", "venue": null, "year": 2019 }, { "authors": [ "Mandela Patrick", "Yuki M Asano", "Ruth Fong", "João F Henriques", "Geoffrey Zweig", "Andrea Vedaldi" ], "title": "Multi-modal self-supervision from generalized data transformations", "venue": "arXiv preprint arXiv:2003.04298,", "year": 2020 }, { "authors": [ "Karol J Piczak" ], "title": "Environmental sound classification with convolutional neural networks", "venue": "In International Workshop on Machine Learning for Signal Processing (MLSP),", "year": 2015 }, { "authors": [ "Karol J Piczak" ], "title": "ESC: Dataset for environmental sound classification", "venue": "In Proceedings of the 23rd ACM international conference on Multimedia,", "year": 2015 }, { "authors": [ "Hardik B Sailor", "Dharmesh M Agrawal", "Hemant A Patil" ], "title": "Unsupervised filterbank learning using convolutional restricted boltzmann machine for environmental sound classification", "venue": null, "year": 2017 }, { "authors": [ "Steffen Schneider", "Alexei Baevski", "Ronan Collobert", "Michael Auli" ], 
"title": "wav2vec: Unsupervised pre-training for speech recognition", "venue": "arXiv preprint arXiv:1904.05862,", "year": 2019 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Jasmine Hsu", "Sergey Levine" ], "title": "Time-contrastive networks: Selfsupervised learning from multi-view observation", "venue": "In CVPRW,", "year": 2017 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": null, "year": 2016 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "venue": "arXiv preprint arXiv:1212.0402,", "year": 2012 }, { "authors": [ "Weijie Su", "Xizhou Zhu", "Yue Cao", "Bin Li", "Lewei Lu", "Furu Wei", "Jifeng Dai" ], "title": "Vl-bert: Pre-training of generic visual-linguistic representations", "venue": null, "year": 1908 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Contrastive bidirectional transformer for temporal representation learning", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Hao Tan", "Mohit Bansal" ], "title": "LXMERT: Learning cross-modality encoder representations from transformers", "venue": "arXiv preprint arXiv:1908.07490,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": null, "year": 2018 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Linchao Bao", "Shengfeng He", "Yunhui Liu", "Wei Liu" ], "title": "Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Allan Jabri", "Alexei A Efros" ], "title": "Learning correspondence from the cycleconsistency of time", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Mike Wu", "Chengxu Zhuang", "Milan Mosse", "Daniel Yamins", "Noah Goodman" ], "title": "On mutual information in contrastive learning for visual representations", "venue": "arXiv preprint arXiv:2005.13149,", "year": 2020 }, { "authors": [ "Dejing Xu", "Jun Xiao", "Zhou Zhao", "Jian Shao", "Di Xie", "Yueting Zhuang" ], "title": "Self-supervised spatiotemporal learning via video clip order prediction", "venue": null, "year": 2019 }, { "authors": [ "Karren Yang", "Bryan Russell", "Justin Salamon" ], "title": "Telling left from right: Learning spatial correspondence of sight and sound", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { 
"authors": [ "Hang Zhao", "Chuang Gan", "Andrew Rouditchenko", "Carl Vondrick", "Josh McDermott", "Antonio Torralba" ], "title": "The sound of pixels", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Wei-Chiu Ma", "Antonio Torralba" ], "title": "The sound of motions", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Yipin Zhou", "Zhaowen Wang", "Chen Fang", "Trung Bui", "Tamara L Berg" ], "title": "Visual to sound: Generating natural sound for videos in the wild", "venue": null, "year": 2018 }, { "authors": [ "B work" ], "title": "ADDITIONAL EXPERIMENTS Effect of mutual information. We investigate the impact of the amount of MI on contrastive learning using the Spatial-MultiOmniglot dataset (Ozair et al., 2019). It contains paired images (x, y) of Omniglot characters (Lake et al., 2015) with each image arranged in an m × n grid (each grid", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Contrastive learning of audio and visual representations has delivered impressive results on various downstream scenarios (Oord et al., 2018; Hénaff et al., 2019; Schneider et al., 2019; Chen et al., 2020). This self-supervised training process can be understood as building a dynamic dictionary per mini-batch, where “keys” are typically randomly sampled from the data. The encoders are trained to perform dictionary look-up: an encoded “query” should be similar to the value of its matching key and dissimilar to others. This training objective maximizes a lower bound of mutual information (MI) between representations and the data (Hjelm et al., 2018; Arora et al., 2019). However, such lower bounds are tight only for sample sizes exponential in the MI (McAllester & Stratos, 2020), suggesting the importance of building a large and consistent dictionary across mini-batches.\nRecently, He et al. (2020) designed Momentum Contrast (MoCo) that builds a queue-based dictionary with momentum updates. It achieves a large and consistent dictionary by decoupling the dictionary size from the GPU/TPU memory capacity. However, Arora et al. (2019) showed that simply increasing the dictionary size beyond a threshold does not improve (and sometimes can even harm) the performance on downstream tasks. Furthermore, we find that MoCo can suffer when there is high redundancy in the data, because only relevant – and thus limited – parts of the dictionary are updated in each iteration, ultimately leading to a dictionary of redundant items (we show this empirically in Fig. 3). We argue that random negative sampling is much responsible for this: a randomly constructed dictionary will contain more “biased keys” (similar keys that belong to the same class) and “ineffective keys” (keys that can be easily discriminated by the current model) than a carefully constructed one. Furthermore, this issue can get aggravated when the dictionary size is large.\nIn this paper, we focus on learning audio-visual representations of video data by leveraging the natural correspondence between the two modalities, which serves as a useful self-supervisory signal (Owens & Efros, 2018; Owens et al., 2016; Alwassel et al., 2019). Our starting point is contrastive learning (Gutmann & Hyvärinen, 2010; Oord et al., 2018) with momentum updates (He et al., 2020).\n∗Equal contribution 1Code is available at: https://github.com/yunyikristy/CM-ACC\nHowever, as we discussed above, there are both practical challenges and theoretical limits to the dictionary size. This issue is common to all natural data but is especially severe in video; successive frames contain highly redundant information, and from the information-theoretic perspective, audiovisual channels of video data contain higher MI than images because the higher dimensionality – i.e., temporal and multimodal – reduces the uncertainty between successive video clips. Therefore, a dictionary of randomly sampled video clips would contain highly redundant information, causing the contrastive learning to be ineffective. Therefore, we propose an actively sampled dictionary to sample informative and diverse set of negative instances. Our approach is inspired by active learning (Settles, 2009) that aims to identify and label only the maximally informative samples, so that one can train a high-performing classifier with minimal labeling effort. 
We adapt this idea to construct a non-redundant dictionary with informative negative samples.\nOur approach, Cross-Modal Active Contrastive Coding (CM-ACC), learns discriminative audiovisual representations and achieves substantially better results on video data with a high amount of redundancy (and thus high MI). We show that our actively sampled dictionary contains negative samples from a wider variety of semantic categories than a randomly sampled dictionary. As a result, our approach can benefit from large dictionaries even when randomly sampled dictionaries of the same size start to have a deleterious effect on model performance. When pretrained on AudioSet (Gemmeke et al., 2017), our approach achieves new state-of-the-art classification performance on UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and ESC50 (Piczak, 2015b)." }, { "heading": "2 BACKGROUND", "text": "Contrastive learning optimizes an objective that encourages similar samples to have similar representations than with dissimilar ones (called negative samples) (Oord et al., 2018):\nmin θf ,θh ExvpX\n[ −log ( ef(x;θf ) ᵀh(x+;θh)\nef(x;θf ) ᵀh(x+;θh) + ef(x;θf ) ᵀh(x−;θh)\n)] (1)\nThe samples x+ and x− are drawn from the same distribution as x ∈ X , and are assumed to be similar and dissimilar to x, respectively. The objective encourages f(·) and h(·) to learn representations of x such that (x, x+) have a higher similarity than all the other pairs of (x, x−).\nWe can interpret it as a dynamic dictionary look-up process: Given a “query” x, it finds the correct “key” x+ among the other irrelevant keys x− in a dictionary. Denoting the query by q = f(x), the correct key by k+ = h(x+), and the dictionary of K negative samples by {ki = h(xi)}, i ∈ [1,K], we can express equation 1 in a softmax form, minθq,θk ExvpX [ −log e q·k+/τ∑K i=0 e q·ki/τ ] , where θq and θk are parameters of the query and key encoders, respectively, and τ is a temperature term that controls the shape of the probability distribution computed by the softmax function.\nMomentum Contrast (MoCo) decouples the dictionary size from the mini-batch size by implementing a queue-based dictionary, i.e., current mini-batch samples are enqueued while the oldest are dequeued (He et al., 2020). It then applies momentum updates to parameters of a key encoder θk with respect to parameters of a query encoder, θk ← mθk + (1 −m)θq , where m ∈ [0, 1) is a momentum coefficient. Only the parameters θq are updated by back-propagation, while the parameters θk are defined as a moving average of θq with exponential smoothing. These two modifications allow MoCo to build a large and slowly-changing (and thus consistent) dictionary.\nTheoretical Limitations of Contrastive Learning. Recent work provides theoretical analysis of the shortcomings of contrastive learning. McAllester & Stratos (2020) show that lower bounds to the MI are only tight for sample size exponential in the MI, suggesting that a large amount of data are required to achieve a tighter lower bound on MI. He et al. (2020) empirically showed that increasing negative samples has shown to improve the learned presentations. However, Arora et al. (2019) showed that such a phenomenon does not always hold: Excessive negative samples can sometimes hurt performance. Also, when the number of negative samples is large, the chance of sampling redundant instances increases, limiting the effectiveness of contrastive learning. 
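For concreteness, the dictionary look-up objective and the momentum update reviewed above can be written as a minimal PyTorch-style sketch. This is not the released code of any of the cited papers; the L2 normalization of queries and keys is a common MoCo convention that we assume here, and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, queue, tau=0.07):
    """Dictionary look-up of Section 2: q (B, D) queries, k_pos (B, D)
    matching keys, queue (K, D) negative keys (assumed L2-normalized).
    Returns the softmax cross-entropy with the positive key at index 0."""
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)   # (B, 1) positive logits
    l_neg = q @ queue.t()                          # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(key_encoder, query_encoder, m=0.999):
    """theta_k <- m * theta_k + (1 - m) * theta_q; only theta_q is
    updated by backpropagation."""
    for p_k, p_q in zip(key_encoder.parameters(), query_encoder.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```

Enqueuing the current mini-batch keys and dequeuing the oldest ones keeps the queue fixed at size K across iterations.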
One of our main contributions is to address this issue with active sampling of negative instances, which reduces redundancy and improves diversity, leading to improved performance on various downstream tasks." }, { "heading": "3 APPROACH", "text": "" }, { "heading": "3.1 CROSS-MODAL CONTRASTIVE REPRESENTATION LEARNING", "text": "Our learning objective encourages the representations of audio and visual clips to be similar if they come from the same temporal block of a video. LetA = {a0, · · · , aN−1} and V = {v0, · · · , vN−1} be collections of audio and visual clips, where each pair (ai, vi) is from the same block of a video. We define query encoders fa and fv and key encoders ha and hv for audio and visual clips, respectively, with learnable parameters {θaq , θvq} for the query encoders and {θak , θvk} for the key encoders. These encoders compute representations of audio and visual clips as queries and keys,\nqv = fv(v query), kv = hv(v key), qa = fa(a query), ka = ha(a key) (2)\nWe train our encoders to perform cross-modal dictionary look-up, e.g., given a query video clip vquery, we find the corresponding audio clip akey from a dictionary Da. Adapting MoCo (He et al., 2020) to our cross-modal setup, we implement a queue-based dictionaryDa that stores keys of audio clips {kai }Ki=1, where K is the dictionary size. We compute the contrastive loss and backpropagate the gradients only to the visual query encoder fv and update the parameters θvq . For the audio encoder ha, we apply the momentum update (He et al., 2020),\nθak ← mθak + (1−m)θaq (3) The parameter θaq is not updated in this contrastive coding step; we update it during the audio-tovisual step (similar as above with the opposite modalities). Here we explain the visual-to-audio step only; we perform bi-directional contrastive coding and train the whole model end-to-end." }, { "heading": "3.2 ACTIVE SAMPLING OF NEGATIVE INSTANCES: UNCERTAINTY AND DIVERSITY", "text": "The quality of negative samples is crucial in contrastive learning. Existing work typically adopts random negative sampling. However, we want a diverse set of negative samples so that comparisons between positive and negative pairs are the most informative they can be. Motivated by active learning (Settles, 2009), we propose a gradient-based active sampling approach to improve the quality of negative samples. In active learning, the learner chooses samples that seem maximally informative and queries an oracle for labels to obtain an optimal solution with a minimal labeling budget. Adapting this to our setting, we can empower the learner to choose the maximally informative negative samples to construct a dictionary; the main question is how to measure the informativeness of samples without labels.\nOne way to measure informativeness is through the lens of uncertainty: If a model is highly uncertain about its prediction of a sample, we can ensure the maximum update to the model by including the sample in a mini-batch (conversely, if the uncertainly is low for all samples in a mini-batch, the model update will be small). Ash et al. (2020) showed that gradients of a loss function with respect to the model’s most confident predictions can approximate the uncertainty of samples, demonstrating its effectiveness in active learning. They provide a theoretical justification by showing that gradient\nnorms of the last layer of a neural network with respect to pseudo-labels provides a lower bound on gradient norms induced by any other labels. 
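A minimal autograd-based sketch of this uncertainty proxy follows, assuming the last layer is linear; function and variable names are ours, not from the cited work.

```python
import torch
import torch.nn.functional as F

def pseudo_label_grad_norm(last_layer, feature):
    """Uncertainty proxy in the spirit of Ash et al. (2020): the norm of
    the cross-entropy gradient w.r.t. the last (linear) layer, evaluated
    at the model's own most confident prediction (the pseudo-label)."""
    logits = last_layer(feature.unsqueeze(0))   # (1, C)
    pseudo = logits.argmax(dim=1)               # most confident class
    loss = F.cross_entropy(logits, pseudo)
    grads = torch.autograd.grad(loss, last_layer.parameters())
    return torch.sqrt(sum(g.pow(2).sum() for g in grads))
```

Candidates with large gradient norms are the ones whose inclusion would change the model the most.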
In this work, we use gradients of the last layer to measure the uncertainty and encourage our model to include samples that have the highest gradient magnitudes to constitute a dictionary.\nWhile the uncertainty of each individual samples is important, the diversity of samples is also a critical measure of informativeness. Intuitively, it is possible that a model is highly uncertain about samples from particular semantic categories, but constructing a mini-batch of samples from just those categories can severely bias gradients and ultimately lead to a bad local minima. There are several principled approaches to ensure diversity, e.g., submodular optimization (Fujishige, 2005) and Determinantal Point Processes (DPP) (Macchi, 1975; Kulesza & Taskar, 2011). Unfortunately, those methods are typically inefficient because of the combinatorial search space (Nemhauser et al., 1978; Gilks et al., 1995). In this work, instead of using the expensive solutions, we opt to the fast solution of Ash et al. (2020) and use the initialization scheme of the k-MEANS++ seeding algorithm (Arthur & Vassilvitskii, 2007) to sample a diverse set of negative samples." }, { "heading": "3.3 CROSS-MODAL ACTIVE CONTRASTIVE CODING", "text": "Algorithm 1 describes our proposed cross-modal active contrastive coding (we provide a simplified version here; we include a more detailed version and another version without active sampling in Appendix). At a high-level, we initialize the dictionaries Dv and Da with K randomly drawn samples from V and A, respectively (lines 3-4). For each epoch, we construct “negative candidate pools” Uv and Ua with N random samples from V and A, respectively (lines 6-7). For each iteration within an epoch, we actively select the most informative negative samples Sv and Sa from the pools Uv and Ua, respectively, and enqueue them into the dictionaries Dv and Da, respectively (lines 9-21). We then perform cross-modal contrastive coding, update the parameters of query encoders θvq and θ a q via backpropagation, and apply momentum updates to the parameters of key encoders θvk and θ a k (lines 22-27).\nAlgorithm 1 Cross-Modal Active Contrastive Coding 1: Require: Audio-visual clips A, V ; encoders fv , fa, hv , ha; dictionary size K; pool size N ; batch size M 2: Initialize parameters, θvq , θvk, θ a q , θ a k v Uniform(0, 1)\n3: Draw random dictionary, Dv ← {v1, · · · , vK} v V , Da ← {a1, · · · , aK} v A 4: Encode dictionary samples, kvi ← hv(vi), ∀vi ∈ Dv , kai ← ha(ai), ∀ai ∈ Da 5: for epoch = 1 to #epochs: do 6: Draw random pool, Uv ← {v1, · · · , vN} v V , Ua ← {a1, · · · , aN} v A 7: Encode pool samples, kvn ← hv(vn), ∀vn ∈ Uv , kan ← ha(an), ∀an ∈ Ua 8: for t = 1 to #mini-batches: do 9: Draw mini-batch, Bv ← {v1, · · · , vM} v V , Ba ← {a1, · · · , aM} v A\n10: . Active sampling of negative video keys for Dv 11: Encode mini-batch samples, qai ← fa(ai), ∀ai ∈ Ba 12: Compute pseudo-labels, ỹvn ← arg max p(ŷvn|vn, Ba), ∀vn ∈ Uv\\Dv 13: Compute gradients gvn using the pseudo-labels ỹ v n, ∀n ∈ [1, N ] 14: Obtain Sv ← k-MEANS++INIT ({gvn : vn ∈ Uv\\Dv},#seeds = M) 15: Update Dv ← ENQUEUE(DEQUEUE(Dv), Sv) 16: . Active sampling of negative audio keys for Da 17: Encode mini-batch samples, qvi ← fv(vi), ∀vi ∈ Bv 18: Compute pseudo-label, ỹan ← arg max p(ŷan|an, Bv), ∀an ∈ Ua\\Da 19: Compute gradients gan using the pseudo-labels ỹ a n, ∀n ∈ [1, N ] 20: Obtain Sa ← k-MEANS++INIT ({gan : an ∈ Ua\\Da},#seeds = M) 21: Update Da ← ENQUEUE(DEQUEUE(Da), Sa) 22: . 
Cross-modal contrastive predictive coding 23: Encode mini-batch samples, kvi ← hv(vi), ∀vi ∈ Bv , kai ← ha(ai), ∀ai ∈ Ba 24: Compute p(yvi |vi, ai, Da) and p(yai |ai, vi, Dv), ∀i ∈ [1,M ] 25: . Update model parameters 26: Update parameters of query encoders θvq and θaq with backpropagation 27: Momentum update parameters of key encoders θvk and θ a k 28: end for 29: end for 30: return Optimal solution θvq , θvk, θaq , θak\nActive sampling. To measure uncertainty, we define a pseudo-label space induced by the queries from the other modality, and take the gradient of the last layer of a query encoder with respect to the most confident prediction, which we call the pseudo-label ỹ. For instance, in the case of sampling negative video keys from the pool Uv (lines 10-15), we compute the pseudo-posterior of a video key vn ∈ Uv\\Da,\np(ŷvn|vn, Ba) = exp(kvn · qaj )∑M i=1 exp(k v n · qai ) ,∀j ∈ [1,M ] (4)\nwhereBa is the current mini-batch of audio queries and defines the pseudo-label space. Note that we consider only the samples in Uv\\Dv to rule out samples already inDv . Intuitively, this computes the posterior by the dot-product similarity between vn and all qai ∈ Ba, producing an M -dimensional probability distribution. We then take the most confident class category as the pseudo-label ỹvn (line 12) and compute the gradient according to the cross-entropy loss\ngvn = ∂\n∂θlast LCE (p(ŷvn|vn, Ba), ỹvn) |θ=θaq (5)\nwhere θlast is the parameters of the last layer of θ (in this case, θaq of the audio query encoder ha). Intuitively, the gradient gvn measures the amount of change – and thus, the uncertainty – vn will bring to the audio query encoder ha.\nOne can interpret this as a form of online hard negative mining: The gradient is measured with respect to the most probable pseudo-label ỹvn induced by the corresponding audio query q a j . When we compute the contrastive loss, the same audio query will be maximally confused by vn with its positive key v+ per dot-product similarity, and vn in this case can serve as a hard negative sample.\nNext, we obtain the most diverse and highly uncertain subset Sv ⊆ Uv\\Dv using the initialization scheme of k-MEANS++ (Arthur & Vassilvitskii, 2007) over the gradient embeddings gv (line 14). The k-MEANS++ initialization scheme finds the seed cluster centroids by iteratively sampling points with a probability in proportion to their squared distances from the nearest centroid that has already been chosen (we provide the exact algorithm in the Appendix). Intuitively, this returns a diverse set of instances sampled in a greedy manner, each of which has a high degree of uncertainty measured as its squared distances from other instances that have already been chosen. Finally, we enqueue Sv into Dv and dequeue the oldest batch from Dv (line 15). We repeat this process to sample negative audio keys (lines 16-21); this concludes the active sampling process for Dv and Da.\nCross-modal contrastive coding. Given the updated Dv and Da, we perform cross-modal contrastive coding. For visual-to-audio coding, we compute the posteriors of all video samples vi ∈ Bv with respect to the negative samples in the audio dictionary Da,\np(yvi |vi, ai, Da) = exp(qvi · kai /τ)∑K j=0 exp(q v i · kaj /τ) ,∀i ∈ [1,M ] (6)\nwhere the posterior is defined over a cross-modal space with one positive and K negative pairs (line 24). 
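Before turning to the parameter updates described next, the active-sampling subroutine (lines 10-15 of Algorithm 1) can be sketched in code. We use the closed-form last-layer gradient of the softmax cross-entropy (the residual p - onehot(ỹ), as in Ash et al. (2020)) and treat the key embedding as the penultimate feature; both simplifications are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def kmeans_pp_init(G, num_seeds):
    """k-MEANS++ seeding: iteratively pick rows of G (N, D) with probability
    proportional to squared distance from the nearest already-chosen seed."""
    chosen = [torch.randint(G.size(0), (1,)).item()]
    d2 = ((G - G[chosen[0]]) ** 2).sum(dim=1)
    for _ in range(num_seeds - 1):
        nxt = torch.multinomial(d2.clamp_min(1e-12), 1).item()
        chosen.append(nxt)
        d2 = torch.minimum(d2, ((G - G[nxt]) ** 2).sum(dim=1))
    return chosen

def active_sample(pool_keys, batch_queries, num_select):
    """Eqs. (4)-(5): pseudo-posterior over the mini-batch queries, pseudo-
    labels, gradient embeddings, then diverse selection via k-MEANS++.
    pool_keys: (N, D) keys in U \\ D; batch_queries: (M, D)."""
    p = (pool_keys @ batch_queries.t()).softmax(dim=1)    # (N, M), Eq. (4)
    y_tilde = p.argmax(dim=1)                             # pseudo-labels
    residual = p - F.one_hot(y_tilde, p.size(1)).float()  # dL/dlogits
    G = (residual.unsqueeze(2) * pool_keys.unsqueeze(1)).flatten(1)  # (N, M*D)
    return kmeans_pp_init(G, num_select)                  # indices into pool
```

The M selected keys are then enqueued into the dictionary and the oldest M entries are dequeued (line 15).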
Next, we backpropagate gradients only to the query encoders fv and fa (line 26),\nθvq ← θvq − γ∇θLCE(p(yv| · ), yvgt)|θ=θvq , θ a q ← θaq − γ∇θLCE(p(ya| · ), yagt)|θ=θaq (7)\nwhile applying momentum update to the parameters of the key encoders hv and ha (line 27),\nθvk ← mθvk + (1−m)θvq , θak ← mθak + (1−m)θaq (8)\nThe momentum update allows the dictionaries to change their states slowly, thus making them consistent across iterations. However, our cross-modal formulation can cause inconsistency in dictionary states because the gradient used to update query encoders are not directly used to update the corresponding key encoders. To improve stability, we let the gradients flow in a cross-modal fashion, updating part of fv and ha using the same gradient signal from the contrastive loss. We do this by adding one FC layer on top of all encoders and applying momentum update to their parameters. For example, we apply momentum update to the parameters of the FC layer on top of ha using the parameters of the FC layer from fv . We omit this in Alg. 1 for clarity but show its importance in our ablation experiments (XMoCo (w/o fcl) in Table 1)." }, { "heading": "4 RELATED WORK", "text": "Self-supervised learning has been studied in vision, language, and audio domains. In the image domain, one popular idea is learning representations by maximizing the MI between different views of the same image (Belghazi et al., 2018; Hjelm et al., 2018; Tian et al., 2019; He et al., 2020). In the video domain, several approaches have exploited the spatio-temporal structure of video data to design efficient pretext tasks, e.g. by adopting ordering (Sermanet et al., 2017; Wang et al., 2019b), temporal consistency (Dwibedi et al., 2019), and spatio-temporal statistics (Xu et al., 2019; Wang et al., 2019a; Han et al., 2019). In the language domain, the transformer-based approaches trained with the masked language model (MLM) objective has been the most successful (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019). Riding on the success of BERT (Devlin et al., 2019), several concurrent approaches generalize it to learn visual-linguistic representations (Lu et al., 2019; Li et al., 2020; Su et al., 2019; Tan & Bansal, 2019; Li et al., 2019). CBT (Sun et al., 2019a) and VideoBERT (Sun et al., 2019b) made efforts on adapting BERT-style pretraining for video.\nBesides vision and language signals, several approaches learn audio-visual representations in a selfsupervised manner (Owens et al., 2016; Arandjelovic & Zisserman, 2017; Owens & Efros, 2018; Owens et al., 2016). Recently, audio-visual learning has been applied to enable interesting applications beyond recognition tasks, such as sound source localization/separation (Zhao et al., 2018; Arandjelovic & Zisserman, 2018; Gao et al., 2018; Gao & Grauman, 2019a;b; Ephrat et al., 2018; Gan et al., 2020; Zhao et al., 2019; Yang et al., 2020) and visual-to-sound generation (Hao et al., 2018; Zhou et al., 2018). The work of Owens & Efros (2018), Korbar et al. (2018), and Alwassel et al. (2019) are similar in spirit to our own, but our technical approach differs substantially in the use of active sampling and contrastive learning.\nHard negative mining is used in a variety of tasks, such as detection (Li et al., 2020), tracking (Nam & Han, 2016), and retrieval (Faghri et al., 2017; Pang et al., 2019), to improve the quality of prediction models by incorporating negative examples that are more difficult than randomly chosen ones. 
Several recent work have focused on finding informative negative samples for contrastive learning. Wu et al. (2020) show that the choice of negative samples is critical in contrastive learning and propose variational extension to InfoNCE with modified strategies for negative sampling. Iscen et al. (2018) propose hard examples mining for effective finetuning of pretrained networks. Cao et al. (2020) utilize negative sampling to reduce the computational cost. In the context of audio-visual selfsupervised learning, Korbar et al. (2018) sample negatives under the assumption that the smaller the time gap is between audio and visual clips of the same video, the harder it is to differentiate them (and thus they are considered hard negatives). Our proposed approach does not make such an assumption and estimates the hardness of negatives by directly analyzing the magnitude of the gradients with respect to the contrastive learning objective." }, { "heading": "5 EXPERIMENTS", "text": "Experimental Setting. We use 3D-ResNet18 (Hara et al., 2018) as our visual encoders (fv and hv) in most of the experiments. We also use R(2+1)D-18 (Tran et al., 2018) to enable a fair comparison with previous work (see Table 4). For audio encoders (fa and ha), we adapt ResNet-18 (He et al., 2016) to audio signals by replacing 2D convolution kernels with 1D kernels. We employ Batch Normalization (BN) (Ioffe & Szegedy, 2015) with the shuffling BN (He et al., 2020) in all our encoders. All models are trained end-to-end with the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate γ = 10−3 after a warm-up period of 500 iterations. We use the mini-batch\nsize M = 128, dictionary size K = 30 × 128, pool size N = 300 × 128, momentum m = 0.999, and temperature τ = 0.7. We used 40 NVIDIA Tesla P100 GPUs for our experiments.\nWe pretrain our model on Kinetics-700 (Carreira et al., 2019) and AudioSet (Gemmeke et al., 2017) when comparing with state-of-the-art approaches. For Kinetics-700, we use 240K randomly selected videos that contain the audio channel. On AudioSet, we use both a subset of 240K randomly selected videos and the 1.8M full set. For our ablation study, we use Kinetics-Sound (Arandjelovic & Zisserman, 2017) that contains 22K videos from 34 classes that are potentially manifested both visually and audibly, and thus provides a relatively clean testbed for ablation purposes. As for downstream tasks, we evaluate our models on action recognition using UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011), and on sound classification using ESC50 (Piczak, 2015b).\nUnimodal vs. cross-modal pretraining. To validate the benefits of cross-modal pretraining, we compare it to its unimodal counterparts. We pretrain our model on Kinetics-Sound with a randomly sampled dictionary (similar to MoCo (He et al., 2020)); we call this XMoCo. For the unimodal case, we pretrain two models on visual clips and audio clips, respectively; we call these SMoCo. We also compare ours with a model trained from scratch (Scratch), and a model pretrained on KineticsSound in a fully-supervised manner (Supervised). Lastly, we include XMoCo (w/o fcl) that is identical to XMoCo except that we do not include the additional FC layers on top of the encoders. All these models are finetuned end-to-end on each downstream task using the same protocol.\nTable 1 shows the top-1 accuracy of each downstream task. 
We observe that all the self-supervised models outperform Scratch on all downstream tasks, suggesting the effectiveness of pretraining with contrastive learning. We also see that our cross-modal objective outperforms the unimodal objective (∆( 4©- 3©)). The comparisons between XMoCo vs. XMoCo (w/o fcl) and CM-ACC vs. CM-ACC (w/o fcl) show the effectiveness of the additional FC layer on top of the encoders (∆( 5©- 4©), ∆( 6©- 7©)). When adding the FC layer, the performance further improves on all three benchmarks. This shows the importance of letting the gradients flow in a cross-modal fashion. Finally, the performance gap with the full-supervised case shows there is still room for improvement in the self-supervised approaches.\nNext, we compare the number of unique categories the sampled instances originally belong to, using the ground-truth labels provided in the dataset. Our logic is that the more categories the samples come from, the more diverse and less redundant the samples are. We train these on UCF-101 over 300 iterations with different mini-batch sizes, M ∈ {32, 64, 128}. Fig. 2 shows that active sampling selects more categories than random sampling across all three mini-batch sizes. AtM = 128, active sampling (with gradient embedding) covers 60-70% of categories on UCF101, which is substantially more diverse than random sampling (30-40%). (A plot showing the probability of sampling unique negatives (instances from different categories) is shown in Appendix Figure 4.) While both sampling schemes perform similarly in early iterations, active sampling starts choosing more diverse instances as the training progresses; this is because the gradient embedding becomes more discriminative with respect to the uncertainty.\nRandom vs. active sampling. To validate the benefit of active sampling over random sampling, we compare models pretrained with different sampling approaches on downstream tasks. As shown in Table 1, our CM-ACC outperforms the XMoCo, which uses random sampling, by large margins, i.e. 3.1%, 1.9%, and 6.2% on UCF101, HMDB51, and ESC50, respectively (∆( 7©- 5©)).\nFeature vs. gradient embedding. We compare two ways to do active sampling: using gradient embeddings (Eqn. 5) and feature embeddings (the outputs from ha and hv) when selecting the seed centroids with k-MEANS++. Fig. 2 shows that gradient embeddings produce a more diverse set of negative samples than feature embeddings; this is consistent across all three batch sizes. Table 2 shows that this diversity helps achieve better downstream performances across all three benchmarks. Fig. 5 (in Appendix) provides further insights, showing that the samples with high gradient magnitudes tend to be more informative negative samples.\nFrom a theoretical aspect, the gradient norm induced by each candidate with computed pseudo labels estimates the candidates influence on the current model. The gradient embeddings convey information both about the model’s uncertainty and potential update direction upon receiving a candidate. However, such messages are missing from the feature embeddings. This shows the importance of considering both uncertainty and diversity when selecting random samples: the k-MEANS++ ensures the diversity in the sample set, but without the uncertainty measure we lose important discriminative information from the candidates.\nOnline hard example mining vs. active sampling. 
We compare our approach to online hard example mining (OHEM) (Shrivastava et al., 2016), which constructs negative samples by explicitly choosing the ones that incur high loss values. Specifically, we compute the pseudo-labels for all keys (negative sample candidates) with a given mini-batch of queries. We then compute the classification loss based on these pseudo labels and select the top M keys with the highest loss values. We pretrain the models on Kinetics-700 (Kay et al., 2017) and report the top-1 accuracy on UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011). We use the same architecture and hyperparameters; the only difference is the sampling approach.\nTable 3 shows OHEM is generally less effective than both random sampling and our active sampling. Intuitively, OHEM promotes the dictionary to contain the most challenging keys for a given minibatch of queries. Unfortunately, this causes OHEM to produce a redundant and biased dictionary, e.g., negative samples coming from a particular semantic category. Our results show that, when M (mini-batch size) is small, the performance of OHEM is even worse than random sampling, although the gap between OHEM and random sampling decreases as M increases. We believe this is because OHEM has a higher chance of selecting similar negative instances. When M is large, this issue can be mitigated to some extent, but the performance still falls behind ours by a large margin. This suggests the importance of having a diverse set of negative samples, which is unique in our approach.\nComparisons with SOTA. Table 4 shows our approach outperforms various self-supervised approaches on action recognition. For fair comparisons, we group the SOTA approaches by different pretraining dataset sizes, i.e. small-scale (UCF/HMDB), medium-scale (Kinetics), and large-scale (AudioSet). Our gains are calculated according to this grouping. As we can see, our approach outperforms SOTA approaches across all groups. Compared with GDT (Patrick et al., 2020), the current top performing model on cross-modal self-supervised learning, our model outperforms it by 1.6 % on UCF101 and 1.1 % on HMDB51. Table 5 shows audio classification transfer results. For Kinetics and AudioSet (240K), our model outperforms the current state-of-the-art, AVID (79.1%) by 0.1% and 1.8% on Kinetics and AudioSet 240K, respectively. Our approach also outperforms AVID (89.2%) pretrained on AudioSet (1.8M) by 1.6%." }, { "heading": "6 CONCLUSION", "text": "We have shown that random sampling could be detrimental to contrastive learning due to the redundancy in negative samples, especially when the sample size is large, and have proposed an active sampling approach that yields diverse and informative negative samples. We demonstrated this on learning audio-visual representations from unlabeled videos. When pretrained on AudioSet, our approach outperforms previous state-of-the-art self-supervised approaches on various audio and visual downstream benchmarks. We also show that our active sampling approach significantly improves the performance of contrastive learning over random and online hard negative sampling approaches." }, { "heading": "A DETAILS ON DATA PROCESSING", "text": "We preprocess video frames by sampling at 10 FPS and applying random cropping, horizontal flipping, gray-scaling, and temporal jittering. We resize video frames to 3-channel images of 224×224; we set the clip length to 16 frames during pretraining, and 32 frames during finetuning on downstream tasks. 
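The audio counterpart of this preprocessing, described next, can be sketched with LibROSA; only the 80 mel bands are reported in the paper, so the sampling rate and remaining parameters here are assumptions.

```python
import librosa

def audio_to_mel(wav_path, sr=22050, n_mels=80):
    """Extract an (80 x T) mel-spectrogram, with T proportional to the clip
    length; downstream it is treated as an 80-channel 1D signal."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)
```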
For the audio channel, we extract mel-spectrograms from the raw waveform using the LibROSA library and get an 80 × T matrix with 80 frequency bands; T is proportional to the length of an audio clip. We then segment the mel-spectrogram according to the corresponding video clips to ensure temporal synchrony. We treat the mel-spectrograms as an 80-channel 1D signal.\n\nAs for downstream tasks, we evaluate our models on action recognition using UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011), and on sound classification using ESC50 (Piczak, 2015b). UCF101 contains 13K video clips from 101 action categories, HMDB51 contains 7K video clips from 51 categories, and ESC50 has 2K audio clips from 50 categories. UCF101 and HMDB51 have 3 official train/test splits, while ESC50 has 5 splits. We conduct our ablation study using split-1 of each dataset. We report our average performance over all splits when we compare with prior work." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "Effect of mutual information. We investigate the impact of the amount of MI on contrastive learning using the Spatial-MultiOmniglot dataset (Ozair et al., 2019). It contains paired images (x, y) of Omniglot characters (Lake et al., 2015) with each image arranged in an m × n grid (each grid cell is 32 × 32 pixels). Let l_i be the alphabet size of the i-th character in each image; then the MI is I(x, y) = ∑_{i=1}^{mn} log l_i. This way, we can easily control the MI by adding or removing characters. We follow the experimental protocol of Ozair et al. (2019), keeping the training dataset size fixed at 50K and using the same alphabet sets: Tifinagh (55 characters), Hiragana (52), Gujarati (48), Katakana (47), Bengali (46), Grantha (43), Sanskrit (42), Armenian (41), and Mkhedruli (41).\n\nFig. 3(a) shows the results as the number of characters (and thus the MI) increases. We see that all approaches achieve nearly 99% accuracy with fewer than 3 characters; this is the case when the exponent of the MI is smaller than the dataset size (50K), i.e., e^{I(x,y)} = 55 with one character and e^{I(x,y)} = 2,860 with 2 characters. However, starting from 3 characters, the performance of the regular MoCo (SMoCo) drops significantly; this is because the exponent of the MI (137,280 = 55 × 52 × 48) is much larger than the dataset size. Although our model also loses performance when the MI is increased, it outperforms the other approaches by a large margin. We also observe that XMoCo outperforms SMoCo in mild conditions (1-5 characters) but performs nearly the same as SMoCo in severe conditions (6-9 characters). This suggests that, while cross-modal prediction helps to learn good representations, it suffers from the same issue when the MI is large; adopting active sampling is thus beneficial.\n\nEffect of dictionary size. Fig. 3(b) shows how the dictionary size affects downstream task performance. Here we pretrain our model on Kinetics-700 and finetune it on UCF-101. Overall, all three approaches benefit from large dictionaries up to a threshold (at about 10^3), which is consistent with previous empirical findings (He et al., 2020). However, both XMoCo and SMoCo start deteriorating after about 10^4 (which is consistent with the theoretical claims of Arora et al. (2019)), whereas ours does not suffer even after 10^4. 
This suggests that there are performance limits by simply increasing the size of a randomly-sampled dictionary, and also shows the benefit of our active sampling approach.\nEffect of pretraining dataset sizes. We investigate the effects of the size of pretraining datasets, using Kinetics-Sound (22k), Kinetics (240K), and AudioSet (1.8M). We vary pretraining conditions while using the same protocol to finetune the models end-to-end on downstream tasks.\nTable 6 shows that our model benefits from pretraining on video data, and that the performance improves as we use a large pretraining video dataset (Kinetics and AudioSet) than the relatively smaller dataset (Kinetics-Sound). Notably, our approach even outperforms the fully-supervised pretraining approaches by pretraining on a larger video dataset (1.0%, 3.6%, and 8.5% improvement on UCF101, HMDB51, and ESC50, respectively.)\nDiversity of random vs. active sampling. To compare the diversity of the chosen negatives by random vs. active sampling, we plot the probability of them on sampling of unique negatives (instances from different categories). The more categories the samples come from, we get more diverse and less redundant samples. We train these on UCF-101 over 300 iterations with different mini-batch sizes, M ∈ {32, 64, 128}. As shown in Figure 4, the active sampling selects more categories than random sampling across all three mini-batch sizes. AtM = 128, active sampling (with gradient embedding) covers 60-70% of categories on UCF101, which is substantially more diverse than random sampling (30-40%).\nC VISUALIZATION OF NEGATIVE INSTANCES\nFigure 5 shows negative instances selected by active sampling and random sampling when we use audio clips as the query. We visualize the center frames of the selected video clips. We can see that our approach selects more challenging examples than the random sampling approach. For instance,\ngiven a query opening bottle, our approach selected video clips from the same or similar semantic categories, e.g. drinking shot and opening bottle. Given snowboarding, our approach selected more video clips related to categories containing the snow scene, e.g. ice fishing, snow kiting, and tobogganing.\nFurthermore, we also find that our approach selects more diverse negative samples. For example, given a query snowboarding, active sampling selected video clips from 4 different categories related to the snow scene: (ice fishing, playing ice hockey, snow kiting, and tobogganing). In comparison, the random sampling approach yields fewer semantic cate-\ngories in general. This suggests that our active sampling approach produces more ‘challenging’ and ‘diverse’ negative instances than the random sampling approach.\nTo clearly investigate the relationship between negative samples and their gradient magnitudes, we show the gradient norm of each visualized sample in Figure 5. We can see that hard negatives tend to have larger gradient norms than easy negatives. Given a query Playing guitar, video clips containing the concept of “playing instruments” yield the higher gradient norms, i.e. playing violin (333.87) and tapping guitar (301.35), while concepts that are easy to discriminate, e.g., riding a camel yield a significantly smaller gradient norm (5.92). 
This provides evidence showing the gradient magnitude is effective in measuring the uncertainty of the current model, i.e., highly-uncertain samples (hard negatives) tend to yield gradients with larger magnitudes, while highly-confident samples (easy negatives) tend to have smaller gradient magnitudes." }, { "heading": "D WHEN WOULD CROSS-MODAL CONTRASTIVE LEARNING FAIL?", "text": "In general, cross-modal video representation learning is based on an assumption that the natural correspondence between audio and visual channels could serve as a useful source of supervision. While intuitive, this assumption may not hold for certain videos in-the-wild, which may cause the model to learn suboptimal representations. To investigate when our approach succeeds and fails, we conduct a post-hoc analysis by using thehttps://www.overleaf.com/project/5ded2abe1c17bc00011e5da8 ground-truth semantic category labels provided in Kinetics-700 (Carreira et al., 2019) (which is not used during pretraining). Specifically, we use our pretrained model to solve the audio-visual contrastive pretext task (Eqn.(7) in the main paper) and keep track of the prediction results (correct/incorrect). We then average the pretext task accuracy over 100 randomly chosen samples for each action category.\nFigure 6 shows the top-10 and bottom-5 classes by using both audio (left) and video (right) as the query. We observe that, the top ranked classes for both audio and video are the activities that have highly correlated visual-audio signals. For instance, playing bass guitar, play piano, and play violin are all activities related to music. The correlation of audio-visual signals for these activities are obvious; such highly correlated signals are easier to be learned in a cross-modal manner. On the contrary, the bottom ranked classes are those that have subtle audio-visual correlation, e.g. tossing coin, shaking hand, looking at phone, and hugging. We also investigate the distribution of hard-easy classes with that reported in Kinetics-700 (Carreira et al., 2019) learned by the I3D-RGB model (Carreira & Zisserman, 2017). Interestingly, we find that some hard classes (e.g. karaoke and recording music) are listed in our top ranked classes. We suspect that, when only learned within visual modality, some classes with cluttered or complected spatial information will bring difficulties for classification. While, as our cross-modal approach can\nleverage information from both auditory and visual information, so our model does not limited by such problems.\nAlgorithm 2 Cross-Modal Active Contrastive Coding (Detailed version of Algorithm 1) 1: Require: Audio-visual clips A, V ; encoders fv , fa, hv , ha; dictionary size K; pool size N ; batch size M 2: Initialize parameters, θvq , θvk, θ a q , θ a k v Uniform(0, 1)\n3: Draw random dictionary, Dv ← {v1, · · · , vK} v Random(V ), Da ← {a1, · · · , aK} v Random(A) 4: Encode dictionary samples, kvi ← hv(vi), ∀vi ∈ Dv , kai ← ha(ai), ∀ai ∈ Da 5: for epoch = 1 to #epochs: do 6: Draw random pool, Uv ← {v1, · · · , vN} v Random(V ), Ua ← {a1, · · · , aN} v Random(A) 7: Encode pool samples, kvn ← hv(vn), ∀vn ∈ Uv , kan ← ha(an), ∀an ∈ Ua 8: for t = 1 to #mini-batches: do 9: Draw mini-batch, Bv ← {v1, · · · , vM} v V , Ba ← {a1, · · · , aM} v A\n10: . 
Active sampling of negative video keys for Dv 11: Encode mini-batch samples, qai ← fa(ai), ∀ai ∈ Ba 12: for ∀vn ∈ Uv\\Dv: do 13: Compute pseudo-posterior, p(ŷvn|vn, Ba)← exp(kvn·q a j )∑M\ni=1 exp(k v n·qai ) ,∀j ∈ [1,M ] 14: Compute pseudo-label, ỹvn ← arg max p(ŷvn| · ) 15: end for 16: Compute gradient, gvn ← ∂∂θlastLCE (p(ŷ v n| · ), ỹvn) |θ=θaq ,∀n ∈ [1, N ] 17: Obtain Sv ← k-MEANS++INIT ({gvn : vn ∈ Uv\\Dv},#seeds = M) 18: Update Dv ← ENQUEUE(DEQUEUE(Dv), Sv) 19: . Active sampling of negative audio keys for Da 20: Encode mini-batch samples, qvi ← fv(vi), ∀vi ∈ Bv 21: for ∀an ∈ Ua\\Da: do 22: Compute pseudo-posterior, p(ŷan|an, Bv)← exp(kan·q v j )∑M\ni=1 exp(k a n·qvi ) , ∀j ∈ [1,M ] 23: Compute pseudo-label, ỹan ← arg max p(ŷan| · ) 24: end for 25: Compute gradient, gan ← ∂∂θlastLCE (p(ŷ a n| · ), ỹan) |θ=θvq , ∀n ∈ [1, N ] 26: Obtain Sa ← k-MEANS++INIT ({gan : an ∈ Ua\\Da},#seeds = M) 27: Update Da ← ENQUEUE(DEQUEUE(Da), Sa) 28: . Cross-modal contrastive predictive coding 29: Encode mini-batch samples, kvi ← hv(vi), ∀vi ∈ Bv , kai ← ha(ai),∀ai ∈ Ba 30: Compute p(yvi |·) = exp(qvi ·k a i /τ)∑K\nj=0 exp(q v i ·k a j /τ)\n, p(yai |·) = exp(qai ·k v i /τ)∑K\nj=0 exp(q a i ·k v j /τ) ,∀i ∈ [1,M ] 31: . Update model parameters 32: Update θvq ← θvq − γ∇θLCE(p(yv| · ), yvgt)|θ=θvq , θ a q ← θaq − γ∇θLCE(p(ya| · ), yagt)|θ=θaq 33: Momentum update θvk ← mθvk + (1−m)θvq , θak ← mθak + (1−m)θaq 34: end for 35: end for 36: return Optimal solution θvq , θvk, θaq , θak\nAlgorithm 3 k-MEANS++INIT Seed Cluster Initialization 1: Require: Data X of N samples; number of centroids K 2: Choose one centroid uniformly at random, C[0]← x v Random(X) 3: for k = 1 to K − 1: do 4: . Compute a cumulative probability distribution with a probability in proportion to their squared dis-\ntances from the nearest centroid that has already been chosen 5: for n = 0 to N − 1: do 6: Compute the squared distance, D[n]← (min dist(X[n], C))2 7: end for 8: Compute the cumulative probability distribution, P ← cumsum(D)\nsum(D)\n9: . The next centroid is chosen using P (X) as a weighted probability distribution 10: Choose one centroid at random, C[k]← x v P (X) 11: end for 12: return C containing K centroids\nAlgorithm 4 Cross-Modal Contrastive Coding without Active Sampling 1: Require: Audio-visual clips A, V ; dictionary Dv; encoders fv , fa, hv , ha;\ndictionary size K; mini-batch size M ; learning rate γ; momentum m 2: Initialize parameters, θvq , θvk, θ a q , θ a k v Uniform(0, 1) 3: Load a dictionary at random, Da ← {v1, · · · , vK} v Random(V ) 4: Load a dictionary at random, Dv ← {a1, · · · , aK} v Random(A) 5: Encode dictionary samples, kvi ← hv(vi), ∀vi ∈ Da, kai ← ha(ai), ∀ai ∈ Dv 6: for epoch = 1 to #epochs: do 7: for t = 1 to #mini-batches: do 8: Load a mini-batch of visual clips, Bv ← {v1, · · · , vM} v V 9: Load a mini-batch of audio clips, Ba ← {a1, · · · , aM} v A\n10: . Update dictionaries 11: Encode mini-batch samples, kvi ← hv(vi), ∀vi ∈ Bv 12: Encode mini-batch samples, kai ← ha(ai),∀ai ∈ Ba 13: Update Dv ← ENQUEUE(DEQUEUE(Dv), Bv) 14: Update Da ← ENQUEUE(DEQUEUE(Da), Ba) 15: . Cross-modal contrastive predictive coding 16: Encode mini-batch samples, qvi ← fv(vi), ∀vi ∈ Bv 17: Encode mini-batch samples, qai ← fa(ai),∀ai ∈ Ba 18: Compute the posterior, p(yvi |vi, ai, Dv) = exp(qvi ·k a i /τ)∑K\nj=0 exp(q v i ·k a j /τ)\n, ∀i ∈ [1,M ]\n19: Compute the posterior, p(yai |ai, vi, Dv) = exp(qai ·k v i /τ)∑K\nj=0 exp(q a i ·k v j /τ) , ∀i ∈ [1,M ] 20: . 
Update model parameters 21: Update θvq ← θvq − γ∇θLCE(− log p(yv| · ), yvgt)|θ=θvq 22: Update θaq ← θaq − γ∇θLCE(− log p(ya| · ), yagt)|θ=θaq 23: Momentum update θvk ← mθvk + (1−m)θvq 24: Momentum update θak ← mθak + (1−m)θaq 25: end for 26: end for 27: return Optimal solution θvq , θvk, θaq , θak" } ]
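To summarize the listings above, the visual-to-audio half of one training iteration (Eqs. (6)-(8), with active sampling and the auxiliary FC layers omitted) can be condensed as the following sketch; encoder definitions, data loading, and the optimizer over the query-encoder parameters are assumed, and this is not the released implementation.

```python
import torch
import torch.nn.functional as F

def v2a_step(fv, fa, ha, v_batch, a_batch, Da, opt, tau=0.7, m=0.999):
    """One visual->audio iteration: contrastive loss against the audio
    dictionary Da (K, D), backprop into the query encoder fv (Eq. 7),
    momentum update of the audio key encoder ha from fa (Eq. 8)."""
    qv = F.normalize(fv(v_batch), dim=1)             # queries (M, D)
    with torch.no_grad():
        ka = F.normalize(ha(a_batch), dim=1)         # positive keys (M, D)
    l_pos = (qv * ka).sum(dim=1, keepdim=True)
    l_neg = qv @ Da.t()
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(qv.size(0), dtype=torch.long, device=qv.device)
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()     # Eq. (7)
    with torch.no_grad():                            # Eq. (8)
        for pk, pq in zip(ha.parameters(), fa.parameters()):
            pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)
    return loss.item()
```

The audio-to-visual half mirrors this with the roles of the modalities swapped, so training proceeds bi-directionally.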
2021
ACTIVE CONTRASTIVE LEARNING OF AUDIO-VISUAL VIDEO REPRESENTATIONS
SP:dc90daee29d8bea60a4033e06a9e36e660597ea2
[ "In the paper, the authors adapt CycleGAN, a well-known model for unpaired image-to-image translation, to automatic music arrangement by treating MFCCs extracted from audio recordings as images. Also, the authors propose a novel evaluation metric, which learns how to rate generated audio from the ratings of (some) music experts. The authors make use of two large-scale datasets to train and evaluate the model on two scenarios, namely 1) generating drum accompaniment a given bass line, 2) generating arrangement given a voice line. They report promising results on the first task; however, the model is not as successful on the second (more challenging) task." ]
When talking about computer-based music generation, there are two main threads of research: the construction of autonomous music-making systems, and the design of computer-based environments to assist musicians. However, even though creating accompaniments for melodies is an essential part of every producer’s and songwriter’s work, little effort has been made in the field of automatic music arrangement in the audio domain. In this contribution, we propose a novel framework for automatic music accompaniment in the Mel-frequency domain. Using several songs converted into Mel-spectrograms – a two-dimensional time-frequency representation of audio signals – we were able to automatically generate original arrangements for both bass and voice lines. Treating music pieces as images (Mel-spectrograms) allowed us to reformulate our problem as an unpaired image-to-image translation problem, and to tackle it with CycleGAN, a well-established framework. Moreover, the choice to deploy raw audio and Mel-spectrograms enabled us to more effectively model long-range dependencies, to better represent how humans perceive music, and to potentially draw sounds for new arrangements from the vast collection of music recordings accumulated in the last century. Our approach was tested on two different downstream tasks: given a bass line, creating credible and on-time drums, and given an acapella song, arranging it into a full song. In the absence of an objective way of evaluating the output of music generative systems, we also defined a possible metric for the proposed task, partially based on human (and expert) judgement.
[]
[ { "authors": [ "Asger Heidemann Andersen", "Jan Mark de Haan", "Zheng-Hua Tan", "Jesper Jensen" ], "title": "A non-intrusive short-time objective intelligibility measure", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Gérard Assayag", "Camilo Rueda", "Mikael Laurson", "Carlos Agon", "Olivier Delerue" ], "title": "Computerassisted composition at ircam: From patchwork to openmusic", "venue": "Computer music journal,", "year": 1999 }, { "authors": [ "Aishwarya Bhave", "Mayank Sharma", "Rekh Ram Janghel" ], "title": "Music generation using deep learning", "venue": "In Soft Computing and Signal Processing,", "year": 2019 }, { "authors": [ "Nicolas Boulanger-Lewandowski", "Yoshua Bengio", "Pascal Vincent" ], "title": "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription", "venue": "arXiv preprint arXiv:1206.6392,", "year": 2012 }, { "authors": [ "Jean-Pierre Briot", "Gaëtan Hadjeres", "François-David Pachet" ], "title": "Deep learning techniques for music generation", "venue": null, "year": 2020 }, { "authors": [ "Gino Brunner", "Yuyi Wang", "Roger Wattenhofer", "Sumu Zhao" ], "title": "Symbolic music genre transfer with cyclegan", "venue": "IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI),", "year": 2018 }, { "authors": [ "R. Decorsière", "P.L. Søndergaard", "E.N. MacDonald", "T. Dau" ], "title": "Inversion of auditory spectrograms, traditional spectrograms, and other envelope representations", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2015 }, { "authors": [ "Michaël Defferrard", "Kirell Benzi", "Pierre Vandergheynst", "Xavier Bresson" ], "title": "FMA: A dataset for music analysis", "venue": "In 18th International Society for Music Information Retrieval Conference (ISMIR),", "year": 2017 }, { "authors": [ "Michaël Defferrard", "Sharada P. Mohanty", "Sean F. Carroll", "Marcel Salathé" ], "title": "Learning to recognize musical genre from audio", "venue": "In The 2018 Web Conference Companion. ACM Press,", "year": 2018 }, { "authors": [ "Alexandre Défossez", "Nicolas Usunier", "Léon Bottou", "Francis Bach" ], "title": "Demucs: Deep extractor for music sources with extra unlabeled data remixed", "venue": null, "year": 1909 }, { "authors": [ "Prafulla Dhariwal", "Heewoo Jun", "Christine Payne", "Jong Wook Kim", "Alec Radford", "Ilya Sutskever" ], "title": "Jukebox: A generative model for music", "venue": "arXiv preprint arXiv:2005.00341,", "year": 2020 }, { "authors": [ "Sander Dieleman", "Aaron van den Oord", "Karen Simonyan" ], "title": "The challenge of realistic music generation: modelling raw audio at scale", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "D. 
Griffin", "Jae Lim" ], "title": "Signal estimation from modified short-time fourier transform", "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing,", "year": 1984 }, { "authors": [ "Gaëtan Hadjeres", "Frank Nielsen" ], "title": "Interactive music generation with positional constraints using anticipation-rnns", "venue": "arXiv preprint arXiv:1709.06404,", "year": 2017 }, { "authors": [ "Gaëtan Hadjeres", "François Pachet", "Frank Nielsen" ], "title": "Deepbach: a steerable model for bach chorales generation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Curtis Hawthorne", "Erich Elsen", "Jialin Song", "Adam Roberts", "Ian Simon", "Colin Raffel", "Jesse Engel", "Sageev Oore", "Douglas Eck" ], "title": "Onsets and frames: Dual-objective piano transcription", "venue": "arXiv preprint arXiv:1710.11153,", "year": 2017 }, { "authors": [ "Curtis Hawthorne", "Andriy Stasyuk", "Adam Roberts", "Ian Simon", "Cheng-Zhi Anna Huang", "Sander Dieleman", "Erich Elsen", "Jesse Engel", "Douglas Eck" ], "title": "Enabling factorized piano music modeling and generation with the maestro dataset", "venue": "arXiv preprint arXiv:1810.12247,", "year": 2018 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Cheng-Zhi Anna Huang", "Ashish Vaswani", "Jakob Uszkoreit", "Ian Simon", "Curtis Hawthorne", "Noam Shazeer", "Andrew M Dai", "Matthew D Hoffman", "Monica Dinculescu", "Douglas Eck" ], "title": "Music transformer: Generating music with long-term structure", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Natasha Jaques", "Shixiang Gu", "Richard E Turner", "Douglas Eck" ], "title": "Tuning recurrent neural networks with reinforcement learning", "venue": "arXiv preprint arXiv:1611.02796,", "year": 2016 }, { "authors": [ "Nan Jiang", "Sheng Jin", "Zhiyao Duan", "Changshui Zhang" ], "title": "Rl-duet: Online music accompaniment generation using deep reinforcement learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Daniel D Johnson" ], "title": "Generating polyphonic music using tied parallel networks. 
In International conference on evolutionary and biologically inspired music and art", "venue": null, "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Stefan Lattner", "Maarten Grachten", "Gerhard Widmer" ], "title": "Imposing higher-level structure in polyphonic music generation using convolutional restricted boltzmann machines and constraints", "venue": "Journal of Creative Music Systems,", "year": 2018 }, { "authors": [ "Dimos Makris", "Maximos Kaliakatsos-Papakostas", "Ioannis Karydis", "Katia Lida Kermanidis" ], "title": "Combining lstm and feed forward neural networks for conditional rhythm composition", "venue": "In International Conference on Engineering Applications of Neural Networks,", "year": 2017 }, { "authors": [ "Sanidhya Mangal", "Rahul Modak", "Poorva Joshi" ], "title": "Lstm based music generation system", "venue": "arXiv preprint arXiv:1908.01080,", "year": 2019 }, { "authors": [ "Rachel Manzelli", "Vijay Thakkar", "Ali Siahkamari", "Brian Kulis" ], "title": "An end to end model for automatic music generation: Combining deep raw and symbolic audio networks", "venue": "In Proceedings of the Musical Metacreation Workshop at 9th International Conference on Computational Creativity,", "year": 2018 }, { "authors": [ "Huanru Henry Mao", "Taylor Shin", "Garrison Cottrell" ], "title": "Deepj: Style-specific music generation", "venue": "IEEE 12th International Conference on Semantic Computing (ICSC),", "year": 2018 }, { "authors": [ "Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio" ], "title": "Samplernn: An unconditional end-to-end neural audio generation model", "venue": "arXiv preprint arXiv:1612.07837,", "year": 2016 }, { "authors": [ "Olof Mogren" ], "title": "C-rnn-gan: Continuous recurrent neural networks with adversarial training", "venue": "arXiv preprint arXiv:1611.09904,", "year": 2016 }, { "authors": [ "Noam Mor", "Lior Wolf", "Adam Polyak", "Yaniv Taigman" ], "title": "A universal music translation network", "venue": "arXiv preprint arXiv:1805.07848,", "year": 2018 }, { "authors": [ "Meinard Müller" ], "title": "Fundamentals of music processing: Audio, analysis, algorithms, applications", "venue": null, "year": 2015 }, { "authors": [ "Peter Neubäcker" ], "title": "Sound-object oriented analysis and note-object oriented processing of polyphonic sound", "venue": "recordings, September", "year": 2011 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Alexandre Papadopoulos", "Pierre Roy", "François Pachet" ], "title": "Assisted lead sheet composition using flowcomposer", "venue": "In International Conference on Principles and Practice of Constraint Programming,", "year": 2016 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Zafar Rafii", "Antoine Liutkus", "Fabian-Robert Stöter", "Stylianos Ioannis Mimilakis", "Rachel Bittner" ], "title": "The MUSDB18 corpus for 
music separation, December 2017", "venue": "URL https://doi.org/10. 5281/zenodo.1117372", "year": 2017 }, { "authors": [ "Yi Ren", "Jinzheng He", "Xu Tan", "Tao Qin", "Zhou Zhao", "Tie-Yan Liu" ], "title": "Popmag: Pop music accompaniment generation", "venue": "arXiv preprint arXiv:2008.07703,", "year": 2020 }, { "authors": [ "Adam Roberts", "Jesse Engel", "Colin Raffel", "Curtis Hawthorne", "Douglas Eck" ], "title": "A hierarchical latent vector model for learning long-term structure in music", "venue": "arXiv preprint arXiv:1803.05428,", "year": 2018 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "M Senior" ], "title": "Celemony melodyne dna editor", "venue": "Sound on Sound,", "year": 2009 }, { "authors": [ "Siddharth Sigtia", "Emmanouil Benetos", "Simon Dixon" ], "title": "An end-to-end neural network for polyphonic piano music transcription", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2016 }, { "authors": [ "Stanley Smith Stevens", "John Volkmann", "Edwin B Newman" ], "title": "A scale for the measurement of the psychological magnitude pitch", "venue": "The Journal of the Acoustical Society of America,", "year": 1937 }, { "authors": [ "Sean Vasquez", "Mike Lewis" ], "title": "Melnet: A generative model for audio in the frequency domain", "venue": "arXiv preprint arXiv:1906.01083,", "year": 2019 }, { "authors": [ "Xin Wang", "Shinji Takaki", "Junichi Yamagishi" ], "title": "Neural source-filter waveform models for statistical parametric speech synthesis", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2019 }, { "authors": [ "Ivan P Yamshchikov", "Alexey Tikhonov" ], "title": "Music generation with variational recurrent autoencoder supported by history", "venue": "arXiv preprint arXiv:1705.05458,", "year": 2017 }, { "authors": [ "Li-Chia Yang", "Alexander Lerch" ], "title": "On the evaluation of generative models in music", "venue": "Neural Computing and Applications,", "year": 2020 }, { "authors": [ "Li-Chia Yang", "Szu-Yu Chou", "Yi-Hsuan Yang" ], "title": "Midinet: A convolutional generative adversarial network for symbolic-domain music generation", "venue": "arXiv preprint arXiv:1703.10847,", "year": 2017 }, { "authors": [ "Yi Zhao", "Xin Wang", "Lauri Juvela", "Junichi Yamagishi" ], "title": "Transferring neural speech waveform synthesizers to musical instrument sounds generation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Hongyuan Zhu", "Qi Liu", "Nicholas Jing Yuan", "Chuan Qin", "Jiawei Li", "Kun Zhang", "Guang Zhou", "Furu Wei", "Yuanchun Xu", "Enhong Chen" ], "title": "Xiaoice band: A melody and arrangement generation framework for pop music", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The development of home music production has brought significant innovations into the process of pop music composition. Software like Pro Tools, Cubase, and Logic – as well as MIDI-based technologies and digital instruments – provide a wide set of tools to manipulate recordings and simplify the composition process for artists and producers. After recording a melody, maybe with the aid of a guitar or a piano, song writers can now start building up the arrangement one piece at a time, sometimes not even needing professional musicians or proper music training. As a result, singers and song writers – as well as producers – have started asking for tools that could facilitate, or to some extent even automate, the creation of full songs around their lyrics and melodies. To meet this new demand, the goal of designing computer-based environments to assist human musicians has become central in the field of automatic music generation (Briot et al., 2020). IRCAM OpenMusic (Assayag et al., 1999), Sony CSL-Paris FlowComposer (Papadopoulos et al., 2016), and Logic Pro X Easy Drummer are just some examples. In addition, more solutions based on deep learning techniques, such as RL-Duet (Jiang et al., 2020) – a deep reinforcement learning algorithm for online accompaniment generation – or PopMAG, a transformer-based architecture which relies on a multi-track MIDI representation of music (Ren et al., 2020), continue to be studied. A comprehensive review of the most relevant deep learning techniques applied to music is provided by (Briot et al., 2020). Most of these strategies, however, suffer from the same critical issue, which makes them less appealing in view of music production for commercial purposes: they rely on a symbolic/MIDI representation of music. The approach proposed in this paper, instead, is a first attempt at automatically generating an euphonic arrangement (two or more sound patterns that produce a pleasing and harmonious piece of music) in the audio domain, given a musical sample encoded in a two-dimensional time-frequency representation (in particular, we opted for the Mel-spectrogram time-frequency representation). Al-\nthough arrangement generation has been studied in the context of symbolic audio, indeed, switching to Mel-spectrograms allows us to preserve the sound heritage of other musical pieces (allowing operations such as sampling) and is more suitable for real-life cases, where voice, for instance, cannot be encoded in MIDI.\nWe focused our attention on two different tasks of increasing difficulty: (i) given a bass line to create credible and on-time drums, and (ii) given the voice line, to output a new and euphonic musical arrangement. Incidentally, we found out that – for training samples – our model was able to reconstruct the original arrangement pretty well, even though no pairing among the Mel-spectrograms of the two domains was performed. By means of the Mel-spectrogram representation of music, we can consider the problem of automatically generating an arrangement or accompaniment for a specific musical sample equivalent to an image-to-image translation task. For instance, if we have the Mel-spectrogram of an acapella song, we may want to produce the Mel-spectrogram of the same song including a suitable arrangement. 
To solve this task, we tested an unpaired image-to-image translation strategy known as CycleGAN (Zhu et al., 2017), which consists of translating an image from a source domain X to a target domain Y in the absence of paired examples, by training both the mapping from X to Y and from Y to X simultaneously, with the goal of minimizing a cycle consistency loss. The aforementioned system was trained on 5s pop music samples (equivalent to 256×256 Mel-spectrograms) coming both from the Free Music Archive (FMA) dataset (Defferrard et al., 2017; 2018), and from the Demucs dataset (Défossez et al., 2019). The short sample duration does not affect the proposed methodology, at least with respect to the arrangement task we focus on, and inference can be performed also on full songs. Part of the dataset was pre-processed first, since the FMA songs lack source separated channels (i.e. differentiated vocals, bass, drums, etc.). The required channels were extracted using Demucs (Défossez et al., 2019). The main innovations presented in this contribution are as follows: (i.) treating music pieces as images, we developed a framework to automatically generate music arrangement in the Mel-frequency domain, different from any other previous approach; (ii.) our approach is able to generate arrangements with low computational resources and limited inference time, if compared to other popular solutions for automatic music generation (Dhariwal et al., 2020); (iii.) we developed a metric – partially based on or correlated to human (and expert) judgement – to automatically evaluate the obtained results and the creativity of the proposed system, given the challenges of a quantitative assessment of music. To the best of our knowledge, this is the first work to face the automatic arrangement production task in the audio domain by leveraging a two-dimensional time-frequency representation." }, { "heading": "2 RELATED WORKS", "text": "The interest surrounding automatic music generation, translation and arrangement has greatly increased in the last few years, as proven by the high numbers of solutions proposed – see (Briot et al., 2020) for a comprehensive and detailed survey. Here we present a brief overview of the key contributions both in symbolic and audio domain.\nMusic generation & arrangement in the symbolic domain. There is a very large body of research that uses a symbolic representation of music to perform music generation and arrangement. The following contributions used MIDI, piano rolls, chord and note names to feed several deep learning architectures and tackle different aspects of the music generation problem. In (Yang et al., 2017), CNNs are used for generating melody as a series of MIDI notes either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars. In (Mangal et al., 2019; Jaques et al., 2016; Mogren, 2016; Makris et al., 2017), LSTM networks are used to generate musical notes, melodies, polyphonic music pieces, and long drum sequences, under constraints imposed by metrical rhythm information and a given bass sequence. The authors of (Yamshchikov & Tikhonov, 2017; Roberts et al., 2018), instead, use VAE networks to generate melodies. In (Boulanger-Lewandowski et al., 2012), symbolic sequences of polyphonic music are modeled in a completely general pianoroll representation, while the authors of (Hadjeres & Nielsen, 2017) propose a novel architecture to generate melodies satisfying positional constraints in the style of the soprano parts of the J.S. 
Bach chorale harmonisations encoded in MIDI. In (Johnson, 2017), RNNs are used for prediction and composition of polyphonic music; in (Hadjeres et al., 2017), highly convincing chorales in the style of Bach were automatically generated using note names; (Lattner et al., 2018) added higher-level structure on generated, polyphonic music, whereas (Mao et al., 2018) designed an end-to-end generative model capable of composing music conditioned on a specific mixture of composer styles. The\napproach described in (Hawthorne et al., 2018), instead, relies on notes as an intermediate representation to a suite of models – namely, a transcription model based on a CNN and a RNN network (Hawthorne et al., 2017), a self-attention-based music language model (Huang et al., 2018) and a WaveNet model (Oord et al., 2016) – capable of transcribing, composing, and synthesizing audio waveforms. Finally, (Zhu et al., 2018) proposes an end-to-end melody and arrangement generation framework, called XiaoIce Band, which generates a melody track with several accompaniments played by several types of instruments. As this extensive literature on music generation in the symbolic domain shows, a promising approach would be to work with symbolic music and then use state-of-the-art synthesizers to produce sounds. MIDI, music sheets and piano rolls, however, are not always easy to find or produce. Moreover, many musicians and artists can not read music and would be more comfortable to work in a less formalized setting. Finally, state-of-the-art synthesizers, although increasingly indistinguishable from live recordings, can not yet reproduce the infinite nuances of real voices and instruments. Conversely, raw audio representation could be more appealing for some creators given its flexibility and little music competence required.\nMusic generation & arrangement in the audio domain. Some of the most relevant approaches proposed so far in the field of waveform music generation deal with raw audio representation in the time domain. Many of these approaches draw methods and ideas from the extensive literature on audio and speech synthesis. For instance, in (Prenger et al., 2019) a flow-based network capable of generating high quality speech from mel-spectrograms is proposed, while in (Wang et al., 2019) the authors present a neural source-filter (NSF) waveform modeling framework that is straightforward to train and fast to generate waveforms. In (Zhao et al., 2020) recent neural waveform synthesizers such as WaveNet, WaveG-low, and the neural-source-filter (NSF) models are compared. (Mehri et al., 2016) tested a model for unconditional audio generation based on generating one audio sample at a time, and (Bhave et al., 2019) applied Restricted Boltzmann Machine and LSTM architectures to raw audio files in the frequency domain in order to generate music. A fully probabilistic and autoregressive model, with the predictive distribution for each audio sample conditioned on all previous ones, is used in (Oord et al., 2016) to produce novel and often highly realistic musical fragments. (Manzelli et al., 2018) combined two types of music generation models, namely symbolic and raw audio models, to train a raw audio model based on the WaveNet architecture, but that incorporates the notes of the composition as a secondary input to the network. 
Finally, in (Dhariwal et al., 2020) the authors tackled the long context of raw audio using a multi-scale VQ-VAE to compress it to discrete codes, and modeled such context through Sparse Transformers, in order to generate music with singing in the raw audio domain. Nonetheless, due to the computational resources required to directly model long-range dependencies in the time domain, either short samples of music can be generated or complex and large architectures and long inference time are required. On the other hand, in (Vasquez & Lewis, 2019) a novel approach is discussed, which proves that long-range dependencies can be more tractably modelled in two-dimensional time-frequency representations such as Mel-spectrograms. More precisely, the authors of this contribution designed a highly expressive probabilistic model and a multiscale generation procedure over Mel-spectrograms capable of generating high-fidelity audio samples which capture structure at timescales. It is worth recalling, as well, that treating spectrograms as images is the current standard for many Music Information Retrieval tasks, such as music transcription (Sigtia et al., 2016) and chord recognition.\nGenerative adversarial networks for music generation. Our work is precisely founded on this novel assumption, thus taking the best from the raw audio representation, while tackling the main issues induced by musical signals long-range dependencies thanks to the waveform-to-spectrograms conversion. Such two-dimensional representation of music paves the way to the application of several image processing techniques and image-to-image translation networks to carry out style transfer and arrangement generation (Isola et al., 2017; Zhu et al., 2017). It is worth recalling that the application of GANs to music generation tasks is not new: in (Brunner et al., 2018), Generative Adversarial Networks are applied on symbolic music to perform music genre transfer; however, to the best of our knowledge, GANs have never been applied to raw audio in the Mel-frequency domain for music generation purposes. As to the arrangement generation task, also in this case the large majority of approaches proposed in literature is based on symbolic representation of music: in (Ren et al., 2020), a novel Multi-track MIDI representation (MuMIDI) is presented, which enables simultaneous multi-track generation in a single sequence and explicitly models the dependency of the notes from different tracks by means of a Transformer-based architecture; in (Jiang et al., 2020), a deep reinforcement learning algorithm for online accompaniment generation is described. Coming to the most relevant issues in the development of music generation systems, both the training and\nevaluation of such systems haven proven challenging, mainly because of the following reasons: (i) the available datasets for music generation tasks are challenging due to their inherent high-entropy (Dieleman et al., 2018), and (ii) the definition of an objective metric and loss is a common problem to generative models such as GANs: at now, generative models in the music domain are evaluated based on the subjective response of a pool of listeners, and just for the MIDI representation a set of simple musically informed objective metrics was proposed (Yang & Lerch, 2020)." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 SOURCE SEPARATION FOR MUSIC", "text": "We present a novel framework for automatic music arrangement generation using an adversarially trained deep learning model. 
A key challenge to our approach is the scarce availability of music data featuring source separated channels (i.e. differentiated vocals, bass, drums, ...). To this end, we leverage Demucs by Défossez et al., a freely available tool, which separates music into its generating sources. Demucs features a U-NET encoder-decoder architecture with a bidirectional LSTM as middle hidden layer. In particular, we used a pre-trained model made available by the original authors, consisting of 6 convolutional encoder and decoder blocks and a middle hidden size of length 3200. Demucs is time-equivariant, meaning that shifts in the input mixture will cause congruent shifts in the output. The model does not feature this property naturally, but it is achieved through a workaround (randomized equivariant stabilization) as explained by the original authors. Nonetheless, at times this method produces noisy separations – with watered-down harmonics and traces of other instruments in the vocal segment – effectively hindering the ability of later parts of the pipeline to properly recognise and reconstruct the accompaniment, of which harmonics are a critical part. While better source-separation methods are available [SOTA SDR = 5.85, Demucs SDR = 5.67], we chose to use Demucs because it was faster and easier to embed in our pipeline. Moreover, for bass source separation it beats the state of the art [SOTA SDR = 5.28, Demucs SDR = 6.21]. Finally, we were adamant about picking a tool using deep learning because this may open the possibility to build an end-to-end trained pipeline in the future. This at least partially solves the challenge of data availability and allows us to feed our model with the appropriate signals." }, { "heading": "3.2 MUSIC REPRESENTATION – FROM RAW AUDIO TO MEL-SPECTROGRAMS", "text": "One of the main features of our method is to choose a two-dimensional time-frequency representation of the audio samples rather than a time representation. The spectrum is a common transformed representation for audio, obtained via a Short-Time Fourier Transform (STFT). The discrete STFT of a given signal x : [0 : L-1] := {0, 1, . . . , L-1} → R leads to the k-th complex Fourier coefficient for the m-th time frame, X(m, k) := \sum_{n=0}^{N-1} x(n + mH) \cdot w(n) \cdot e^{-2\pi i k n / N}, with m ∈ [0 : M] and k ∈ [0 : K], where w(n) is a sampled window function of length N ∈ N and H ∈ N is the hop size, which determines the step size in which the window is to be shifted across the signal (Müller, 2015). The spectrogram is a two-dimensional representation of the squared magnitude of the STFT, i.e. Y(m, k) := |X(m, k)|^2, with m ∈ [0 : M] and k ∈ [0 : K]. Figure 1 shows a Mel-spectrogram example (Stevens et al., 1937), which is treated as a single-channel image, representing the sound intensity with respect to time – x axis – and frequency – y axis (Briot et al., 2020). This choice allows us to better deal with the long-range dependencies typical of this kind of data and to reduce the computational resources and inference time required. Moreover, the Mel scale is based on a mapping between the actual frequency f and the perceived pitch m = 2595 \cdot \log_{10}(1 + f/700), as the human auditory system does not perceive pitch in a linear manner.
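To make the conversion concrete, here is a minimal sketch of the waveform-to-Mel-spectrogram step using torchaudio (the library this pipeline reports relying on); the parameter values are the ones given later in this section, and the function name is ours:

import torch
import torchaudio

# Transform configured with the parameters reported later in this section.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,   # sampling rate in Hz
    n_fft=2048,          # window length N
    hop_length=512,      # hop size H
    n_mels=256,          # number of Mel-frequency bins
)

def waveform_to_mel(path: str) -> torch.Tensor:
    waveform, sr = torchaudio.load(path)      # (channels, samples)
    assert sr == 22050, "resample first if the file uses a different rate"
    return mel_transform(waveform)            # (channels, 256, time frames)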
Finally, using Mel-spectrograms of pre-existing songs to train our model potentially enables to draw sounds for new arrangements from the vast collection of music recordings accumulated in the last century.\nAfter the source separation task was carried out on our song dataset, each source (and the full song) waveforms were turned into corresponding Mel-spectrograms. This has been done using PyTorch Audio1, to take advantage of robust, GPU accelerated conversion. We decided to discard the phase information in this process, to reduce the dimensionality of the representation. To revert back to the time-domain signal, we: (i.) apply a conversion matrix (using triangular filter banks)\n1Available at: https://pytorch.org/audio/stable/index.html\nto convert the Mel-frequency STFT to a linear scale STFT, where the matrix is calculated using a gradient-based method (Decorsière et al., 2015) to minimize the euclidean norm between the original Mel-spectrogram and the product between reconstructed spectrogram and filter banks; (ii.) use the Griffin-Lim’s algorithm (Griffin & Jae Lim, 1984) to reconstruct phase information.\nIt is worth noticing that Mel-scale conversion and the removal of STFT phases respectively discard frequency and temporal information, that results in a distortion in the recovered signal. To minimize this problem, we use a high-enough resolution of the Mel-spectrograms (Vasquez & Lewis, 2019), whose size can be tweaked with number of mels and STFT hop size parameters. Thus, the optimal parameters we found were the following ones: the sampling rate was initially set to 22050 Hz, the window length N to 2048, the number of Mel-frequency bins to 256 and the hop size H to 512. To fit our model requirements, we cropped out 256 × 256 windows from each Mel-spectrogram with an overlapping of 50 time frames, obtaining multiple samples from each song (each equivalent to 5 seconds of music)." }, { "heading": "3.3 IMAGE TO IMAGE TRANSLATION - CYCLEGAN", "text": "The automatic arrangement generation task was faced through an unpaired image-to-image translation framework, by adapting the CycleGAN model to our purpose. CycleGAN is a framework able to translate between domains without paired input-output examples, by assuming some underlying relationship between the domains and trying to learn that relationship. Based on a set of images in domain X and a different set in domain Y , the algorithm learns both a mapping G : X → Y and a mapping F : Y → X , such that the output ŷ = G(x) for every x ∈ X , is indistinguishable from images y ∈ Y and x̂ = G(y) for every y ∈ Y , is indistinguishable from images x ∈ X . The other relevant assumption is that, given a mapping G : X → Y and another mapping F : Y → X , then G and F should be inverses of each other, and both mappings should be bijections. This assumption is implemented by training both the mapping G and F simultaneously, and adding a cycle consistency loss that encourages F (G(x)) ≈ x and G(F (y)) ≈ y. The cycle consistency loss is then combined with the adversarial losses on domains X and Y (Zhu et al., 2017)." }, { "heading": "3.4 AUTOMATIC MUSIC PRODUCTION", "text": "The method we propose takes as input a set of N music songs in the waveform domain X = {xi}Ni=1, where xi is a waveform whose number of samples depends on the sampling rate and the audio length. Each waveform is then separated by Demucs into three different sources. 
Thus, we end up with four different WAV files for each song, which means a new set of data of the kind: X_NEW = {x_i, v_i, d_i, b_i}_{i=1}^{N}, where v_i, d_i and b_i represent vocals, drums, and bass respectively. Each track is then converted to its Mel-spectrogram representation. Since the CycleGAN model takes 256 × 256 images as input, each spectrogram is chunked into smaller pieces with an overlapping window of 50 time frames; finally, in order to obtain one-channel images from the original spectrograms, we performed a discretization step into the range [0, 255]. In the final stage of our pipeline, we feed the obtained dataset to the CycleGAN model, which has been adapted to the structure of this data. Even though the discretization step introduces some distortion – the original spectrogram values are floats – the impact on the audio quality is negligible.
At training time, as the model takes into account two domains X and Y, we considered two different experimental settings of increasing difficulty: (i.) we feed the model with bass and drum lines in order to create suitable drums given a bass line; (ii.) we take the vocals and the whole song respectively, with the goal of generating an arrangement euphonic to the vocal line. Solving the bass2drums task effectively would represent a first interesting intermediary goal. Drums and bass are usually the first instruments to be recorded when producing a song. Ideally, though, a system that automatically generates a full song given a voice line as input would be far more ambitious and disruptive, because it would allow anyone to express her/himself in music." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASET", "text": "For the quality of the generated music samples, it is important to carefully pick the training dataset. To train and test our model we decided to use the Free Music Archive (FMA) and the musdb18 dataset (Rafii et al., 2017), both made available quite recently. The Free Music Archive (FMA) is the largest publicly available dataset suitable for music information retrieval tasks (Defferrard et al., 2017; 2018). In its full form it provides 917 GB and 343 days of Creative Commons-licensed audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 unbalanced genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies. Given the size of FMA, we chose to select only untrimmed songs tagged as either pop, soul-RnB, or indie-rock, for a total of approximately 10,000 songs (around 700 hours of audio). It is possible to read the full list of songs at the FMA website, selecting the genre. We discarded all songs that were recorded live by filtering out all albums that contained the word "live" in the title. Finally, in order to better validate and fine-tune our model we decided to also use the full musdb18 dataset. This rather small dataset is made up of 100 tracks taken from the DSD100 dataset, 46 tracks from MedleyDB, 2 tracks kindly provided by Native Instruments, and 2 tracks from the Canadian rock band The Easton Ellises. It represents a unique and precious source of songs delivered in multi-track fashion. Each song comes as 5 audio files – vocals, bass, drums, others, full song – perfectly separated at the master level. We used the 100 tracks taken from the DSD100 dataset to fine-tune the model (around 6.5 hours), and the remaining 50 songs to test it (around 3.5 hours).
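As a concrete illustration of the chunking and discretization step described at the beginning of this section, here is a minimal sketch; the min-max normalization used before quantization is our assumption, since the exact mapping to [0, 255] is not specified:

import numpy as np

def chunk_and_quantize(mel: np.ndarray, size: int = 256, overlap: int = 50):
    """Crop a (256, T) Mel-spectrogram into overlapping size x size windows
    (consecutive windows share `overlap` time frames) and discretize each
    window to 8-bit grayscale in [0, 255]."""
    step = size - overlap
    chunks = []
    for start in range(0, mel.shape[1] - size + 1, step):
        window = mel[:, start:start + size]
        # assumed normalization scheme: per-window min-max scaling
        norm = (window - window.min()) / (window.max() - window.min() + 1e-8)
        chunks.append(np.uint8(norm * 255))
    return chunks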
It is worth noting that DEMUCS is not a perfect method for source separation: it introduces artefacts and noise of the original song in the separated sources output, making the task easier and inducing the model to learn to amplify it. For this reason our training strategy is to pre-train with the artificial FMA dataset then fine-tune with musdb18. Intuitively, the former, which is much larger, helps the model to create a good representation of musical signal; the latter, which is of higher quality, contributes to reducing bias induced by the noise and to further specializing to generate a base relying only on the (clean) input given." }, { "heading": "4.2 TRAINING OF THE CYCLE-GAN MODEL", "text": "For both the bass2drums and voice2song tasks, we trained our model on 2 Tesla V100 SXM2 GPUs with 32 GB of RAM for 12 epochs (FMA dataset), and fine-tuned it for 20 more epochs (musdb18 dataset). Each task required 6 days of training. For both the settings, as a final step, the spectrograms obtained were converted to the waveform domain, to evaluate the produced music. As to the CycleGAN model used for training, we relied on the default network available at this GitHub repository. As a result, the model uses a resnet 9blocks ResNet generator and a basic 70x70 PatchGAN as a discriminator. The Adam optimizer (Kingma & Ba, 2014) was chosen both for the generators and the discriminators, with betas (0.5, 0.999) and learning rate equal to 0.0002. The batch size was set to 1. The λ weights for cycle losses were both equal to 10." }, { "heading": "4.3 EXPERIMENTAL SETTING", "text": "There is an intrinsic difficulty in objectively evaluating artistic artifacts such as music. As a human construct, there are no objective, universal criteria for appreciating music. Nevertheless, in order to establish some form of benchmark and allow comparisons among different approaches, many generative approaches to raw audio, such as Jukebox (Dhariwal et al., 2020) or Universal Music Translation Network (Mor et al., 2018), try to overcome this obstacle by having the results manually tagged by human experts. Although this rating may be the best in terms of quality, the result is still somehow subjective, thus different people may end up giving different or biased ratings based on their personal taste. Moreover, the computational cost and time required to manually annotate the dataset could become prohibitive even for relatively few samples (over 1000). Aware of the great limits linked to this human-based approach and unable to find a more convincing evaluation procedure, we propose a new metric that highly correlates with human judgment. This could represent a first benchmark for the tasks at hand. The results remain somehow subjective, but at least we were able to automatically replicate our evaluators’ criteria and grades, saving time and money." }, { "heading": "4.4 METRICS", "text": "If we consider as a general objective for a system the capacity to assist composers and musicians, rather than to autonomously generate music, we should also consider as an evaluation criteria the satisfaction of the composer (notably, if the assistance of the computer allowed him to compose and create music that he may consider not having been possible otherwise), rather than the satisfaction of the auditors (who remain too often guided by some conformance to a current musical trend) (Briot et al., 2020).\nHowever, as previously stated, an exclusive human evaluation may be unsustainable in terms of computational cost and time required. 
Thus we carried out the following quantitative assessment of our model. We first produced 400 test samples – from as many different songs and authors – of artificial arrangements and drum lines starting from voice and bass lines that were not part of the training set. We then asked a professional guitarist who has been playing in a pop-rock band for more than 10 years, a professional drum player from the same band, and two pop and indie-rock music producers with more than 4 years of experience to manually annotate these samples, capturing the following musical dimensions: quality, euphony, coherence, intelligibility. More precisely, for each sample, we asked them to rate from 1 to 10 the following aspects: (i) Quality: a rating from 1 to 10 of the naturalness and absence of artifacts or noise, (ii) Contamination: a rating from 1 to 10 of the contamination by other sources, (iii) Credibility: a rating from 1 to 10 of the credibility of the sample, (iv) Time: a rating from 1 to 10 of whether the produced drums and arrangements are on time with the bass and voice lines. The choice fell on these four aspects after we asked the evaluators to list and describe the most relevant dimensions in the perceived quality of a piece of pop-rock music. The correlation matrix for all 4 annotators is shown in Table 1.
Ideally, we want to produce some quantitative measure whose outputs – when applied to generated samples – highly correlate with (i.e., predict) the experts' average grades. To achieve this goal, we trained a logistic regression model with features obtained through a comparison between the original arrangement and the model output, as well as the original drums and the artificial drums. Here are the details on how we obtained suitable features:
STOI-like features. We created a procedure – inspired by the STOI (Andersen et al., 2017) – whose output vector measures the correlation of the Mel-frequency bins throughout time between the original sample (arrangement/drums) and the fake one. The obtained vector can then be used to feed a multiple regression model whose dependent variable is the human score attributed to that sample. Here is the formalisation: HumanScore = \sum_{i=1}^{256} a_i [\sum_{t=1}^{256} (x_i^{(t)} - \bar{x}^{(t)})(y_i^{(t)} - \bar{y}^{(t)})]. To simplify, to each pair of samples (the original and the generated one) a 256-element-long vector is associated as follows: S(X, Y)(i) = \sum_{t=1}^{256} (x_i^{(t)} - \bar{x}^{(t)})(y_i^{(t)} - \bar{y}^{(t)}), where: (i.) X and Y are, respectively, the Mel-spectrogram matrices of the original and generated samples; (ii.) a_i is the i-th coefficient of the linear regression; (iii.) x_i^{(t)} and y_i^{(t)} are the i-th elements of the t-th column of matrices X and Y, respectively; (iv.) \bar{x}^{(t)} and \bar{y}^{(t)} are the means along the t-th column of matrices X and Y, respectively. Each feature i of the regression model is a sort of Pearson correlation coefficient between row i of X and row i of Y throughout time.
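A minimal sketch of the STOI-like feature vector S(X, Y) defined above (function and variable names are ours; inputs are 256 × 256 Mel-spectrogram matrices):

import numpy as np

def stoi_like_features(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """S(X, Y)(i) = sum_t (x_i^(t) - xbar^(t)) * (y_i^(t) - ybar^(t)),
    one feature per Mel-frequency bin (row) of the spectrograms."""
    Xc = X - X.mean(axis=0, keepdims=True)   # center each time frame (column)
    Yc = Y - Y.mean(axis=0, keepdims=True)
    return (Xc * Yc).sum(axis=1)             # sum over time for each bin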
FID-based features. In the context of GAN result evaluation, the Fréchet Inception Distance (FID) is supposed to improve on the Inception Score by actually comparing the statistics of generated samples to real samples (Salimans et al., 2016; Heusel et al., 2017). In other words, FID measures the probabilistic distance between two multivariate Gaussians, where X_r = N(\mu_r, \Sigma_r) and X_g = N(\mu_g, \Sigma_g) are the 2048-dimensional activations of the Inception-v3 pool3 layer – for real and generated samples respectively – modeled as normal distributions. The similarity between the two distributions is measured as follows: FID = ||\mu_r - \mu_g||_2^2 + Tr(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}). Nevertheless, since we want to assign a score to each sample, we just estimated the X_r = N(\mu_r, \Sigma_r) parameters – using different activation layers of the pre-trained Inception network – and then calculated the probability density associated to each fake sample. Finally, we added these scores to the regression model predictors."
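A minimal sketch of this per-sample Gaussian scoring, assuming the Inception activations for the real samples and for one fake sample have already been extracted; using the log-density instead of the raw density is our choice for numerical stability in high dimensions:

import numpy as np
from scipy.stats import multivariate_normal

def gaussian_log_score(real_acts: np.ndarray, fake_act: np.ndarray) -> float:
    """Fit N(mu_r, Sigma_r) to real-sample activations (rows = samples) and
    return the log-density of one fake sample's activation under it."""
    mu = real_acts.mean(axis=0)
    sigma = np.cov(real_acts, rowvar=False)
    return multivariate_normal.logpdf(fake_act, mean=mu, cov=sigma,
                                      allow_singular=True)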
}, { "heading": "4.5 EXPERIMENTAL RESULTS", "text": "For the bass2drums task, Figure 3 shows the distribution of grades for the 400 test samples – averaged among all four independent evaluators and over all four dimensions. We rounded the results to the closest integer to make the plot more readable. The higher the grade, the better the sample sounds. Additionally, to fully understand what to expect from samples graded similarly, we discussed the model results with the evaluators. We collectively listened to a random set of samples and it turned out that all four raters followed similar principles in assigning the grades. Samples with grade 1-3 are generally silent or very noisy. In samples graded 4-5 a few sounds start to emerge, but they are usually not very pleasant to listen to, nor coherent. Grades 6-7 identify drums that sound good and are coherent, but that are not continuous: they tend to follow the bass line too closely. Finally, samples graded 8 and 9 are almost indistinguishable from real drums, both in terms of sound and timing. In the phase of labeling non-graded samples, we therefore assigned a 0 to those samples whose average grade was between 1 and 5, and 1 to those between 6 and 10. Finally, we trained a multi-logistic regression model with both the STOI-like and the FID-based features. The model accuracy on the test set was 87%.
Given this pretty good result, we could then use this trained logistic model to label 14000 different 5s fake drum clips, produced from as many real bass lines. Two thirds of these were labeled as good sounding and on time. Here is a private SoundCloud playlist where you can listen to some of the most interesting results. As for the voice2song task, instead, results were less encouraging. Even though some nice arrangements were produced, the model failed to properly and euphonically arrange the input voice lines. For this reason, here we limit ourselves to reporting some of the best produced samples, in the hope of improving the model greatly in the following months. As for baselines, initially we thought about comparing our results to three particularly notable works (Dhariwal et al., 2020; Vasquez & Lewis, 2019; Mor et al., 2018), but after running some experiments we eventually realized that they could not be properly used for arrangement purposes. All three models produce very nice music samples, but none of them can take as input vocals or bass lines and produce a complementary arrangement. It is possible though that these models could be fine-tuned to solve this new task. In addition, we replicated exactly the same experiments using Pix2Pix by Isola et al., a well-known paired image-to-image architecture. Despite long training, results were very poor and quite unpleasant to listen to. Due to space concerns we do not report more details about this set of experiments.
Finally, with respect to the computational resources and time required to generate new arrangements, our approach shows several advantages, compared to auto-regressive models (Dhariwal et al., 2020). Since the output prediction can be fully parallelised, the inference time amounts to a forward pass and a Mel-spectrogram-to-waveform inverse conversion, whose duration depends on the input length but never exceeds a few minutes. Indeed, it is worth noting that, at inference time, arbitrarily long inputs can be processed and arranged." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "In this work, we presented a novel approach to automatically produce euphonic music arrangements starting from a voice line or a bass line. We applied Generative Adversarial Networks to real music pieces, treated as grayscale images (Mel-spectrograms). Given the novelty of the problem, we proposed a reasonable procedure to properly evaluate our model outputs. Notwithstanding the promising results, some critical issues need to be addressed before a more compelling architecture can be developed. First and foremost, a larger and cleaner dataset of source-separated songs should be created. In fact, manually separated tracks always contain a great deal of noise. Moreover, the model architecture should be further improved to focus on longer dependencies and to take into account the actual degradation of high frequencies. Finally, a certain degree of interaction and randomness should be inserted to make the model less deterministic and to give creators some control over the sample generation. Our contribution is nonetheless a first step toward more realistic and useful automatic music arrangement systems, and we believe that further significant steps could be made to reach the final goal of human-level automatic music arrangement production. Already now, software like Melodyne (Neubäcker, 2011; Senior, 2009) delivers producers a powerful user interface to directly intervene on a spectrogram-based representation of audio signals to correct, perfect, reshape and restructure vocals, samples and recordings of all kinds. It is not unlikely that in the future artists and composers will start creating their music almost like they were drawing." } ]
2020
AUTOMATIC MUSIC PRODUCTION USING GENERATIVE ADVERSARIAL NETWORKS
SP:bbada593ac1fae021d96b76f47f62772da50bdce
[ "This paper presents a novel method to remove the selection bias of graph data, which is neglected by previous methods. Specifically, the authors suspect that all variables observed by GNNs can be decomposed into two parts, stable variables and unstable variables. Then, DGNN, a differentiable decorrelation regularization is proposed to reweight each variable pair to eliminate estimation bias. Experiments on three datasets confirm its effectiveness." ]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data, i.e., the inconsistent distribution between the training set and the test set. In reality, the test data is not even available during the training process, making the selection bias agnostic. Training GNNs with biasedly selected nodes leads to significant parameter estimation bias and greatly impacts the generalization ability on test nodes. In this paper, we first present an experimental investigation, which clearly shows that selection bias drastically hinders the generalization ability of GNNs, and theoretically prove that selection bias causes biased estimation of GNN parameters. Then, to remove the bias in GNN estimation, we propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer. The differentiated decorrelation regularizer estimates a sample weight for each labeled node such that the spurious correlation of learned embeddings can be eliminated. We analyze the regularizer from a causal view, which motivates us to differentiate the weights of the variables based on their contribution to the confounding bias. Then, these sample weights are used for reweighting GNNs to eliminate the estimation bias, thus helping to improve the stability of prediction on unknown test nodes. Comprehensive experiments are conducted on several challenging graph datasets with two kinds of label selection bias. The results well verify that our proposed model outperforms the state-of-the-art methods and that DGNN is a flexible framework to enhance existing GNNs.
[]
[ { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Andrew Carlson", "Justin Betteridge", "Bryan Kisiel", "Burr Settles", "Estevam R Hruschka", "Tom M Mitchell" ], "title": "Toward an architecture for never-ending language learning", "venue": "In AAAI,", "year": 2010 }, { "authors": [ "Michael Cogswell", "Faruk Ahmed", "Ross Girshick", "Larry Zitnick", "Dhruv Batra" ], "title": "Reducing overfitting in deep networks by decorrelating representations", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Jens Hainmueller" ], "title": "Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies", "venue": "Political analysis,", "year": 2012 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Noémi Kreif", "Richard Grieve", "Iván Díaz", "David Harrison" ], "title": "Evaluation of the effect of a continuous treatment: a machine learning approach with an application to treatment for traumatic brain injury", "venue": "Health economics,", "year": 2015 }, { "authors": [ "Kun Kuang", "Ruoxuan Xiong", "Peng Cui", "Susan Athey", "Bo Li" ], "title": "Stable prediction with model misspecification and agnostic distribution shift", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Yuji Nakatsukasa" ], "title": "Absolute and relative weyl theorems for generalized eigenvalue problems", "venue": "Linear Algebra and its Applications,", "year": 2010 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Zheyan Shen", "Peng Cui", "Jiashuo Liu", "Tong Zhang", "Bo Li", "Zhitang Chen" ], "title": "Stable learning via differentiated variable decorrelation", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Zheyan Shen", "Peng Cui", "Tong Zhang", "Kun Kuang" ], "title": "Stable learning via sample reweighting", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are 
graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Zhilin Yang", "William W Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Liuyi Yao", "Zhixuan Chu", "Sheng Li", "Yaliang Li", "Jing Gao", "Aidong Zhang" ], "title": "A survey on causal inference", "venue": "arXiv preprint arXiv:2002.02770,", "year": 2020 }, { "authors": [ "Bianca Zadrozny" ], "title": "Learning and evaluating classifiers under sample selection bias", "venue": "In ICML, pp", "year": 2004 }, { "authors": [ "Fan Zhou", "Tengfei Li", "Haibo Zhou", "Hongtu Zhu", "Ye Jieping" ], "title": "Graph-based semi-supervised learning with non-ignorable non-response", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Yang" ], "title": "A description of each of dataset is given as follows: • Cora (Sen et al., 2008) is a citation network of Machine Learning papers that collected from 7 classes:{Theory, Case Based, Reinforcement Learning, Genetic Algorithms, Neural Networks, Probabilistic Methods, Rule Learning ", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Neural Networks (GNNs) are powerful deep learning algorithms on graphs with various applications (Scarselli et al., 2008; Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017). Existing GNNs mainly learn a node embedding through aggregating the features from its neighbors, and such message-passing framework is supervised by node label in an end-to-end manner. During this training procedure, GNNs will effectively learn the correlation between the structure pattern and node feature with node label, so that GNNs are capable of learning the embeddings of new nodes and inferring their labels.\nOne basic requirement of GNNs making precise prediction on unseen test nodes is that the distribution of labeled training and test nodes is same, i.e., the structure and feature of labeled training and test nodes follow the similar pattern, so that the learned correlation between the current graph and label can be well generalized to the new nodes. However, in reality, there are two inevitable issues. (1) Because it is difficult to control the graph collection in an unbiased environment, the relationship between the collected real-world graph and the labeled nodes is inevitably biased. Training on such graph will cause biased correlation with node label. Taking a scientist collaboration network as an example, if most scientists with “machine learning” (ML) label collaborate with those with “computer vision” (CV) label, existing GNNs may learn spurious correlation, i.e., scientists who cooperate with CV scientist are ML scientists. If a new ML scientist only connects with ML scientists or the scientists in other areas, it will be probably misclassified. (2) The test node in the real scenario is usually not available, implying that the distribution of new nodes is agnostic. Once the distribution is inconsistent with that in the training nodes, the performance of all the current GNNs will be hindered. Even transfer learning is able to solve the distribution shift problem, however, it still needs the prior of test distribution, which actually cannot be obtained beforehand. Therefore, the agnostic label selection bias greatly affects the generalization ability of GNNs on unknown test data.\nIn order to observe selection bias in real graph data, we conduct an experimental investigation to validate the effect of selection bias on GNNs (details can be seen in Section 2.1). We select training nodes with different biased degrees for each dataset, making the distribution of training nodes and test nodes inconsistent. The results clearly show that selection bias drastically hinders the performance of GNNs on unseen test nodes. Moreover, with heavier bias, the performance drops more. Further, we theoretically analyze how the data selection bias results in the estimation bias in GNN parameters (details can be seen in Section 2.2). Based on the stable learning technique (Kuang et al., 2020), we can assume that the learned embeddings consist of two parts: stable variables and unstable variables. The data selection bias will cause the spurious correlation between these two kinds of variables. Thereby we prove that with the inevitable model misspecification, the spurious correlation will further cause the parameter estimation bias. 
Once the weakness of the current GNNs with selection bias is identified, one natural question is “how to remove the estimation bias in GNNs?”\nIn this paper, we propose a novel Debiased Graph Neural Network (DGNN) framework for stable graph learning by jointly optimizing a differentiated decorrelation regularizer and a weighted GNN model. Specifically, the differentiated decorrelation regularizer is able to learn a set of sample weights under differentiated variable weights, so that the spurious correlation between stable and unstable variables would be greatly eliminated. Based on the causal view analysis of decorrelation regularizer, we theoretically prove that the weights of variables can be differentiated by the regression weights. Moreover, to better combine the decorrelation regularizer with GNNs, we prove that adding the regularizer to the embedding learned by the second to last layer could be both theoretically sound and flexible. Then the sample weights learned by decorrelation regularizer are used to reweight GNN loss so that the parameter estimation could be unbiased.\nIn summary, the contributions of this paper are three-fold: i) We investigate a new problem of learning GNNs with agnostic label selection bias. The problem setting is general and practical for real applications. ii) We bring the idea of variable decorrelation into GNNs to relieve bias influence on model learning and propose a general framework DGNN which could be adopted to various GNNs. iii) We conduct the experiments on real-world graph benchmarks with two kinds of agnostic label selection bias, and the experimental results demonstrate the effectiveness and flexibility of our model." }, { "heading": "2 EFFECT OF LABEL SELECTION BIAS ON GNNS", "text": "In this section, we first formulate our target problem as follows: Problem 1 (Semi-supervised Learning on Graph with Agnostic Label Selection Bias). Given a training graph Gtrain = {Atrain,Xtrain,Ytrain}, where Atrain ∈ RN×N (N nodes) represents the adjacency matrix, Xtrain ∈ R N×D (D features) refers to the node features and Ytrain ∈ R n×C (n labeled nodes, C classes) refers to the available labels for training (n ≪ N ), the task is to learn a GNN gθ(⋅) with parameter θ to precisely predict the label of nodes on test graph Gtest = {Atest,Xtest,Ytest}, where distribution Ψ(Gtrain) ≠ Ψ(Gtest)." }, { "heading": "2.1 EXPERIMENTAL INVESTIGATION", "text": "We conduct an experimental investigation to examine whether the state-of-the-art GNNs are sensitive to the selection bias. The main idea is that we will perform two representative GNNs: GCN (Kipf\n& Welling, 2016) and GAT (Veličković et al., 2017) on three widely used graph datasets: Cora, Citeseer, Pubmed (Sen et al., 2008) with different degrees of bias. If the performance drops sharply in comparison with the scenarios without selection bias, this will demonstrate that GNNs cannot generalize well in selection bias setting.\nTo simulate the agnostic selection bias scenario, we first follow the inductive setting in Wu et al. (2019) that masks the validation and test nodes as the training graph Gtrain in the training phase, and then infer the labels of validation and test nodes with whole graph Gtest. In this way, the distribution of test node can be considered agnostic. Following Zadrozny (2004), we design a biased label selection method on training graph Gtrain. The selection variable e is introduced to control whether the node will be selected as labeled nodes, where e = 1 means selected and 0 otherwise. 
For node i, we compute its neighbor distribution ratio: r_i = |{j | j ∈ N_i, y_j ≠ y_i}| / |N_i|, where N_i is the neighborhood of node i in G_train and y_j ≠ y_i means that the label of the central node i differs from the label of its neighbor j. Hence, r_i measures the difference between the label of the central node i and the labels of its neighborhood. Then we average r over all nodes to get a threshold t. For each node, the probability of being selected is: P(e_i = 1 | r_i) = \epsilon if r_i ≥ t, and P(e_i = 1 | r_i) = 1 - \epsilon if r_i < t, where \epsilon ∈ (0.5, 1) is used to control the degree of selection bias and a larger \epsilon means heavier bias. We set \epsilon as {0.7, 0.8, 0.9} to get three bias degrees for each dataset, termed as Light, Medium, Heavy, respectively. We select 20 nodes for each class for training, and the validation and test nodes are the same as in Yang et al. (2016). Furthermore, we take the unbiased datasets as baselines, where the labeled nodes are selected randomly.
Figure 1 shows the results of GCN and GAT on the biased datasets. The dashed lines denote the performances of GCN/GAT on unbiased datasets and the solid lines refer to the results on biased datasets. We can find that: i) The dashed lines are all above the corresponding coloured solid lines, indicating that selection bias greatly affects the GNNs' performance. ii) All solid lines decrease monotonically with the increase of bias degree, demonstrating that heavier bias causes a larger performance decrease." }, { "heading": "2.2 THEORETICAL ANALYSIS", "text": "The above experiment empirically verifies the effect of selection bias on GNNs. Here we theoretically analyze the effect of selection bias on estimating the parameters in GNNs. First, because biased labeled nodes have a biased neighborhood structure, GNNs will encode this biased information into the node embeddings. Based on the stable learning technique (Kuang et al., 2020), we make the following assumption: Assumption 1. All the variables of the embeddings learned by GNNs for each node can be decomposed as H = {S, V}, where S represents the stable variables and V represents the unstable variables. Specifically, for both training and test environments, E(Y | S = s, V = v) = E(Y | S = s).
Under Assumption 1, the distribution shift between the training set and the test set is mainly induced by the variation in the joint distribution over (S, V), i.e., P(S_train, V_train) ≠ P(S_test, V_test). However, there is an invariant relationship between the stable variables S and the outcome Y in both training and test environments, which can be expressed as P(Y_train | S_train) = P(Y_test | S_test). Assumption 1 can be guaranteed by Y ⊥ V | S. Thus, one can solve the stable prediction problem by developing a function f(⋅) based on S. However, one can hardly identify such variables in GNNs. Without loss of generality, we take Y as a continuous variable for the analysis and make the following assumption: Assumption 2.
The true generation process of the target variable Y contains not only the linear combination of the stable variables S, but also a nonlinear transformation of the stable variables.
Based on the above assumptions, we formalize the label generation process as follows:
Y = f(X, A) + \epsilon = G(X, A; \theta_g)_S \beta_S + G(X, A; \theta_g)_V \beta_V + g(G(X, A; \theta_g)_S) + \epsilon, (1)
where G(X, A; \theta_g) ∈ R^{N×p} denotes an unknown function of X and A that learns the node embeddings and can be instantiated by a GNN, such as GCN or GAT; the output variables of G(X, A; \theta_g) can be decomposed into stable variables G(X, A; \theta_g)_S ∈ R^{N×m} and unstable variables G(X, A; \theta_g)_V ∈ R^{N×q} (m + q = p); \beta_S ∈ R^{m×1} and \beta_V ∈ R^{q×1} are the linear coefficients that can be learned by the last layer of the GNN; \epsilon is independent random noise; and g(⋅) is the nonlinear transformation function of the stable variables. According to Assumption 1, we know that the coefficients of the unstable variables G(X, A; \theta_g)_V are actually 0 (i.e., \beta_V = 0). For a classical GNN model with a linear regressor, its prediction function can be formulated as:
\hat{Y} = \hat{G}(X, A; \theta_g)_S \hat{\beta}_S + \hat{G}(X, A; \theta_g)_V \hat{\beta}_V + \epsilon. (2)
Compared with Eq. (1), we can find that the parameters of the GNN could be unbiasedly estimated if the nonlinear term g(G(X, A; \theta_g)_S) = 0, because the GNN model would then have the same label generation mechanism as Eq. (1). However, limited by the nonlinear power of GNNs (Xu et al., 2019), it is reasonable to assume that there is a nonlinear term g(G(X, A; \theta_g)_S) ≠ 0 that cannot be fitted by the GNN. Under this assumption, we next take a vanilla GCN (Kipf & Welling, 2016) as an example to illustrate how the distribution shift induces parameter estimation bias. A two-layer GCN can be formulated as \hat{A}\sigma(\hat{A}XW^{(0)})W^{(1)}, where \hat{A} is the normalized adjacency matrix, W is the transformation matrix at each layer, and \sigma(⋅) is the ReLU activation function. We decompose the GCN into two parts: one is the embedding learning part \hat{A}\sigma(\hat{A}XW^{(0)}), which can be decomposed as [S^T, V^T], corresponding to \hat{G}(X, A; \theta_g)_S and \hat{G}(X, A; \theta_g)_V in Eq. (2); the other part is W^{(1)}, whose learned parameters can be decomposed as [\tilde{\beta}_S, \tilde{\beta}_V], corresponding to [\hat{\beta}_S, \hat{\beta}_V] in Eq. (2). We aim at minimizing the square loss: L_{GCN} = \sum_{i=1}^{n}(S_i^T \tilde{\beta}_S + V_i^T \tilde{\beta}_V - Y_i)^2. According to the derivation rule of the partitioned regression model, we have:
\tilde{\beta}_V - \beta_V = (\frac{1}{n}\sum_{i=1}^{n} V_i^T V_i)^{-1}(\frac{1}{n}\sum_{i=1}^{n} V_i^T g(S_i)) + (\frac{1}{n}\sum_{i=1}^{n} V_i^T V_i)^{-1}(\frac{1}{n}\sum_{i=1}^{n} V_i^T S_i)(\beta_S - \tilde{\beta}_S), (3)
\tilde{\beta}_S - \beta_S = (\frac{1}{n}\sum_{i=1}^{n} S_i^T S_i)^{-1}(\frac{1}{n}\sum_{i=1}^{n} S_i^T g(S_i)) + (\frac{1}{n}\sum_{i=1}^{n} S_i^T S_i)^{-1}(\frac{1}{n}\sum_{i=1}^{n} S_i^T V_i)(\beta_V - \tilde{\beta}_V), (4)
where n is the labeled node size, S_i is the i-th sample of S, \frac{1}{n}\sum_{i=1}^{n} V_i^T g(S_i) = E(V^T g(S)) + o_p(1), \frac{1}{n}\sum_{i=1}^{n} V_i^T S_i = E(V^T S) + o_p(1), and o_p(1) is an error term which is negligible. Ideally, \tilde{\beta}_V - \beta_V = 0 indicates that there is no bias between the estimated and the real parameter. However, if E(V^T S) ≠ 0 or E(V^T g(S)) ≠ 0 in Eq. (3), \tilde{\beta}_V will be biased, leading to a biased estimation of \tilde{\beta}_S in Eq. (4) as well. Since the correlation between V and S (or g(S)) might shift in the test phase, the biased parameters learned on the training set are not the optimal parameters for predicting the test nodes. Therefore, to increase the stability of prediction, we need to unbiasedly estimate the parameters \tilde{\beta}_V by removing the correlation between V and S (or g(S)) on the training graph, making E(V^T S) = 0 or E(V^T g(S)) = 0. Note that the term \frac{1}{n}\sum_{i=1}^{n} S_i^T g(S_i) in Eq.
(4) can also cause estimation bias, but the relation between S and g(S) is stable across environments, which does not influence the stability." }, { "heading": "3 PROPOSED MODEL", "text": "" }, { "heading": "3.1 REVISITING VARIABLE DECORRELATION IN A CAUSAL VIEW", "text": "To decorrelate V and S (or g(S)), we should decorrelate the output variables of \hat{G}(X, A; \theta_g) (Kuang et al., 2020). They propose a Variable Decorrelation (VD) term with a sample reweighting technique to eliminate the correlation between each variable pair, in which the sample weights are learned by jointly minimizing the moment discrepancy between each variable pair:
L_{VD}(H) = \sum_{j=1}^{p} ||H_{.j}^T \Lambda_w H_{.-j}/n - H_{.j}^T w/n \cdot H_{.-j}^T w/n||_2^2, (5)
where H ∈ R^{n×p} denotes the variables to be decorrelated, H_{.j} is the j-th variable of H, H_{.-j} = H\H_{.j} denotes all the remaining variables obtained by setting the value of the j-th variable in H to zero, w ∈ R^{n×1} are the sample weights, \sum_{i=1}^{n} w_i = n, and \Lambda_w = diag(w_1, ⋯, w_n) is the corresponding diagonal matrix. As we can see, L_{VD}(H) can be reformulated as \sum_{j≠k} ||H_{.j}^T \Lambda_w H_{.k}/n - H_{.j}^T w/n \cdot H_{.k}^T w/n||_2^2, and it aims to let E(H_{.j}^T H_{.k}) = E(H_{.j}^T)E(H_{.k}) for each variable pair j and k. L_{VD}(H) decorrelates all the variable pairs equally. However, decorrelating all the variables requires sufficient samples (Kuang et al., 2020), i.e., n → ∞, which is hard to satisfy, especially in the semi-supervised setting. In this scenario, we cannot guarantee L_{VD}(H) = 0. Therefore, the key challenge is how to remove the correlation that influences the unbiased estimation most when L_{VD}(H) ≠ 0. Inspired by the confounding balancing technique in observational studies (Hainmueller, 2012), we revisit the variable decorrelation regularizer in a causal view and show how to differentiate each variable pair. Confounding balancing techniques are often used for the causal effect estimation of a treatment T, where the distributions of the confounders X differ between the treated (T = 1) and control (T = 0) groups because of the non-random treatment assignment. One could balance the distribution of the confounders between the treatment and control groups to unbiasedly estimate causal treatment effects (Yao et al., 2020). Most balancing approaches exploit moments to characterize distributions, and balance them by adjusting the sample weights w as follows: w = arg min_w ||\sum_{i: T_i = 1} X_i - \sum_{i: T_i = 0} w_i \cdot X_i||_2^2. After balancing, the treatment T and the confounders X tend to be independent.
Given one targeted variable j, under the assumption that the variables only have linear relations^1, its decorrelation term, L_{VD_j} = ||H_{.j}^T \Lambda_w H_{.-j}/n - H_{.j}^T w/n \cdot H_{.-j}^T w/n||_2^2, aims to make H_{.j} independent of H_{.-j}, which is the same as the confounding balancing term making the treatment and confounders independent. Thereby, L_{VD_j} can also be viewed as a confounding balancing term, where H_{.j} is the treatment and H_{.-j} are the confounders, as illustrated in Fig. 2(a). Hence, our target can be explained as unbiasedly estimating the causal effect of each variable, which is invariant across the training and test sets. As different variables may contribute unequally to the confounding bias, it is necessary to differentiate the confounders. The target of differentiating confounders exactly matches our target of removing the correlation of the variables that influence the unbiased estimation most.
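Before the variable weights are differentiated, a minimal PyTorch sketch of the plain VD regularizer in Eq. (5) may help (names are ours; H is the n × p matrix of labeled-node embeddings and w the sample weights; the constraint on w is assumed to be handled by the optimizer or penalties, as in Eq. (8)):

import torch

def vd_loss(H: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """L_VD(H) = sum_j || H_.j^T Λ_w H_.-j / n - (H_.j^T w / n)(H_.-j^T w / n) ||_2^2."""
    n, p = H.shape
    loss = H.new_zeros(())
    for j in range(p):
        Hj = H[:, j]                          # treatment variable H_.j, shape (n,)
        H_rest = H.clone()
        H_rest[:, j] = 0                      # H_.-j: j-th variable zeroed out
        weighted_cross = (Hj * w) @ H_rest / n     # H_.j^T Λ_w H_.-j / n, shape (p,)
        mean_j = (Hj @ w) / n                      # H_.j^T w / n (scalar)
        mean_rest = (H_rest.t() @ w) / n           # H_.-j^T w / n, shape (p,)
        loss = loss + ((weighted_cross - mean_j * mean_rest) ** 2).sum()
    return loss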
}, { "heading": "3.2 DIFFERENTIATED VARIABLE DECORRELATION", "text": "Considering a continuous treatment, the causal effect of the treatment can be measured by the Marginal Treatment Effect Function (MTEF) (Kreif et al., 2015), defined as: MTEF = (E[Y_i(t)] - E[Y_i(t - \Delta t)]) / \Delta t, where Y_i(t) represents the potential outcome of sample i with treatment status T = t, E(⋅) refers to the expectation function, and \Delta t denotes the increasing level of the treatment. With the sample weights w decorrelating the treatment and confounders, we can estimate the MTEF by:
\hat{MTEF} = (\sum_{i: T_i = t} w_i \cdot Y_i(t) - \sum_{j: T_j = t - \Delta t} w_j \cdot Y_j(t - \Delta t)) / \Delta t. (6)
Next, we theoretically analyze how to differentiate the confounders' weights with the following theorem. Theorem 1. In observational studies, different confounders make unequal confounding bias on the Marginal Treatment Effect Function (MTEF) with their own weights, and the weights can be learned via regressing the outcome Y on the confounders X and the treatment variable T.
We prove Theorem 1 with the following assumption: Assumption 3 (Linearity). The regression of the outcome Y on the confounders X and the treatment variable T is linear, that is, Y = \sum_{k≠t} \alpha_k X_{.k} + \alpha_t T + c + \epsilon, where \alpha_k ∈ \alpha is the linear coefficient.
Under Assumption 3, we can write the estimator \hat{MTEF} as:
\hat{MTEF} = (\sum_{i: T_i = t} w_i \cdot Y_i(t) - \sum_{j: T_j = t - \Delta t} w_j \cdot Y_j(t - \Delta t)) / \Delta t = MTEF + \sum_{k≠t} \alpha_k ((\sum_{i: T_i = t} w_i \cdot X_{ik} - \sum_{j: T_j = t - \Delta t} w_j \cdot X_{jk}) / \Delta t) + \varphi(\epsilon), (7)
where MTEF is the ground truth, \varphi(\epsilon) denotes the noise term, and \varphi(\epsilon) ≃ 0 with Gaussian noise. The detailed derivation can be found in Appendix A. To reduce the bias of \hat{MTEF}, we need to regulate the term \sum_{k≠t} \alpha_k ((\sum_{i: T_i = t} w_i \cdot X_{ik} - \sum_{j: T_j = t - \Delta t} w_j \cdot X_{jk}) / \Delta t), where (\sum_{i: T_i = t} w_i \cdot X_{ik} - \sum_{j: T_j = t - \Delta t} w_j \cdot X_{jk}) / \Delta t means the difference of the k-th confounder between the treated and control samples. The parameter \alpha_k represents the confounding bias weight of the k-th confounder, and it is the coefficient of X_{.k}. Moreover, because our target is to learn the weight of each variable pair, i.e., between the treatment and each confounder, we also need to learn the weight \alpha_t of the treatment, which is the coefficient of T. Hence, the confounder weights and the treatment weight can be learned from the regression of the observed outcome Y on the confounders X and the treatment T under the Linearity assumption.
^1 Nonlinear relations between variables can be incorporated by considering high-order moments in Eq. (5).
[Figure 2: (a) the causal view of the decorrelation term – the treatment is the target variable of H^{(K-1)}, X are the confounders, corresponding to the remaining variables of the target variable in H^{(K-1)}, and Y is the outcome, corresponding to the labels; (b) the framework of GNN-DVD. The same color in the two figures represents the same kind of variable.]
Due to the connection between treatment effect estimation and variable decorrelation analyzed in Section 3.1, we utilize Theorem 1 to reweight the variables in the variable decorrelation term. When applying Theorem 1 to GNNs, the confounders X should be H_{.-j} and the treatment is H_{.j}, where the embedding H is learned by \hat{G}(X, A; \theta_g) in Eq. (2). The variable weights \alpha can then be computed from the regression coefficients for H, hence \alpha is equal to \hat{\beta} in Eq. (2). Then the Differentiated Variable Decorrelation (DVD) term can be formulated as follows:
min_w L_{DVD}(H) = \sum_{j=1}^{p} (\alpha^T \cdot abs(H_{.j}^T \Lambda_w H_{.-j}/n - H_{.j}^T w/n \cdot H_{.-j}^T w/n))^2 + \frac{\lambda_1}{n} \sum_{i=1}^{n} w_i^2 + \lambda_2 (\frac{1}{n} \sum_{i=1}^{n} w_i - 1)^2, s.t. w ⪰ 0, (8)
where abs(⋅) denotes the element-wise absolute value operation, preventing positive and negative values from cancelling each other out. The term \frac{\lambda_1}{n} \sum_{i=1}^{n} w_i^2 is added to reduce the variance of the sample weights to achieve stability, the term \lambda_2 (\frac{1}{n} \sum_{i=1}^{n} w_i - 1)^2 avoids all the sample weights being 0, and the constraint w ⪰ 0 requires each sample weight to be non-negative.
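Analogously to the VD sketch above, a minimal sketch of the DVD term in Eq. (8), with the variable weights alpha differentiating each pair and the two penalty terms included (names are ours; enforcing w ⪰ 0 is left to the caller):

def dvd_loss(H, w, alpha, lambda1=1.0, lambda2=1.0):
    """Eq. (8): each variable pair (j, k) is effectively weighted by
    alpha_j^2 * alpha_k^2 through the (alpha^T abs(...))^2 construction."""
    n, p = H.shape
    loss = H.new_zeros(())
    for j in range(p):
        Hj = H[:, j]
        H_rest = H.clone()
        H_rest[:, j] = 0
        decorr = ((Hj * w) @ H_rest / n
                  - (Hj @ w) / n * (H_rest.t() @ w) / n).abs()  # abs(.) in Eq. (8)
        loss = loss + (alpha @ decorr) ** 2                     # (alpha^T abs(...))^2
    return loss + lambda1 / n * (w ** 2).sum() + lambda2 * (w.mean() - 1) ** 2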
The term (\lambda_1 / n) \sum_{i=1}^{n} w_i^2 is added to reduce the variance of the sample weights and improve stability, the term \lambda_2 ( (1/n) \sum_{i=1}^{n} w_i - 1 )^2 prevents all sample weights from collapsing to 0, and the constraint w \succeq 0 keeps each sample weight non-negative. After variable reweighting, the weighted decorrelation term in Eq. (8) can be rewritten as \sum_{j \neq k} \alpha_j^2 \alpha_k^2 \| H_{.j}^{\top} \Lambda_w H_{.k}/n - (H_{.j}^{\top} w/n) \cdot (H_{.k}^{\top} w/n) \|_2^2, so the weight for variable pair (j, k) is \alpha_j^2 \alpha_k^2, which accounts for the weights of both the treatment and the confounder. We prove the uniqueness of w in Appendix B:

Theorem 2 (Uniqueness). If \lambda_1 n \gg p^2 + \lambda_2, p^2 \gg \max(\lambda_1, \lambda_2), |H_{i,j}| \le c and |\alpha_i| \le c for some constant c, then the solution \hat{w} \in \{w : |w_i| \le c\} minimizing Eq. (8) is unique." }, { "heading": "3.3 DEBIASED GNN FRAMEWORK", "text": "In this section, we describe the Debiased GNN framework, which incorporates the DVD/VD term into GNNs in a seamless way. As analyzed in Section 2.2, decorrelating \hat{A}\sigma(\hat{A}XW^{(0)}) could make GCN stable. However, most GNNs follow a layer-by-layer stacking structure, and the output embedding of each layer is easier to obtain in implementation. Since \hat{A}\sigma(\hat{A}XW^{(0)}) is the aggregation of the first-layer embedding \sigma(\hat{A}XW^{(0)}), decorrelating these variables may lack the flexibility needed to combine the DVD/VD term with other GNN structures. Fortunately, the following theorem identifies a more flexible way to combine variable decorrelation with GNNs.

Theorem 3. Given p pairwise uncorrelated variables Z = (Z_1, Z_2, ..., Z_p) and a linear aggregation operator \hat{A}, the variables of Y = \hat{A}Z are still pairwise uncorrelated.

The proof can be found in Appendix C. The theorem indicates that if the variables of the embedding Z are uncorrelated, then after any form of linear neighborhood aggregation \hat{A} (e.g., average, attention, or sum), the variables of the transformed embedding Y remain uncorrelated. Therefore, decorrelating \sigma(\hat{A}XW^{(0)}) can also reduce the estimation bias. For a K-layer GNN, we can directly decorrelate the output of the (K-1)-th layer, i.e., \sigma(\hat{A} \cdots \sigma(\hat{A}XW^{(0)}) \cdots W^{(K-2)}) for a K-layer GCN.

The previous analysis finds a flexible way to incorporate the DVD/VD term into GNNs. However, recall that we analyzed GNNs based on the least-squares loss, while most existing GNNs are designed for classification. In the following, we show that the previous conclusions still apply to classification. We consider the case where a softmax layer is used as the output layer of the GNN and the loss is the cross-entropy error. We use the Newton-Raphson update rule (Bishop, 2006) to bridge the gap between linear regression and multi-class classification. According to the Newton-Raphson update rule, the update formula for the transformation matrix W^{(K-1)} of the last GCN layer is

W^{(new)}_{.j} = W^{(old)}_{.j} - (H^{\top} R H)^{-1} H^{\top} (H W^{(old)}_{.j} - Y_{.j})
             = (H^{\top} R H)^{-1} \{ H^{\top} R H W^{(old)}_{.j} - H^{\top} (H W^{(old)}_{.j} - Y_{.j}) \}
             = (H^{\top} R H)^{-1} H^{\top} R z,   (9)

where R_{kj} = -\sum_{n=1}^{N} H_n W^{(old)}_{.k} (I_{kj} - H_n W^{(old)}_{.j}) is a weighting matrix with I_{kj} an element of the identity matrix, and z = H W^{(old)}_{.j} - R^{-1}(Y_{.j} - W_{.j} H) is an effective target value. Eq. (9) takes the form of a set of normal equations for a weighted least-squares problem. As the weighting matrix R is not constant but depends on the parameter vector W^{(old)}_{.j}, we must apply the normal equations iteratively: each iteration uses the previous weight vector W^{(old)}_{.j} to compute a revised weighting matrix R and regresses the target value z with H W^{(new)}_{.j}.
Therefore, the variable decorrelation can also be applied to GNNs with a softmax classifier to reduce the estimation bias in each iteration.

Figure 2(b) shows the framework of GNN-DVD: we feed the labeled nodes' embeddings \tilde{H}^{(K-1)} into the regularizer L_{DVD}(\tilde{H}^{(K-1)}). Since GCN has the form softmax(\hat{A} H^{(K-1)} W^{(K-1)}), the variable weights of \tilde{H}^{(K-1)} used to differentiate L_{DVD}(\tilde{H}^{(K-1)}) can be computed as \alpha = Var(W^{(K-1)}, axis=1), where Var(\cdot, axis=1) calculates the variance of each row of a matrix; this reflects each variable's weight for classification, analogous to regression coefficients. Note that when incorporating the VD term into GNNs, we do not need to compute the variable weights. The sample weights w learned by the DVD term can then remove the correlation in \tilde{H}^{(K-1)}. We propose to use these sample weights to reweight the softmax loss:

\min_{\theta} L_G = \sum_{l \in Y_L} w_l \cdot \ln( q(\tilde{H}^{(K)}_l) \cdot Y_l ),   (10)

where q(\cdot) is the softmax function, Y_L is the set of labeled node indices, and \theta is the set of GCN parameters. The complexity analysis and the optimization of the whole algorithm are summarized in Appendix D." }, { "heading": "4 EXPERIMENTS", "text": "Datasets. We validate the effectiveness of our method on node classification with two kinds of selection-biased data: label selection bias and small-sample selection bias. For label selection bias, we employ three widely used graph datasets: Cora, Citeseer, and Pubmed (Sen et al., 2008). As in Section 2.1, we use the inductive setting for each graph and construct three bias degrees per graph. For small-sample selection bias, we conduct experiments on the NELL dataset (Carlson et al., 2010), in which each class has only one labeled node for training. Due to the large scale of this dataset, the test nodes easily exhibit distribution shift from the training nodes. Details of the datasets and experimental setup are given in Appendix E. Code and datasets for all experiments can be downloaded from the supplementary material.

Baselines. Under our proposed framework, we incorporate the VD/DVD term into GCN and GAT, denoted GCN-VD/DVD and GAT-VD/DVD (details in Appendix F); GCN and GAT are thus two basic baselines. We compare with GNM-GCN/GAT (Zhou et al., 2019), which considers label selection bias in the transductive setting. Moreover, several state-of-the-art GNNs are included: Chebyshev filter (Kipf & Welling, 2016), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019). Additionally, we compare with Planetoid (Yang et al., 2016) and an MLP trained on the labeled nodes.

Results on Label Selection Bias Datasets. The results are given in Table 1, and we make the following observations. First, the proposed models (i.e., GCN/GAT with VD/DVD terms) achieve the best performance in most cases, which demonstrates the effectiveness of our debiased GNN framework. Second, compared with the base models, our proposed models achieve up to 17.0% performance improvement, with larger gains under heavier bias. Since the major difference between our models and the base models is the VD/DVD regularizer, we can safely attribute the significant improvements to the effective decorrelation term and its seamless integration with GNN models. Moreover, GCN/GAT-DVD achieve better results than GCN/GAT-VD in most cases, validating the importance and effectiveness of differentiating variables' weights in the semi-supervised setting.
Additional experimental results on sample weight analysis and parameter sensitivity can be found in Appendix G.

Results on Small-Sample Selection Bias Dataset. As NELL is a large-scale graph, we cannot run GAT on a single GPU with 16GB of memory. We therefore only run GCN-VD/DVD and compare with representative methods that can run on this dataset. The results are shown in Table 2. First, GCN-VD/DVD achieve significant improvements over GCN, indicating that selection bias can be induced by a small number of labeled nodes and that our proposed method relieves the resulting estimation bias. Moreover, GCN-DVD further improves over GCN-VD by a large margin, which further validates that decorrelating all variable pairs equally is suboptimal and that our differentiated strategy is effective when labeled nodes are scarce. GNM-GCN fails here because GNM relies on the accuracy of the IPW estimator that predicts the probability of a node being selected; in this dataset, the ratio of positive to negative samples is extremely unbalanced, which hurts the IPW estimator." }, { "heading": "5 RELATED WORKS", "text": "In the past few years, Graph Neural Networks (GNNs) (Scarselli et al., 2008; Kipf & Welling, 2016; Veličković et al., 2017; Xu et al., 2019; Klicpera et al., 2019) have become the major technology for capturing patterns encoded in graphs due to their powerful representation capacity. Although current GNNs have achieved great success, when applied in the inductive setting they all assume that training nodes and test nodes follow the same distribution. However, this assumption does not always hold in real applications. GNM (Zhou et al., 2019) first drew attention to the label selection problem in graph learning; it learns an IPW estimator to estimate the probability of each node being selected and uses this probability to reweight the labeled nodes. However, it relies heavily on the accuracy of the IPW estimator, which depends on the label assignment distribution of the whole graph, so it is more suitable for the transductive setting.

To enhance stability under unseen, varied distributions, several works (Shen et al., 2020b; Kuang et al., 2020) have revealed the connection between correlation and prediction stability under model misspecification. However, these methods are built on simple regressions, whereas GNNs have a more complex structure and properties that need to be considered. We also note that Shen et al. (2020a) propose a differentiated variable decorrelation term for linear regression; however, that term requires multiple environments with different correlations between stable and unstable variables to be available at training time, while our method does not." }, { "heading": "6 CONCLUSION", "text": "In this paper, we investigate a general and practical problem: learning GNNs under agnostic selection bias. Selection bias inevitably causes GNNs to learn a biased correlation between aggregation mode and class label, making predictions unstable. We propose a novel differentiated decorrelated GNN, which combines the debiasing technique with GNNs in a unified framework. Extensive experiments demonstrate the effectiveness and flexibility of GNN-DVD."
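To complement the pseudocode in Appendix D, here is a minimal PyTorch sketch of the differentiated decorrelation objective in Eq. (8), with the reparameterization w = ω ⊙ ω used in Algorithm 1 to keep w non-negative. The function and argument names, and the default hyperparameter values, are our own illustrative assumptions.

import torch

def dvd_loss(H, alpha, omega, lam1=1.0, lam2=1.0):
    # H: (n, p) detached labeled-node embeddings; alpha: (p,) variable weights; w = omega**2 >= 0.
    n, p = H.shape
    w = omega ** 2
    loss = torch.zeros(())
    for j in range(p):
        H_rest = H.clone()
        H_rest[:, j] = 0.0                                   # H_{.-j}
        cross = H[:, j] @ (w.unsqueeze(1) * H_rest) / n      # H_{.j}^T Lambda_w H_{.-j} / n
        mean_j = (H[:, j] * w).sum() / n
        mean_rest = H_rest.t() @ w / n
        loss = loss + (alpha @ (cross - mean_j * mean_rest).abs()) ** 2
    # Regularizers of Eq. (8): variance control and weight-scale anchor.
    loss = loss + lam1 / n * (w ** 2).sum() + lam2 * (w.mean() - 1.0) ** 2
    return loss

In an alternating loop as in Algorithm 1, one would, for example, create omega = torch.ones(n, requires_grad=True) and optimize it with torch.optim.Adam([omega], lr=1e-2), alternating with gradient steps on the reweighted softmax loss of Eq. (10).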
}, { "heading": "A DERIVATION OF M̂TEF", "text": "M̂TEF = ∑i∶Ti=twi ⋅ Yi(t) −∑j∶Tj=t−∆twj ⋅ Yj(t −∆t)\n∆t\n= ∑i∶Ti=twi ⋅ (∑k≠t αkXik + αtt + c + ) −∑j∶Tj=t−∆twj ⋅ (∑k≠t αkXjk + αt(t −∆t) + c + )\n∆t\n= ∑i∶Ti=twiαtt −∑j∶Tj=t−∆twjαt(t −∆t)\n∆t\n+ (∑i∶Ti=twi∑k≠t αkXik −∑j∶Tj=t−∆twj∑k≠t αkXik)\n∆t + φ( )\n=MTEF +∑ k≠t\nαk( ∑i∶Ti=twi ⋅Xik −∑j∶Tj=t−∆twj ⋅Xjk\n∆t ) + φ( ),\n(11)\nwhere ∑i∶Ti=t wiαtt−∑j∶Tj=t−∆t wjαt(t−∆t)\n∆t is the ground truth of MTEF , φ( ) means the noise term,\nand φ( ) ≃ 0 with Gaussian noise." }, { "heading": "B PROOF OF THEOREM 2", "text": "ŵ = arg min w\np\n∑ j=1 (αT⋅abs(HT.jΛwH.−j/n−HT.jw/n⋅HT.−jw/n))2+ λ1 n\nn\n∑ i=1 w 2 i +λ2( 1 n\nn\n∑ i=1 wi−1)2\n(12)\nProof For simplicity, we denote L1 = ∑pj=1(αT ⋅ abs(HT.jΛwH.−j/n −HT.jw/n ⋅HT.−jw/n))2, L2 = 1n ∑ n i=1 w 2 i , L3 = ( 1n ∑ n i=1 wi − 1)\n2 and F(w) = L1 + λ1L1 + λ2L2. We first calculate the Hessian matrix of F(w), denoted as He, to prove the uniqueness of the optimal solution ŵ, as follows:\nHe = ∂ 2L1 ∂w2 + λ1 ∂ 2L2 ∂w2 + λ2 ∂ 2L3 ∂w2\nFor the term L1, we can rewrite it as:\nL1 = ∑ j≠k α 2 iα 2 k( 1 n\nn\n∑ i=1 Hi,jHi,kwi − ( 1 n\nn\n∑ i=1 Hi,jwi)( 1 n\nn\n∑ i=1 Hi,kwi))2\n= ∑ j≠k α 2 iα 2 k(( 1 n\nn\n∑ i=1 Hi,jHi,kwi)2 − ( 2 n\nn\n∑ i=1 Hi,jHi,kwi)( 1 n\nn\n∑ i=1 Hi,jwi)( 1 n\nn\n∑ i=1 Hi,kwi)\n+ (( 1n n\n∑ i=1 Hi,jwi)( 1 n\nn\n∑ i=1 Hi,kwi))2)\nAnd when ∣Hi,j∣ ≤ c, for any variable j and k, and ∣wi∣ ≤ c, we have ∂ 2\n∂w2 ( 1 n ∑ni=1 Hi,jHi,kwi)2 = O( 1n2 ), ∂\n2\n∂w2 ( 1 n ∑ni=1 Hi,jwi)( 1n ∑ n i=1 Hi,kwi) = O( 1n2 ) and\n∂ 2\n∂w2 (( 2 n ∑ni=1 Hi,jHi,kwi)( 1n ∑ n i=1 Hi,jwi)( 1n ∑ n i=1 Hi,kwi)) = O( 1n2 ). Then with ∣αi∣ ≤ c, we have α2iα 2 k ∂ 2 ∂w2 ( 1 n ∑ni=1 Hi,jHi,kwi − ( 1n ∑ n i=1 Hi,jwi)( 1n ∑ n i=1 Hi,kwi)) 2 = O( 1 n2\n). L1 is sum of p(p − 1) such terms. Then we have\n∂ 2L1 ∂w2 = O( p 2 n2 ).\nWith some algebras, we can also have ∂\n2L2 ∂w2 = 1 nI,\n∂ 2L3 ∂w2 = 1 n2 11 T ,\nthus,\nHe = O( p\n2\nn2 ) + λ1n I + λ2 n2 11 T = λ1 n I +O( p 2 + λ2 n2 ).\nTherefore, if λ1 n ≫ p 2+λ2 n2 , equivalent to λ1n≫ p 2+λ2, He is an almost diagonal matrix. Hence, He is positive definite (Nakatsukasa, 2010). Then the function F(w) is convex on C = {w ∶ ∣wi∣ ≤ c}, and has unique optimal solution ŵ.\nMoreover, because L1 is our major decorrelation term, we hope L1 to dominate the terms λ1L2 and λ2L3. On C, we have L1 = O(1), L2 = O(1), and α2iα2k( 1n ∑ n i=1 Hi,jHi,kwi − ( 1 n ∑ni=1 Hi,jwi)( 1n ∑ n i=1 Hi,kwi))\n2 = O(1). Thus L1 = O(p2). When p2 ≫ max(λ1, λ2), L1 will dominate the regularization terms L2 and L3." }, { "heading": "C PROOF OF THEOREM 3", "text": "Let Z = {Z1,Z2,⋯,Zp} be p pairwise uncorrelated variables. ∀Zi,Zj ∈ Z, (Z (1) i ,Z (2) i ,⋯,Z (n) i ) and (Z(1)j ,Z (2) j ,⋯,Z (n) j ) are n simple random samples drawn from Zi and Zj respectively, and have same distribution with Zi and Zj . Given a linear aggregation matrix  = (aij),∀s, v ∈ (1, 2,⋯, n), let Y(s)i = ∑nk=1 askZ (k) i and Y (v) j = ∑nl=1 avlZ (l) j , and we have following derivation:\nCov(Y(s)i ,Y (v) j ) = Cov(\nn\n∑ k=1 askZ (k) i ,\nn\n∑ l=1 avlZ (l) j )\n= n\n∑ k=1\nn\n∑ l=1 askavlCov(Z (k) i ,Z (l) j ) =\nn\n∑ k=1\nn\n∑ l=1 askavlδij ,\nwhere δij = 0 when i ≠ j, otherwise δij = 1. Therefore, when i ≠ j, we have Cov(Y (s) i ,Y (v) j ) = 0 and Cov(Yi,Yj) = 0. Extended the conclusion to multiple variable, Y = (Y1,Y2,⋯,Yn) are pairwise uncorrelated. Proof completes." 
}, { "heading": "D PSEUDOCODE OF GNN-DVD", "text": "Algorithm 1: GNN-DVD Algorithm Input :Training graph Gtrain = {A,X,Y}, and indices of labeled nodes YL; Max iteration:maxIter Output :GNN parameter θ and sample weights w Initialization :Let w = ω ⊙ ω and initialize sample weights ω with 1; Initialize GNN’s\nparameters θ with random uniform distribution; Iteration t← 0\n1 while not converged or t < maxIter do 2 Optimize θ(t) to minimize LG; 3 Calculate variable weights α(t) from W(K−1); 4 Optimize ω(t) to minimize LDVD(H̃(K−1)); 5 t = t + 1; 6 end 7 Return: θ and w = ω ⊙ ω\nTo optimize our GNN-DVD algorithm, we propose an iterative method. Firstly, we let w = ω ⊙ ω to ensure non-negativity of w and initialize sample weight ωi = 1 for each sample i and GNN’s parameters θ with random uniform distribution. Once the initial values are given, in each iteration,\nwe fix the sample weights ω and update the GNN’s parameters θ by LG with gradient descent, then compute the confounder weights α from the linear transform matrix W(K−1). With α and fixing the GNN’s parameters θ, we update the sample weights ω with gradient descent to minimize LDVD(H(K−1)). We iteratively update the sample weights w and GNN’s parameters θ until LG converges.\nComplexity Analysis Compared with base model (e.g., GCN and GAT), the mainly incremental time cost is the complexity from DVD term. The complexity of DVD term is O(np2), where n is the number of labeled nodes and p is the dimension of embedding. And it is quite smaller than the base model (e.g., the complexity of GCN is linear to the number of edges)." }, { "heading": "E DATASET DESCRIPTION AND EXPERIMENTAL SETUP", "text": "E.1 DATASET DESCRIPTION\nSome statistics of datasets used in our paper are presented in Table 3, including the number of nodes, the number of edges, the number of classes, the number of features, the bias degree and bias type. For three citation networks, we conduct the biased labeled node selection process to get three degrees of datasets for each dataset to validate the effect of label selection bias, in which each class in each dataset contains 20 labeled nodes in training set and the validation set and test set are same as Yang et al. (2016). For NELL, because it only has a single labeled node per class in training set, the training nodes are hard to cover all the neighborhood distribution happened in the test set. Hence, we use this dataset to validate the effectiveness of our method on the extreme small labeled nodes size bias. The data splits are also same as Yang et al. (2016). A description of each of dataset is given as follows:\n• Cora (Sen et al., 2008) is a citation network of Machine Learning papers that collected from 7 classes:{Theory, Case Based, Reinforcement Learning, Genetic Algorithms, Neural Networks, Probabilistic Methods, Rule Learning }. Nodes represent papers, edges refer to the citation relationship, and features are bag-of-words vectors for each paper.\n• Citeseer (Sen et al., 2008) is a citation network of Machine Learning papers that collected from 6 classes:{Agents, Artificial Intelligence, Database, Information Retrieval, Machine Learning, Human Computer Interaction }. Nodes represent papers, edges refer to the citation relationship, and features are bag-of-words vectors for each paper.\n• Pubmed (Sen et al., 2008) is a citation network from the PubMed database, which contains a set of articles (Nodes) related to diabetes and the citation relationship among them. 
The node features are bag-of-words vectors, and the node labels are the diabetes types studied in the articles.

• NELL (Carlson et al., 2010) is a dataset extracted from a knowledge graph, which is a set of entities connected by directed, labeled edges (relations). Our pre-processing scheme is the same as in Yang et al. (2016), where each entity pair (e1, r, e2) is assigned separate relation nodes r1 and r2 as (e1, r1) and (e2, r2). We use a text bag-of-words representation as the feature vector of the entities.

E.2 EXPERIMENTAL SETUP

As described in Section 2.1, for all datasets, to simulate the agnostic selection bias scenario, we follow the inductive setting in Wu et al. (2019): the validation and test nodes are masked during the training phase, while validation and testing use the whole graph, so that the test nodes are agnostic. For GCN and GAT, we utilize the same two-layer architectures as in their original papers (Kipf & Welling, 2016; Veličković et al., 2017). We use the following hyperparameters for GCN on Cora, Citeseer, and Pubmed: 0.5 (dropout rate), 5e-4 (L2 regularization), and 32 (number of hidden units); and for NELL: 0.1 (dropout rate), 1e-5 (L2 regularization), and 64 (number of hidden units). For GAT on Cora and Citeseer, we use: 8 (first-layer attention heads), 8 (features per head), 1 (second-layer attention head), 0.6 (dropout), and 0.0005 (L2 regularization); for Pubmed: 8 (second-layer attention heads) and 0.001 (L2 regularization), with the other parameters the same as for Cora and Citeseer. For a fair comparison, the GNN part of our model uses the same architecture and hyperparameters as the base model, and we grid-search \lambda_1 and \lambda_2 over {0.01, 0.1, 1, 10, 100}. For the other baselines, we use the optimal hyperparameters reported in the literature for each dataset. For all experiments, we run 10 times with different random seeds and report the average accuracy." }, { "heading": "F EXTEND TO GAT", "text": "We can easily incorporate the VD/DVD term into other GNNs; here we combine it with GAT and leave further extensions to future work. GAT utilizes an attention mechanism to aggregate neighbor information and also follows the linear aggregation and transformation steps. Similar to GCN, the hidden embedding \tilde{H}^{(K-1)} is the input of the VD/DVD term, the variable weights \alpha are calculated from the transformation matrix W^{(K-1)}, and the sample weights w are used to reweight the softmax loss. Note that the original paper uses the same transformation matrix W^{(K-1)} both for transforming embeddings and for learning attention values. Because \alpha represents the importance of each variable for classification, it should be computed from the transformation matrix used for transforming embeddings; hence we use separate matrices for transforming embeddings and for learning attention values. This modification does not change the performance of GAT in our experiments." }, { "heading": "G ADDITIONAL EXPERIMENTS", "text": "G.1 SAMPLE WEIGHT ANALYSIS

Here we analyze the effect of the sample weights w in our model. We compute the amount of correlation in the labeled nodes' embeddings \tilde{H}^{(K-1)} learned by standard GCN and in the weighted embeddings of the same layer learned by GCN-DVD. Note that the weights are the last-iteration sample weights of GCN-DVD. Following Cogswell et al. (2016), the amount of correlation for GCN and GCN-DVD is measured by the Frobenius norm of the cross-covariance matrix computed from the vectors of \tilde{H}^{(K-1)} and the weighted \tilde{H}^{(K-1)}, respectively.
Figure 3 shows the amount of correlation in the unweighted and weighted embeddings. We observe that the embeddings' correlation is reduced on all datasets, demonstrating that the weights learned by GCN-DVD reduce the correlations between embedded variables. Moreover, one can observe that it is hard to reduce the correlation to zero, which further underlines the necessity of differentiating the variables' weights.

G.2 PARAMETER SENSITIVITY

We study the sensitivity of the parameters and report the results of GCN-DVD on the three citation networks in Fig. 4-6. The experimental results show that GCN-DVD is relatively stable over wide ranges of \lambda_1 and \lambda_2 in most cases, indicating the robustness of our model." } ]
2020
null
SP:e42647e1efc0582b03c3fe8f1bb8c73d6403a97c
[ "This paper proposes a two-level hierarchical program synthesizer, Latent Programmer, which first predicts a sequence of latent codes from given input-output examples, and then decodes the latent codes into a program. The sequence of latent codes can be viewed as a high-level synthesis plan, guiding the subsequent low-level synthesis. Latent Programer significantly outperforms RobustFill on string manipulation tasks and achieves state-of-the-art results on Python code generation tasks. " ]
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences. We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient. Discrete latent codes are appealing for this purpose, as they naturally allow sophisticated combinatorial search strategies. The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task. Based on these insights, we introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. We evaluate the Latent Programmer on two domains: synthesis of string transformation programs, and generation of programs from natural language descriptions. We demonstrate that the discrete latent representation significantly improves synthesis accuracy.
[]
[ { "authors": [ "Rajeev Alur", "Rastislav Bodík", "Garvit Juniwal", "Milo M.K. Martin", "Mukund Raghothaman", "Sanjit A. Seshia", "Rishabh Singh", "Armando Solar-Lezama", "Emina Torlak", "Abhishek Udupa" ], "title": "Syntaxguided synthesis", "venue": "In Formal Methods in Computer-Aided Design,", "year": 2013 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Hareesh Bahuleyan", "Lili Mou", "Olga Vechtomova", "Pascal Poupart" ], "title": "Variational attention for sequence-to-sequence models", "venue": null, "year": 2017 }, { "authors": [ "Matej Balog", "Alexander L. Gaunt", "Marc Brockschmidt", "Sebastian Nowozin", "Daniel Tarlow" ], "title": "Deepcoder: Learning to write programs", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Matej Balog", "Rishabh Singh", "Petros Maniatis", "Charles Sutton" ], "title": "Neural program synthesis with a differentiable fixer", "venue": "CoRR, abs/2006.10924,", "year": 2020 }, { "authors": [ "Kai-Wei Chang", "Akshay Krishnamurthy", "Alekh Agarwal", "Daume III", "John Langford" ], "title": "Learning to search better than your teacher", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Execution-guided neural program synthesis", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "III Hal Daumé", "John Langford", "Daniel Marcu" ], "title": "Search-based structured prediction", "venue": "Machine Learning Journal,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Jonathan Uesato", "Surya Bhupatiraju", "Rishabh Singh", "Abdel-rahman Mohamed", "Pushmeet Kohli" ], "title": "Robustfill: Neural program learning under noisy I/O", "venue": "CoRR, abs/1703.07469,", "year": 2017 }, { "authors": [ "Kevin Ellis", "Maxwell I. Nye", "Yewen Pu", "Felix Sosa", "Josh Tenenbaum", "Armando Solar-Lezama" ], "title": "Write, execute, assess: Program synthesis with a REPL", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Kevin Ellis", "Catherine Wong", "Maxwell Nye", "Mathias Sable-Meyer", "Luc Cary", "Lucas Morales", "Luke Hewitt", "Armando Solar-Lezama", "Joshua B. 
Tenenbaum" ], "title": "Dreamcoder: Growing generalizable, interpretable knowledge with wake-sleep bayesian program learning", "venue": "URL https://arxiv.org/abs/2006.08381", "year": 2006 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamín Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a Data-Driven continuous representation of molecules", "venue": "ACS Cent Sci,", "year": 2018 }, { "authors": [ "Sumit Gulwani" ], "title": "Automating string processing in spreadsheets using input-output examples", "venue": "In PoPL’11, January 26-28,", "year": 2011 }, { "authors": [ "Sumit Gulwani", "Oleksandr Polozov", "Rishabh Singh" ], "title": "Program synthesis", "venue": "Foundations and Trends in Programming Languages,", "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Łukasz Kaiser", "Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Samy Bengio", "Jakob Uszkoreit", "Noam Shazeer" ], "title": "Fast decoding in sequence models using discrete latent variables", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "John R Koza" ], "title": "Genetic programming as a means for programming computers by natural selection", "venue": "Statistics and computing,", "year": 1994 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations,", "year": 2018 }, { "authors": [ "Woosuk Lee", "Kihong Heo", "Rajeev Alur", "Mayur Naik" ], "title": "Accelerating search-based program synthesis using learned probabilistic models", "venue": "In Conference on Programming Language Design and Implementation (PLDI),", "year": 2018 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Zohar Manna", "Richard J. Waldinger" ], "title": "Toward automatic program synthesis", "venue": "Commun. ACM,", "year": 1971 }, { "authors": [ "Yishu Miao", "Phil Blunsom" ], "title": "Language as a latent variable: Discrete generative models for sentence compression", "venue": "CoRR, abs/1609.07317,", "year": 2016 }, { "authors": [ "Maxwell I. Nye", "Luke B. Hewitt", "Joshua B. 
Tenenbaum", "Armando Solar-Lezama" ], "title": "Learning to infer program sketches", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: A method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "title": "Neuro-symbolic program synthesis", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Ratish Puduppully", "Li Dong", "Mirella Lapata" ], "title": "Data-to-text generation with content selection and planning", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "CoRR, abs/1401.4082,", "year": 2014 }, { "authors": [ "Stephane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to No-Regret online learning", "venue": "Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2011 }, { "authors": [ "Aurko Roy", "Ashish Vaswani", "Arvind Neelakantan", "Niki Parmar" ], "title": "Theory and experiments on vector quantized autoencoders", "venue": null, "year": 2018 }, { "authors": [ "Eric Schkufza", "Rahul Sharma", "Alex Aiken" ], "title": "Stochastic superoptimization", "venue": "In Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2013 }, { "authors": [ "Armando Solar-Lezama", "Liviu Tancau", "Rastislav Bodík", "Sanjit A. Seshia", "Vijay A. Saraswat" ], "title": "Combinatorial sketching for finite programs", "venue": "In Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2006 }, { "authors": [ "Phillip D Summers" ], "title": "A methodology for lisp program construction from examples", "venue": "Journal of the ACM (JACM),", "year": 1977 }, { "authors": [ "Abhishek Udupa", "Arun Raghavan", "Jyotirmoy V Deshmukh", "Sela Mador-Haim", "Milo M K Martin", "Rajeev Alur" ], "title": "TRANSIT: Specifying protocols with concolic snippets", "venue": "In Conference on Programming Language Design and Implementation (PLDI),", "year": 2013 }, { "authors": [ "Aäron van den Oord", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Neural discrete representation learning", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Ashwin K. Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David J. Crandall", "Dhruv Batra" ], "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Y. Wan", "Z. Zhao", "M. Yang", "G. Xu", "H. Ying", "J. Wu", "P.S. 
Yu" ], "title": "Improving automatic source code summarization via deep reinforcement learning", "venue": "In 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE),", "year": 2018 }, { "authors": [ "Bolin Wei", "Ge Li", "Xin Xia", "Zhiyi Fu", "Zhi Jin" ], "title": "Code generation as a dual task of code summarization", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Amit Zohar", "Lior Wolf" ], "title": "Automatic program synthesis of long programs with a learned garbage collector", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Our focus in this paper is program synthesis, one of the longstanding grand challenges of artificial intelligence research (Manna & Waldinger, 1971; Summers, 1977). The objective of program synthesis is to automatically write a program given a specification of its intended behavior, such as a natural language description or a small set of input-output examples. Search is an especially difficult challenge within program synthesis (Alur et al., 2013; Gulwani et al., 2017), and many different methods have been explored, including top-down search (Lee et al., 2018), bottom up search (Udupa et al., 2013), beam search (Devlin et al., 2017), and many others (see Section 2).\nWe take a different philosophy: Can we learn a representation of programs specifically to help search? A natural way of representing a program is as a sequence of source code tokens, but the synthesis task requires searching over this representation, which can be difficult for longer, more complex programs. A programmer often starts by specifying high-level components of a program as a plan, then fills in the details of each component i.e. in string editing, a plan could be to extract the first name, then the last initial. We propose to use a sequence of latent variable tokens, called discrete latent codes, to represent such plans. Instead of having a fixed dictionary of codes, we let a model discover and learn what latent codes are useful and how to infer them from specification.\nOur hypothesis is that a discrete latent code – a sequence of discrete latent variables – can be a useful representation for search (van den Oord et al., 2017; Roy et al., 2018; Kaiser et al., 2018). This is because we can employ standard methods from discrete search, such as beam search, over a compact space of high-level plans and then over programs conditioned on the plan, in a two-level procedure. We posit that the high-level search can help to organize the search over programs. In the string editing example earlier, a model could be confident that it needs to extract the last initial, but is less sure about whether it needs to extract a first name. By changing one token in the latent code, two-level search can explore alternative programs that do different things in the beginning. Whereas in traditional single-level search, the model would need to change multi-token prefixes of the alternatives, which is difficult to achieve in limited budget search.\nWe propose the Latent Programmer, a program synthesis method that uses learned discrete representations to guide search via a two-level synthesis. The Latent Programmer is trained by a self-\nsupervised learning principle. First a discrete autoencoder is trained on a set of programs to learn discrete latent codes, and then an encoder is trained to map the specification of the synthesis task to these latent codes. Finally, at inference time, Latent Programmer uses a two-level search. Given the specification, the model first produces a L-best list of latent codes from the latent predictor, and uses them to synthesize potential programs. On two different program synthesis domains, we find empirically that the Latent Programmer improves synthesis accuracy by over 10% compared to standard sequence-to-sequence baselines as RobustFill (Devlin et al., 2017). We also find that our method improves diversity of predictions, as well as accuracy on long programs." 
}, { "heading": "2 BACKGROUND", "text": "Problem Setup The goal in program synthesis is to find a program in a given language that is consistent with a specification. Formally, we are given a domain specific language (DSL) which defines a space Y of programs. The task is described by a specification X ∈ X and is solved by some, possibly multiple, unknown program(s) Y ∈ Y . For example, each specification can be a set of input/output (I/O) examples denoted X = {(I1, O1), . . . (IN , ON )}. Then, we say that we have solved specification X if we found a program Y which correctly solves all the examples: Y (Ii) = Oi, ∀i = 1, . . . , N . As another example, each specification can be a natural language description of a task, and the corresponding program implements said task. An example string transformation synthesis task with four I/O examples together with a potential correct program in the string transformation DSL is shown in Figure 1.\nVector Quantization Traditionally, neural program synthesis techniques process the input specification as a set of sequences and predicts the output program token-by-token (Devlin et al., 2017). In this work, we present a new approach for synthesis that performs structured planning in latent space using a discrete code. We conjecture that programs have an underlying discrete structure; specifically, programs are compositional and modular with components that get reused across different problems. Our approach leverages this structure to guide the search over large program spaces. Following works in computer vision (van den Oord et al., 2017; Roy et al., 2018), we discover such discrete structure by using a Vector Quantized Variational Autoencoder (VQ-VAE). VQ-VAEs work by feeding the intermediate representation of an autoencoder through a discretization bottleneck (van den Oord et al., 2017). For completeness, we provide background on VQ-VAEs below.\nIn a VQ-VAE, latent codes are drawn from a discrete set of learned vectors c ∈ RK×D, or codebook. Each element in the codebook can be viewed as either a token with id k ∈ [K] or as an embedding ck ∈ RD. To generate the discrete codes, the continuous autoencoder output e is quantized via nearest-neighbor lookup into the codebook. Formally, the token id qk(e) and quantized embedding qc(e) are defined as\nqc(e) = cqk(e) where qk(e) = arg min k∈[K] ||e− ck||2. (1)\nFor input x, the training loss for a VQ-VAE consists of: a reconstruction loss for the encoder-decoder weights, a codebook loss that encourages codebook embeddings to be close to the continuous vectors which are quantized to them, and a commitment loss that encourages the encoded input ec(x) to \"commit\" to codes i.e. not switch which discrete code it is quantized to. The loss is given by,\nL(c, θ, φ) = log pθ (x | qc(ecφ(x))) + ||sg(ecφ(x))− c)||22 + β||sg(c)− ecφ(x)||22, (2)\nwhere θ, φ are the parameters of the decoder and encoder, respectively, sg(·) is the stop gradient operator that fixes the operand from being updated by gradients, and β controls the strength of the commitment loss. To stabilize training, van den Oord et al. (2017) also proposed removing the codebook loss and set the codebook to an exponential moving average (EMA) of encoded inputs." }, { "heading": "3 SYNTHESIS WITH DISCRETE LATENT VARIABLES", "text": "We propose a two-level hierarchical approach to program synthesis that first performs high-level planning over an intermediate sequence, which is then used for fine-grained generation of the program. 
In our approach, a top-level module first infers a latent code, which is then used by a low-level module to generate the final program." }, { "heading": "3.1 HIERARCHY OF TWO TRANSFORMERS", "text": "Our proposed Latent Programmer (LP) architecture consists of two Transformers in a two-level structure. The architecture comprises two modules: a latent predictor, which produces a latent code that can be interpreted as a coarse sketch of the program, and a latent program decoder, which generates a program conditioned on the code. The latent code consists of discrete latent variables as tokens, which we arbitrarily denote TOK_1, ..., TOK_K, and whose meanings are assigned during training. Both components use a Transformer architecture due to its impressive performance on natural language tasks (Vaswani et al., 2017).

To help the model assign useful meanings to the latents, we also leverage a program encoder, which is used only during training. The program encoder ec(Y) encodes the true program Y = [y_1, y_2, ..., y_T] into a shorter sequence of discrete latent variables Z = [z_1, z_2, ..., z_S], represented as codebook entries; that is, each z_i ∈ R^D is one of K entries in a codebook c. The latent sequence serves as the ground-truth high-level plan for the task. The function ec(Y) is a Transformer encoder followed by a stack of convolutions of stride 2, each halving the length of the sequence. We apply the convolution ℓ times, which reduces a T-length program to a latent sequence of length ⌈T/2^ℓ⌉. This provides temporal abstraction, since high-level planning actions are made only every 2^ℓ steps. In summary, the program encoder is given by

ec(Y) ← h_ℓ;  h_m ← Conv(h_{m−1}) for m ∈ 1 ... ℓ;  h_0 ← TransformerEncoder(Y).   (3)

Here TransformerEncoder(·) applies a stack of self-attention and feed-forward units to the input embeddings via a residual path, as described in detail by Vaswani et al. (2017). This will be used, along with the latent program decoder, as an autoencoder during training (see Section 3.2).

The latent predictor lp(X) autoregressively predicts a coarse latent code lp(X) ∈ R^{S×K}, conditioned on the program specification X. The latent predictor outputs a sequence of probabilities, which can be decoded using search algorithms such as beam search to generate a predicted latent code Z'. This differs from the program encoder, which outputs a single sequence Z, because we use the latent predictor to organize search over latent codes: at test time, we obtain an L-best list of latent token sequences from lp(X). The latent predictor is a stack of Transformer blocks with the specification X as input.

Similarly, the latent program decoder d(Z, X) defines an autoregressive distribution over program tokens given the specification X and the coarse plan Z ∈ R^{S×K}, represented as codebook entries. The decoder is a Transformer that jointly attends to the latent sequence and the program specification. This is performed via two separate attention modules, whose outputs are concatenated into the hidden unit. Formally, given a partially generated program Y' = [y'_1, y'_2, ..., y'_{t−1}] and the encoded specification E = TransformerEncoder(X), the latent program decoder computes

h_t = Concat(TransformerDecoder(Y', E)_{t−1}, TransformerDecoder(Y', Z)_{t−1}),   (4)

where TransformerDecoder(x, y) denotes a Transformer decoder applied to outputs y while attending to the input encoding x, and the subscript indexes an entry in the resulting output sequence.
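A minimal PyTorch-style sketch of the dual-attention step in equation 4 might look as follows; the module and attribute names are our own, causal masking is omitted for brevity, and the layer count is a placeholder rather than the paper's configuration.

import torch
import torch.nn as nn

class LatentProgramDecoderStep(nn.Module):
    def __init__(self, d_model, nhead, vocab_size):
        super().__init__()
        layer = lambda: nn.TransformerDecoderLayer(d_model, nhead)
        self.spec_decoder = nn.TransformerDecoder(layer(), num_layers=3)
        self.latent_decoder = nn.TransformerDecoder(layer(), num_layers=3)
        self.out = nn.Linear(2 * d_model, vocab_size)

    def forward(self, y_prev, spec_enc, latent_codes):
        # y_prev: (t-1, B, d); spec_enc: (S_x, B, d); latent_codes: (S_z, B, d)
        h_spec = self.spec_decoder(y_prev, spec_enc)[-1]        # attend to specification E
        h_lat = self.latent_decoder(y_prev, latent_codes)[-1]   # attend to latent plan Z
        h_t = torch.cat([h_spec, h_lat], dim=-1)                # concatenation as in equation 4
        return torch.softmax(self.out(h_t), dim=-1)             # d_t(Z, X)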
The distribution over output token k is then given by d_t(Z, X) = Softmax(W h_t), where W is a learned parameter matrix. The latent program decoder thus defines a distribution over programs autoregressively as p(Y | Z, X) = ∏_t p(y_t | y_{<t}, Z, X), where p(y_t | y_{<t}, Z, X) = d_t(Z, X). When X consists of multiple I/O examples, each example is encoded as E_i = TransformerDecoder(I_i, O_i). A separate hidden state per I/O example is then computed following equation 4, followed by a late max-pool to obtain the final hidden state. Note that the program encoder and latent program decoder together make up a VQ-VAE model of programs, with additional conditioning on the specification.

The complete LP architecture is summarized in Figure 2, and an end-to-end example run of our architecture is shown in Figure 4." }, { "heading": "3.2 TRAINING", "text": "Our LP model performs program synthesis using a two-level search, first over latent sequences and then over programs. Given a program specification, we want to train the latent predictor to produce an informative latent sequence from which the latent program decoder can accurately predict the true program. Our training loss for the LP model consists of three supervised objectives.

The autoencoder loss ensures that the latent codes contain information about the program. It is a summation of the reconstruction loss between the autoencoder output d(q_c(ec(Y)), X) and the true program Y, and a commitment loss that trains the encoder output ec(Y) to be close to the codebook c. As in Roy et al. (2018), the codebook is not trained but set to the EMA of the encoder outputs. This loss is similar to the VQ-VAE loss in equation 2, but additionally depends on the specification X. This objective trains the latent tokens in the codebook to correspond to informative high-level actions, and ensures that the latent program decoder can accurately recover the true program given the specification and a plan comprising such actions.

The latent prediction loss ensures that latent codes can be predicted from specifications. It is a reconstruction loss between the distribution over latents predicted from the specification, lp(X), and the autoencoded latents q_k(ec(Y)) from the ground-truth program. This is a self-supervised approach that treats the autoencoded latent sequence as the ground-truth high-level plan and trains the latent predictor to generate this plan using only the program specification X. Note that the program encoder is used only during training; at test time ec(Y) is unknown, so the LP model uses lp(X) instead.

Finally, the end-to-end loss ensures that programs can be predicted from specifications. This is especially important because in the reconstruction loss, the latent program decoder receives latent codes from the autoencoded latent sequence ec(Y), whereas at test time the decoder receives a latent code from the latent predictor lp(X). This can cause mistakes in the generated program, since the decoder has never been exposed to noisy outputs of the latent predictor. The end-to-end loss alleviates this issue: it is the probability of the correct program Y when predicted from a soft-quantized latent code, given by lp(X)^T c.
This has the added benefit of allowing gradients to flow through the latent predictor, training it in an end-to-end way.

In summary, the full loss for a training instance is

L(c, θ, φ, ψ) = log p_θ(Y | q_c(ec_φ(Y)), X) + β ||sg(c) − ec_φ(Y)||_2^2   [autoencoder]
             + log p(q_k(ec_φ(Y)) | lp_ψ(X))   [latent prediction]
             + log p_θ(Y | lp_ψ(X)^T c, X)   [end-to-end]   (5)

where θ, φ, and ψ denote the parameters of the latent program decoder, the program encoder, and the latent predictor, respectively.

Furthermore, for the first 10K steps of training, we feed embeddings of the ground-truth program Y, averaged over every 2^ℓ tokens, as the latent sequence instead of ec(Y). This pre-training ensures that, initially, the latent code carries some information about the program, so that the attention to the code
Our work is closely related in spirit but fundamentally differs in two ways: (1) our sketches are comprised of a general latent vocabulary that is learned in a simple, self-supervised fashion, and (2) our method avoids enumerative search, which is prohibitively expensive for large program spaces. There is also a line of work that deals with learning to process partial programs in addition to the specification. In execution-guided program synthesis, the model guides iterative extensions of the partial programs until a matching one is found (Zohar & Wolf, 2018; Chen et al., 2019; Ellis et al., 2019). Balog et al. (2020) of late proposed a differentiable fixer that is trained to iteratively edit incorrect programs. We treat these works as complementary, and can be combined with ours to refine predictions.\nDiscrete Latent Bottlenecks Variational autoencoders (VAE) were first introduced using continuous latent representations (Kingma & Welling, 2014; Rezende et al., 2014). Several promising approaches were proposed to use discrete bottlenecks instead, such as continuous relaxations of categorical distributions i.e. the Gumbel-Softmax reparametrization trick (Jang et al., 2017; Maddison et al., 2017). Recently, VQ-VAEs using nearest-neighbor search on a learned codebook (see Section 2 for more details) achieved impressive results almost matching continuous VAEs (van den Oord et al., 2017; Roy et al., 2018). Discrete bottlenecks have also been used for sentence compression (Miao & Blunsom, 2016) and text generation (Puduppully et al., 2019), but these works does not learn the semantics of the latent codes, like ours does. Within the domain of synthesis of chemical molecules, Gómez-Bombarelli et al. (2018) have applied Bayesian optimization within a continuous latent space to guide this structured prediction problem. Learning to search has also been considered in the structured prediction literature (Daumé et al., 2009; Chang et al., 2015; Ross et al., 2011), but to our knowledge, these works do not consider the problem of learning a discrete representation for search. Notably, VQ-VAE methods have been successfully used to encode natural language into discrete codes for faster decoding in machine translation (Kaiser et al., 2018). Our work similarly uses a VQ-VAE to learn a discrete code, but we use the learned code in a two-level search that improves accuracy. To do so, we propose a model that is autoregressive on both the latent and program space, and perform two-level beam search on latent codes and programs. The key novelty behind our work is that first searching over a learned discrete latent space can assist search over the complex program space; using a VQ-VAE as Kaiser et al. (2018) did enables us to do so." }, { "heading": "5 EXPERIMENTS", "text": "We now present the results of evaluating our Latent Programmer model in two test domains: synthesis of string transformation programs from examples and code generation from natural language descriptions. We compare our LP model against several strong baselines.\nRobustFill [LSTM] is a seq-to-seq LSTM with attention on the input specification, and trained to autoregressively predict the true program. The architecture is comparable to the RobustFill model designed originally for the string transformation tasks in our first domain (Devlin et al., 2017), but easily generalizes to all program synthesis domains. 
We detail the architecture in Appendix A.

RobustFill [Transformer] alternatively uses a Transformer architecture, equivalent in architecture to the latent predictor in our LP model, also trained to autoregressively predict the program. Transformers have been found to perform much better than LSTMs on language tasks because they process the entire input as a whole and have no risk of forgetting past dependencies (Vaswani et al., 2017). This baseline can also be considered an ablation of our LP model without any latent codes.

The central novelty of our work is in realizing that, by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE. In both cases the latent space is continuous, and well-known combinatorial search algorithms such as beam search cannot search over the space.

Latent RobustFill [AE] replaces the VQ-VAE component of our LP model with a generic autoencoder. This makes the latent code a sequence of continuous embeddings. The latent prediction loss in equation 5 is simply replaced by a squared error between the outputs of the autoencoder and the latent predictor. Performing beam search over the continuous latent space is intractable, so during inference we generate only one latent sequence per task; this is equivalent to the two-level beam search described earlier with L = 1. In addition, because we cannot define an end-of-sequence token in the latent space, this baseline must be given knowledge of the true program length even during inference, and always generates a latent sequence of length ⌈T/2^ℓ⌉.

Latent RobustFill [VAE] substitutes the VQ-VAE component with a VAE (Kingma & Welling, 2014). This again produces a continuous latent space, but one regularized to be distributed approximately as a standard Gaussian. Performing beam search is still intractable, but we can sample L latent sequences from the Gaussians determined by the VAE and perform beam search over programs afterwards. Again, we assume that the true program length is known during inference." }, { "heading": "5.1 STRING TRANSFORMATION", "text": "The first test domain is a string transformation DSL frequently studied in the program synthesis literature (Parisotto et al., 2017; Devlin et al., 2017; Balog et al., 2020). Tasks in this domain involve finding a program that maps a set of input strings to a corresponding set of outputs. Programs in the DSL are a concatenation of expressions that perform regex-based string transformations (see Appendix A for the full DSL).

We perform experiments on a synthetic dataset generated by sampling programs from the DSL and then the corresponding I/O examples, using a heuristic similar to the one used in NSPS (Parisotto et al., 2017) and RobustFill (Devlin et al., 2017) to ensure a nonempty output for each input. We consider programs comprising a concatenation of up to 10 expressions and limit the lengths of strings in the I/O examples to at most 100 characters. All models have an embedding size of 128 and a hidden size of 512, and the attention modules consist of 3 stacked layers with 4 heads each. For the LP model, we used a latent compression factor ℓ = 2 and vocabulary size K = 40. The models are trained on roughly 25M tasks and evaluated on 1K held-out ones.

In Table 1, we report the accuracy (the number of times a program was found that conforms to the I/O examples) of our method against the baselines. Across all beam sizes, our LP model performed 5-7
Across all beam sizes, our LP model performed 5-7\npercentage points better (over 10% of baseline accuracy) than the next best model. From our ablative study, we see that having two-level using discrete latent codes was important, as the baselines over continuous latent spaces performed comparably to the traditional RobustFill model.\nRecently, SketchAdapt also proposed two-level search (Nye et al., 2019), but in the top-level, it performs beam search over program space augmented with a HOLE token. In constrast, our method searches over a learned, general latent space. During low-level search, SketchAdapt enumerates partial programs to co-opt the HOLE tokens using a learned syn-\nthesizer similar to DeepCoder (Balog et al., 2017), whereas we again perform beam search. To compare the two, we evaluate our LP model on samples generated according to Nye et al. (2019), which slightly modifies the DSL to increase the performance of synthesizers, and report results in Table 2. Since enumeration can be done more quickly than beam search, we let SketchAdapt synthesize 3, 000 programs using B top-level beams, whereas our LP model can only generate B programs. Our LP model is able to outperform SketchAdapt even in the modified DSL." }, { "heading": "5.2 ANALYSIS", "text": "We conduct extensive analysis to better understand our LP model in terms of learning, the ability to generate long programs, and diversity in the beams. All results are reported with beam size B = 10.\nModel Size Our LP model uses an additional latent code for decoding, which introduces additional parameters into the model than the baseline RobustFill model. To make a fair comparison, we vary the embedding and hidden dimension of all of our evaluated methods, and compare the effect of the number of trainable parameters on the accuracy. Figure 3(a) shows that all methods respond well to an increase in model size. Nevertheless, we see that even when normalized for size, our LP model outperforms baselines by a significant margin.\nProgram Length Prior work has shown that program length is a reasonable proxy measure of problem difficulty. We hypothesize that using latent codes is most beneficial when generating long programs. Figure 3(b) shows how ground-truth program length affects the accuracy of our LP model compared to RobustFill, which lacks latent codes. As expected, accuracy decreases with problem complexity. Perhaps surprisingly, though, we see a large improvement in our LP model’s ability to handle more complex problems. In Figure 4, we also show an illustrative example in the domain where our LP model found a valid program whereas the RobustFill model did not. In this example, the ground-truth program was long but had a repetitive underlying structure. Our LP model correctly detected this structure, as evidenced by the predicted latent sequence. We show additional examples" }, { "heading": "LP GetAll_NUMBER | Const(:) | GetToken_ALL_CAPS_1 | Const(.) | GetToken_ALL_CAPS_2 | Const(.) | GetToken_ALL_CAPS_-1 | Const(.)", "text": "in Figure 9 of Appendix B. It is important to note that our method allows tokens in the discrete latent code to have arbitrary meaning, yielding rich and expressive latent representations. However, the trade-off is that because the latent codes were not grounded, it is difficult to objectively interpret the latent codes. 
Latent Beam Size In two-level beam search of beam size B, first L latent sequences are decoded, and then $\lfloor B/L \rfloor$ programs are decoded per latent sequence. The latent beam size L controls how much search is performed over the latent space. We theorize that a higher L will produce more diverse beams; however, too high an L can be harmful, causing the search to miss programs with high joint log-probability. We show the effect of latent beam size on both the beam-10 accuracy and a proxy measure of diversity. Following prior work, we measure diversity by counting the number of distinct n-grams in the beams, normalized by the total number of tokens to bias against long programs (Vijayakumar et al., 2018). We report the results varying L for B = 10 in Figure 5(a). As expected, increasing the latent beam size L improves the diversity of output programs, but an excessively large L harms the final accuracy. An important observation is that the L = 1 case, where one latent code is used to decode all programs, performs similarly to the baseline RobustFill. In this extreme, no search is performed over the latent space, and our proposed two-level search reduces to searching over programs only; this is further evidence that explicitly searching on two levels is critical to the LP model's improved performance.

Latent Length and Vocabulary Size Since the discretization bottleneck is a critical component in generating latent codes in our LP model, we also investigate its performance under different hyperparameter settings. Two important variables for the VQ are the latent length compression factor c and the size of the latent vocabulary K. If c is too small, the latent space becomes too large to search; on the other hand, too large a c can mean that individual latent tokens cannot encode the information needed to reconstruct the program. Similarly, we expect that too small a vocabulary K can limit the expressiveness of the latent space, but too large a K can make predicting the correct latent code too difficult. We confirm this in our evaluations in Figure 5(b) and Figure 5(c).

5.3 PYTHON CODE GENERATION

Our next test domain is a Python code generation (CG) task, which involves generating code for a function that implements a natural-language specification. The dataset used consists of 111K Python examples, each consisting of a docstring and a corresponding code snippet, collected from GitHub (Wan et al., 2018). An example docstring and program from the dataset are shown in Figure 6.

We used a language-independent tokenizer (Kudo & Richardson, 2018) trained jointly on the data, and processed the dataset into a vocabulary of 35K sub-word tokens. Furthermore, following Wei et al. (2019), we set the maximum length of the programs to be 150 tokens, resulting in 85K examples. Across all models, we set the embedding size to 256 and the hidden size to 512, and the attention layers consist of 6 stacked layers with 16 heads each, similar to models used in neural machine translation (Vaswani et al., 2017). For the LP model, we used a latent compression factor c = 2 and vocabulary size K = 400 after a hyperparameter search. The models are evaluated on 1K held-out examples.

We initially found that it was difficult for the program encoder to detect latent sequence structure in the raw ground-truth programs due to noise in variable names. To remedy this, we used an abstract syntax tree (AST) parser on the ground-truth programs to replace the i-th function argument and the i-th variable appearing in the program with the tokens ARG_i and VAR_i, respectively. This canonicalization was only used in training the program encoder and did not impact evaluation.
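A minimal sketch of this canonicalization using Python's built-in ast module follows; it covers only function arguments and simple assignments, so it is an approximation of the preprocessing described above rather than the exact implementation.

```python
import ast

class Canonicalizer(ast.NodeTransformer):
    """Rename the i-th function argument to ARG_i and the i-th locally
    bound variable to VAR_i, removing superficial naming noise."""
    def __init__(self):
        self.mapping, self.n_args, self.n_vars = {}, 0, 0

    def visit_arg(self, node):
        # Function arguments (ast.arg nodes) become ARG_0, ARG_1, ...
        self.mapping[node.arg] = f"ARG_{self.n_args}"
        self.n_args += 1
        node.arg = self.mapping[node.arg]
        return node

    def visit_Name(self, node):
        # The first store of a new name defines a variable: VAR_0, VAR_1, ...
        if isinstance(node.ctx, ast.Store) and node.id not in self.mapping:
            self.mapping[node.id] = f"VAR_{self.n_vars}"
            self.n_vars += 1
        node.id = self.mapping.get(node.id, node.id)
        return node

def canonicalize(source: str) -> str:
    # ast.unparse requires Python 3.9+.
    return ast.unparse(Canonicalizer().visit(ast.parse(source)))

print(canonicalize("def add(x, y):\n    total = x + y\n    return total"))
# -> def add(ARG_0, ARG_1):
#        VAR_0 = ARG_0 + ARG_1
#        return VAR_0
```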
Table 3: BLEU score on the code generation task (best score among B output beams).

Method                      B = 1    B = 10   B = 100
Base (Wei et al., 2019)     10.4     -        -
Dual (Wei et al., 2019)     12.1     -        -
RobustFill [LSTM]           11.4     14.8     16.0
RobustFill [Transformer]    12.1     15.5     17.2
Latent Programmer           14.0     18.6     21.3

We evaluate performance by computing the best BLEU score among the output beams (Papineni et al., 2002). We computed BLEU as the geometric mean of n-gram matching precision scores up to n = 4. Table 3 shows that our LP model outperforms the baselines. From the results, it can be seen that this is a difficult task, which may be due to the ambiguity in specifying code from a short docstring description. As evidence, we additionally include results from a recent work that proposed seq-to-seq CG models on the same data and reported performance similar to our baselines (Wei et al., 2019). These results show that the improvements due to the LP model persist even in difficult CG domains. For example docstrings and code generated by the LP model, refer to Figure 9 in Appendix B.

6 CONCLUSION

In this work we proposed the Latent Programmer (LP), a novel neural program synthesis technique that leverages structured latent sequences to guide search. The LP model consists of a latent predictor, which maps the input specification to a sequence of discrete latent variables, and a latent program decoder, which generates a program token-by-token while attending to the latent sequence. The latent predictor was trained via a self-supervised method in which a discrete autoencoder of programs was learned using a discrete bottleneck, specifically a VQ-VAE (van den Oord et al., 2017), and the latent predictor learns to predict the autoencoded sequence as if it were the ground truth. During inference, the LP model first searches in latent space for discrete codes, then conditions on those codes to search over programs. Empirically, we showed that the Latent Programmer outperforms state-of-the-art baselines such as RobustFill (Devlin et al., 2017), which ignore latent structure. Exciting future avenues of investigation include achieving better performance by grounding the latent vocabulary and generalizing our method to other tasks in natural language and structured prediction.

A EXTENDED DESCRIPTION OF DSL AND ROBUSTFILL MODEL

The DSL for string transformations we use is the same as in RobustFill (Devlin et al., 2017), and is shown in Figure 7. The top-level operator for programs in the DSL is a Concat operator that concatenates a random number (up to 10) of expressions e_i. Each expression e can be a substring expression f, a nesting expression n, or a constant string c. A substring expression can return either the substring between left index k_1 and right index k_2, or the substring between the i_1-th occurrence of regex r_1 and the i_2-th occurrence of regex r_2. The nesting expressions also return substrings of the input, such as extracting the i-th occurrence of a regex, but can also be composed with existing substring or nesting expressions for more complex string transformations.
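To make the semantics concrete, below is a toy interpreter for a small fragment of the DSL. The regex classes and the space-joining behavior of GetAll are simplified assumptions, so this illustrates the flavor of Figure 7 rather than reproducing it; the sample program is an abbreviated version of the Figure 4 prediction.

```python
import re

# Simplified token classes; the full DSL in Figure 7 defines many more.
REGEX = {"NUMBER": r"\d+", "ALL_CAPS": r"[A-Z]+", "WORD": r"[A-Za-z]+"}

def get_token(s, cls, i):
    """GetToken_<cls>_<i>: the i-th match of a regex class (1-indexed;
    negative i counts from the end, as in GetToken_ALL_CAPS_-1)."""
    matches = re.findall(REGEX[cls], s)
    return matches[i - 1] if i > 0 else matches[i]

def get_all(s, cls):
    """GetAll_<cls>: all matches of a regex class (joined by spaces here)."""
    return " ".join(re.findall(REGEX[cls], s))

def run_program(expressions, s):
    """Top-level Concat: evaluate each expression on s and concatenate."""
    return "".join(expr(s) for expr in expressions)

program = [
    lambda s: get_all(s, "NUMBER"),          # GetAll_NUMBER
    lambda s: ":",                           # Const(:)
    lambda s: get_token(s, "ALL_CAPS", 1),   # GetToken_ALL_CAPS_1
    lambda s: ".",                           # Const(.)
]
print(run_program(program, "id 23 ATL 45 NYC"))  # prints "23 45:ATL."
```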
RobustFill Model RobustFill (Devlin et al., 2017) is a seq-to-seq neural network with an encoder-decoder architecture, where the encoder computes a representation of the input e(X), and the decoder autoregressively generates the output given the source representation, i.e., the conditional likelihood of $Y = [y_1, \ldots, y_T]$ decomposes as

$p(Y \mid X) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, X).$

In RobustFill, the probability of decoding each token $y_t$ is given by $p(y_t \mid y_{<t}, X) = \mathrm{Softmax}(W(h_t))$, with W being the projection onto logits, or unnormalized log probabilities. The hidden representation $h_t$ is an LSTM hidden unit given by

$E_t = \mathrm{Attention}(h_{t-1}, e(X)), \qquad h_t = \mathrm{LSTM}(h_{t-1}, E_t).$

Here e(X) is the sequence of hidden states after processing the specification with an LSTM encoder, and $\mathrm{Attention}(Q, V)$ denotes scaled dot-product attention with query Q and key-value sequence V (Bahdanau et al., 2016). In the case of X being multiple I/O examples, the RobustFill model of Devlin et al. (2017) uses double attention,

$s^{I}_{t,i} = \mathrm{Attention}(h_{t-1}, e(I_i)),$
$s^{O}_{t,i} = \mathrm{Attention}(\mathrm{Concat}(h_{t-1}, s^{I}_{t,i}), e(O_i)),$
$h_{t,i} = \mathrm{LSTM}(h_{t-1}, \mathrm{Concat}(s^{I}_{t,i}, s^{O}_{t,i})) \qquad \forall\, 1 \le i \le N,$

and hidden states are pooled across examples before being fed into the final softmax layer:

$h_t = \mathrm{maxpool}_{1 \le i \le N} \tanh(V(h_{t,i})),$

where V is another projection.
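As a concrete illustration of the double-attention step, here is a PyTorch sketch. Note that nn.MultiheadAttention adds learned query/key/value projections on top of plain scaled dot-product attention, and all module and variable names are our own illustrative choices, not the original implementation.

```python
import torch
import torch.nn as nn

class DoubleAttentionStep(nn.Module):
    """One decoding step of double attention over N I/O examples,
    followed by tanh + max-pooling across examples (illustrative)."""
    def __init__(self, hidden):
        super().__init__()
        self.cell = nn.LSTMCell(2 * hidden, hidden)
        self.attn_i = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.attn_o = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.concat_proj = nn.Linear(2 * hidden, hidden)  # query for output attention
        self.pool_proj = nn.Linear(hidden, hidden)        # the projection V above

    def forward(self, h, c, enc_in, enc_out):
        # h, c: (N, hidden) per-example decoder state;
        # enc_in, enc_out: (N, T, hidden) encodings of I_i and O_i.
        q = h.unsqueeze(1)                                  # (N, 1, hidden)
        s_i, _ = self.attn_i(q, enc_in, enc_in)             # attend over inputs
        q_o = self.concat_proj(torch.cat([q, s_i], dim=-1))
        s_o, _ = self.attn_o(q_o, enc_out, enc_out)         # attend over outputs
        h, c = self.cell(torch.cat([s_i, s_o], dim=-1).squeeze(1), (h, c))
        pooled = torch.tanh(self.pool_proj(h)).max(dim=0).values  # pool over examples
        return h, c, pooled  # pooled feeds the softmax over program tokens
```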
B EXAMPLES OF GENERATED PROGRAMS AND LATENT CODES

[Figure 9: example specifications with the latent codes and programs generated by the LP model; figure omitted.]